WO2022100712A1 - Method, system and storage medium for displaying virtual props in a real environment picture

Method, system and storage medium for displaying virtual props in a real environment picture

Info

Publication number
WO2022100712A1
Authority
WO
WIPO (PCT)
Prior art keywords: control, virtual, prop, target, real environment
Prior art date
Application number
PCT/CN2021/130445
Other languages
English (en)
French (fr)
Inventor
张晓理
季杰
马潇潇
闵昶
赵婕
张明华
李由
Original Assignee
Oppo广东移动通信有限公司
Priority date
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Priority to EP21891232.7A (published as EP4246287A1)
Publication of WO2022100712A1
Priority to US18/318,575 (published as US20230289049A1)

Classifications

    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012: Head tracking input arrangements
    • G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F3/0482: Interaction with lists of selectable items, e.g. menus
    • G06F3/04815: Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F3/04845: Interaction techniques for the control of specific functions or operations for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/04847: Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F3/0486: Drag-and-drop
    • G06F3/0488: Interaction techniques using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F9/451: Execution arrangements for user interfaces
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T2200/24: Indexing scheme for image data processing or generation involving graphical user interfaces [GUIs]
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Definitions

  • The embodiments of the present application relate to the technical field of human-computer interaction, and in particular to a method, system, and storage medium for displaying virtual props in a real environment picture.
  • Augmented Reality (AR) technology integrates virtual content with the real world: virtual content generated by computer equipment, such as text, images, 3D models, music, and videos, is simulated and then superimposed on and displayed in the real environment.
  • Virtual Reality (VR) technology simulates a virtual environment and virtual content based on data collected from the real environment. Users can perform various operations on AR content or VR content using a head-mounted device.
  • In the related art, the corresponding operation is triggered by function buttons on the head-mounted device: when the head-mounted audio-visual device focuses on a certain virtual object, the virtual object is highlighted, so that the user can make the head-mounted audio-visual device execute the corresponding instruction through a preset operation.
  • Embodiments of the present application provide a method, system and storage medium for displaying virtual props in a real environment picture.
  • the technical solution is as follows:
  • an embodiment of the present application provides a method for displaying virtual props in a real environment picture, the method is used for a head-mounted device, and the method includes:
  • superimposing and displaying a scene editing interface and a virtual ray on the real environment picture, the scene editing interface including a prop selection list, and the prop selection list including a prop selection control corresponding to at least one virtual prop;
  • moving, based on ray adjustment data, the virtual ray to intersect with a target prop selection control in the real environment picture; and
  • in response to a first control instruction, displaying the target virtual prop corresponding to the target prop selection control in the real environment picture.
  • an embodiment of the present application provides a display device for virtual props in a real environment picture, the device comprising:
  • a first display module configured to superimpose and display a scene editing interface and a virtual ray on the real environment picture
  • the scene editing interface includes a prop selection list
  • the prop selection list includes a prop selection control corresponding to at least one virtual prop
  • a first adjustment module configured to move the virtual ray to intersect with the target prop selection control in the real environment picture based on the ray adjustment data
  • the second display module is configured to display the target virtual prop corresponding to the target prop selection control in the real environment screen in response to the first control instruction.
  • An embodiment of the present application provides a system for displaying virtual props in a real environment picture. The system includes a head-mounted device and a control device, and a data connection is established between the head-mounted device and the control device. The control device is used to send control instructions and ray adjustment data to the head-mounted device. The head-mounted device includes a processor and a memory; the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method for displaying virtual props in a real environment picture described in the above aspects.
  • An embodiment of the present application provides a computer-readable storage medium in which at least one piece of program code is stored, and the program code is loaded and executed by a processor to implement the method for displaying virtual props in a real environment picture described in the above aspects.
  • An embodiment of the present application provides a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the head-mounted device or the control device reads the computer instructions from the computer-readable storage medium and executes them, so that the head-mounted device or the control device implements the method for displaying virtual props in a real environment picture provided in the various optional implementations of the above aspects.
  • FIG. 1 is a schematic diagram of a head-mounted device provided by an exemplary embodiment of the present application;
  • FIG. 2 is a schematic diagram of a head-mounted device provided by another exemplary embodiment of the present application;
  • FIG. 3 is a schematic diagram of a display system for virtual props in a real environment picture provided by an exemplary embodiment of the present application;
  • FIG. 4 is a flowchart of a method for displaying virtual props in a real environment picture provided by an exemplary embodiment of the present application;
  • FIG. 5 is a schematic diagram of a scene editing interface and virtual props provided by an exemplary embodiment of the present application;
  • FIG. 6 is a flowchart of a method for displaying virtual props in a real environment picture provided by another exemplary embodiment of the present application;
  • FIG. 7 is a schematic diagram of a moving virtual prop provided by an exemplary embodiment of the present application;
  • FIG. 8 is a schematic diagram of editing a virtual prop provided by an exemplary embodiment of the present application;
  • FIG. 9 is a flowchart of a method for displaying virtual props in a real environment picture provided by another exemplary embodiment of the present application;
  • FIG. 10 is a schematic diagram of shooting controls in different display states provided by an exemplary embodiment of the present application;
  • FIG. 11 is a schematic diagram of shooting preview content provided by an exemplary embodiment of the present application;
  • FIG. 12 is a flowchart of a method for displaying virtual props in a real environment picture provided by another exemplary embodiment of the present application;
  • FIG. 13 is a schematic diagram of opening a scene editing interface through a scene selection control provided by an exemplary embodiment of the present application;
  • FIG. 14 is a schematic diagram of opening a scene selection list through a scene switching control provided by an exemplary embodiment of the present application;
  • FIG. 15 is a structural block diagram of a display device for virtual props in a real environment picture provided by an exemplary embodiment of the present application;
  • FIG. 16 is a structural block diagram of a display system for virtual props in a real environment picture provided by an exemplary embodiment of the present application.
  • In the embodiments of the present application, "plural" refers to two or more.
  • “And/or”, which describes the association relationship of the associated objects, means that there can be three kinds of relationships, for example, A and/or B, which can mean that A exists alone, A and B exist at the same time, and B exists alone.
  • the character “/” generally indicates that the associated objects are an "or" relationship.
  • In the related art, the focus position of the head-mounted device needs to be located on the target virtual prop to be interacted with; when the head-mounted device focuses on a certain virtual prop, that prop is highlighted, so that the user can determine whether the current focus position is on the target virtual prop and then trigger the corresponding function button to make the head-mounted device execute the instruction, thereby realizing the interaction with the target virtual prop.
  • However, the above interaction method based on function buttons is not convenient for users to complete operations quickly, and a certain learning cost is required. The user also needs to observe the display state of each virtual object to determine the current focus position, and must watch for changes in the display state of the virtual objects while controlling the posture of the device in order to move the focus position to the target virtual object and edit it. The learning cost is high, the steps are complicated, and the efficiency is low.
  • the head-mounted device is an AR device, a VR device, or an audio-visual device integrating AR and VR.
  • When the head-mounted device uses AR technology to display multimedia content, it can be roughly divided into three types according to the display principle:
  • One is a head-mounted device provided with a display screen and a camera, which collects pictures of the surrounding real environment through the camera, then superimposes virtual information with the real environment pictures, and displays the superimposed pictures through the display screen.
  • The second is a head-mounted device provided with a projection assembly and a transparent lens: virtual information is projected onto the transparent lens through the projection assembly, and the user can observe the real environment and the virtual information through the transparent lens at the same time, obtaining the experience of editing virtual information in the real environment.
  • The third is a head-mounted device in which the projection component is arranged inside the device: the virtual information is projected directly onto the user's eyeball through the projection component, so that the user likewise gets the feeling of editing virtual information in the real environment.
  • The virtual information includes text, models, web pages, multimedia content (e.g., virtual images, video, audio), and the like.
  • FIG. 1 shows a head-mounted device 110
  • the device 110 is a head-mounted display (Head-Mounted Display, HMD) device
  • The head-mounted device 110 collects the real environment picture through the camera 111 in real time, superimposes the virtual information on the real environment picture, and displays the superimposed picture on the display screen 112.
  • After the user wears the head-mounted device 110 on the head, the user can observe the scene in which the virtual information and the real environment picture are merged through the display screen 112.
  • FIG. 2 shows another head-mounted device 210.
  • the device 210 is a glasses-type device.
  • a projection assembly 211 is provided on the outside of the lens of the head-mounted device 210.
  • The head-mounted device 210 projects virtual information onto the lens 212 through the projection assembly 211; after the user wears the head-mounted device 210, the real environment picture and the virtual information can be observed simultaneously through the lens 212.
  • In the following embodiments, a head-mounted device provided with a display screen and a camera is taken as an example for description.
  • the head-mounted device 310 is provided with a camera assembly 311 and a display screen assembly 312.
  • The camera assembly 311 captures the surrounding real environment pictures in real time, and after the real environment pictures are merged with the AR information, the merged pictures are shown on the display screen assembly 312 inside the head-mounted device 310.
  • the head-mounted device 310 has virtual scene editing and shooting functions, and the user adjusts the content of the scene by changing the device posture of the head-mounted device 310 .
  • the head-mounted device 310 can be used alone to implement various functions, or can be used in conjunction with the control device 320 .
  • In one implementation, the processor in the head-mounted device 310 is responsible for performing most of the data processing tasks in the embodiments of the present application, and the control device 320 sends instructions and data to the head-mounted device 310 based on the user's trigger operations; in another implementation, the control device 320 performs most of the data processing, and the head-mounted device 310 performs screen rendering and the like based on the execution result of the control device 320. This is not limited in the embodiments of the present application.
  • the control device 320 is connected to the head-mounted device 310, and the device type includes at least one of a handle, a smart phone, and a tablet computer.
  • The control device 320 is provided with at least one of a touch area and touch buttons. The head-mounted device 310 indicates the device pointing of the control device 320 through a virtual ray in the real environment picture, so that the user can grasp the device pointing of the control device 320 in real time by observing the position and direction of the virtual ray, and control the head-mounted device 310 to execute corresponding instructions in combination with touch operations on the control device.
  • the head-mounted device 310 synchronously receives the control instructions sent by the control device 320 .
  • the head-mounted device 310 and the control device 320 are connected through a data cable, a wireless fidelity (Wireless Fidelity, WiFi) hotspot, or Bluetooth.
  • the scene editing interface and the virtual rays are superimposed and displayed on the real environment screen, the scene editing interface includes a prop selection list, and the prop selection list includes the prop selection controls corresponding to at least one virtual prop;
  • the target virtual prop corresponding to the target prop selection control is displayed in the real environment picture.
  • the first control instruction includes a prop selection instruction and a prop placement instruction
  • the target virtual prop is displayed at the placement position indicated by the prop placement instruction.
  • the method further includes:
  • the virtual ray and the added props are moved.
  • the method further includes:
  • the target virtual prop is edited based on the editing mode corresponding to the target editing control.
  • the prop editing control includes at least one of a delete control, a zoom-in control, and a zoom-out control;
  • the target virtual prop is edited based on the editing mode corresponding to the target editing control, including:
  • in response to the target editing control being a zoom-in control and the fourth control instruction being received, the target virtual object is enlarged by a preset magnification factor;
  • in response to the target editing control being a zoom-out control and the fourth control instruction being received, the target virtual object is reduced by a preset reduction ratio.
  • the method further includes:
  • the photographing of the real environment picture and the virtual props includes:
  • the shooting preview content is displayed superimposed on the scene editing interface.
  • the target shooting mode is determined based on the instruction type of the fifth control instruction, including:
  • the shooting mode is video recording, and the recording progress is displayed through the shooting control.
  • a data connection is established between the head-mounted device and the control device, the control device is used to send ray adjustment data and control instructions to the head-mounted device, and the ray direction of the virtual ray is the device pointing of the control device.
  • before the scene editing interface and the virtual ray are superimposed and displayed on the real environment picture, the method further includes:
  • a scene selection interface is displayed superimposed on the real environment picture, and the scene selection interface includes at least one theme scene selection control;
  • the scene editing interface and virtual rays are superimposed on the real environment screen, including:
  • the virtual ray and the scene editing interface corresponding to the target scene selection control are superimposed and displayed on the real environment picture.
  • a scene switching control is displayed in the scene editing interface
  • the method further includes:
  • a scene selection list is displayed superimposed on the real environment screen, and the scene selection list includes at least one theme scene selection control;
  • the scene editing interface corresponding to the target scene selection control is superimposed and displayed on the real environment picture.
  • FIG. 4 shows a flowchart of a method for displaying virtual props in a real environment picture provided by an exemplary embodiment of the present application. This embodiment is described by taking the method for a head-mounted device as an example, and the method includes the following steps:
  • Step 401, a scene editing interface and a virtual ray are superimposed and displayed on the real environment picture.
  • the scene editing interface includes a prop selection list, and the prop selection list includes prop selection controls corresponding to at least one virtual prop.
  • the virtual ray is used to indicate the trigger position of the control operation, and the head-mounted audio-visual device acquires data including the pointing of the virtual ray in real time, and displays the virtual ray in the real environment picture.
  • The user controls the direction of the virtual ray in a preset manner, so that the head-mounted device acquires the ray adjustment data. For example, the head-mounted device acquires the user's line of sight based on eyeball recognition and uses the direction of the user's line of sight as the ray direction of the virtual ray.
  • Alternatively, the user can change the position, direction, etc. of the head-mounted device by turning the head, and the head-mounted device adjusts the direction of the virtual ray synchronously, so as to achieve the effect of turning the head to control the virtual ray.
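  • As an illustration of how a device pose could be turned into a ray direction, the following Kotlin sketch maps yaw and pitch angles to a unit direction vector; the function and type names, and the choice of -Z as the forward axis, are assumptions for illustration rather than part of the patent.

```kotlin
import kotlin.math.cos
import kotlin.math.sin

// Hypothetical ray model: an origin plus a unit direction vector in scene coordinates.
data class VirtualRay(val origin: FloatArray, val direction: FloatArray)

// Convert a head (or controller) pose given as yaw/pitch in radians into a unit
// direction vector, so the displayed virtual ray follows the device orientation.
fun rayFromPose(yaw: Float, pitch: Float, origin: FloatArray = floatArrayOf(0f, 0f, 0f)): VirtualRay {
    val dir = floatArrayOf(
        cos(pitch) * sin(yaw),    // x
        sin(pitch),               // y
        -cos(pitch) * cos(yaw)    // z: forward taken as -Z, a common graphics convention
    )
    return VirtualRay(origin, dir)
}
```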
  • the head-mounted audio-visual device After the head-mounted audio-visual device is turned on, the real-time environment picture is collected in real time, and the virtual information to be displayed is determined according to the user input.
  • the head-mounted audio-visual device runs a camera application, and the above-mentioned virtual information is a scene editing interface and a virtual ray.
  • The head-mounted audio-visual device captures the real environment picture directly in front of the device through the camera assembly and fuses the scene editing interface and the virtual ray into the real environment picture for display through the display screen assembly, or directly displays the scene editing interface. For example, the display screen assembly is located at the front of the head-mounted audio-visual device, so that the user can observe the scene editing interface and the virtual ray by looking straight ahead after wearing the head-mounted audio-visual device.
  • the camera application in the embodiment of the present application has a scene editing function.
  • the user uses the virtual props included in the prop selection list to form a virtual scene, and the head-mounted audio-visual device fuses and displays the virtual props and the real environment picture.
  • Users can create and edit virtual scenes with head-mounted audio-visual equipment, and shoot virtual scenes and real environment pictures, instead of only shooting preset virtual scenes and real-environment pictures displayed by head-mounted audio-visual equipment.
  • the head-mounted audio-visual device displays the scene editing interface at a preset position relative to the real environment picture, for example, the head-mounted audio-visual device displays the scene editing interface in the left area of the display.
  • FIG. 5 shows a display screen of a head-mounted audio-visual device.
  • the head-mounted audio-visual device displays a scene editing interface 502 , virtual props 504 and virtual rays 505 superimposed on the real environment screen 501 , wherein the scene editing interface 502 includes a prop selection control 503 corresponding to at least one virtual prop.
  • Other functional controls are also displayed in the scene editing interface 502, such as a return control, which is used to return to the previous virtual interface; a flip-up control and a flip-down control, which are used to display different prop selection controls; and a clear control, which is used to clear, in one operation, the virtual props that have already been placed in the current real environment picture.
  • Step 402 based on the ray adjustment data, move the virtual ray to intersect with the target prop selection control in the real environment picture.
  • the ray adjustment data includes the ray direction of the virtual ray.
  • The head-mounted device acquires the ray adjustment data based on information such as user operations it receives or user operations sent by other devices, and moves the virtual ray in the real environment picture based on the acquired ray adjustment data. For example, the head-mounted device performs eyeball recognition in real time, captures the user's gaze direction, and determines the gaze direction as the ray direction of the virtual ray; in this case the ray adjustment data is obtained from changes in the user's gaze direction, and the user can turn the eyeballs to control the direction of the virtual ray in the real environment picture.
  • The head-mounted device uses the intersection of the virtual ray and the target prop selection control as the condition for selecting that control. That is, when the user wants to add the target virtual prop to the real environment picture, the user needs to change the ray direction of the virtual ray so that the virtual ray points to and intersects with the target prop selection control, and then, through a preset user operation, the head-mounted device displays the target virtual prop corresponding to the target prop selection control in the real environment picture according to the instruction.
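  • A minimal sketch of the "virtual ray intersects the target prop selection control" condition, assuming the scene editing interface lies on a plane z = planeZ and each control occupies an axis-aligned rectangle on that plane; all names are illustrative.

```kotlin
// Rectangle occupied by a control on the interface plane, in interface coordinates.
data class ControlBounds(val minX: Float, val maxX: Float, val minY: Float, val maxY: Float)

// Returns true when the ray (origin + t * direction) hits the plane z = planeZ
// inside the control's rectangle — the "intersects the target control" condition.
fun rayHitsControl(
    origin: FloatArray, direction: FloatArray,   // virtual ray in scene coordinates
    planeZ: Float,                               // plane of the scene editing interface
    bounds: ControlBounds                        // rectangle of the prop selection control
): Boolean {
    val dz = direction[2]
    if (dz == 0f) return false                   // ray parallel to the interface plane
    val t = (planeZ - origin[2]) / dz
    if (t < 0f) return false                     // interface is behind the ray origin
    val x = origin[0] + t * direction[0]
    val y = origin[1] + t * direction[1]
    return x in bounds.minX..bounds.maxX && y in bounds.minY..bounds.maxY
}
```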
  • Step 403 in response to the first control instruction, display the target virtual prop corresponding to the target prop selection control in the real environment picture.
  • Optionally, the head-mounted device is provided with a touch area, and the user causes the head-mounted device to receive the first control instruction through a touch operation acting on the touch area; alternatively, the first control instruction is received when a preset gesture is detected, and the like.
  • The first control instruction is an instruction used to instruct the head-mounted device to perform a user operation; it includes the operation type that triggered it, operation data, and the like. The head-mounted device determines the instruction to be executed based on the specific information contained in the first control instruction (operation type, operation data, etc.) and the virtual object (control, virtual prop, etc.) pointed to by the current virtual ray.
  • When the virtual ray intersects with the target prop selection control and the first control instruction is received, the head-mounted device adds the target virtual prop corresponding to the target prop selection control to the real environment picture. As shown in FIG. 5, the virtual ray 505 intersects with the prop selection control 503; when the head-mounted device receives the first control instruction, it displays the virtual prop 504 corresponding to the prop selection control 503 in the real environment picture 501 according to the first control instruction.
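  • The dispatch described above (operation type plus the object currently pointed to by the virtual ray) could look roughly like the following sketch; the concrete mapping of operations to actions is an assumption, since the patent leaves the operation types open.

```kotlin
// Hypothetical instruction and target types; the patent only specifies that an
// instruction carries an operation type and that dispatch also depends on the
// object currently pointed to by the virtual ray.
enum class OperationType { CLICK, DOUBLE_CLICK, LONG_PRESS }

sealed class PointedObject {
    data class PropSelectionControl(val propId: String) : PointedObject()
    data class PlacedProp(val propId: String) : PointedObject()
    object None : PointedObject()
}

// Decide what the head-mounted device should do for a received control instruction.
fun handleControlInstruction(op: OperationType, pointed: PointedObject): String = when {
    pointed is PointedObject.PropSelectionControl && op == OperationType.LONG_PRESS ->
        "attach prop ${pointed.propId} to the ray intersection"    // prop selection
    pointed is PointedObject.PlacedProp && op == OperationType.DOUBLE_CLICK ->
        "show editing controls for ${pointed.propId}"               // enter editing state
    else -> "ignore"
}
```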
  • In some embodiments, a data connection is established between the head-mounted device and the control device; the control device is used to send the ray adjustment data and the control instructions to the head-mounted device, and the ray direction of the virtual ray is the device pointing of the control device.
  • The control device sends its own device pointing data (for example, the angles of the control device relative to the x-axis, y-axis, and z-axis in space) to the head-mounted device in real time, and the head-mounted device obtains the ray direction of the virtual ray based on the device pointing data and displays the virtual ray in the real environment picture, so that the user wearing the head-mounted device can grasp the device pointing of the control device by observing the virtual ray and clearly see which virtual object the control device points to.
  • To sum up, in the embodiments of the present application, prop selection controls corresponding to virtual props are provided to the user, so that the user can select a target virtual prop as needed and add it to an appropriate position in the real environment picture, freely creating a virtual scene in the real environment. By displaying the virtual ray in real time and moving it based on the ray adjustment data to indicate the trigger position of the control operation, the user can grasp the trigger position in real time by observing the position and direction of the virtual ray; the user only needs to make the virtual ray intersect with the virtual prop or control to be operated in order to quickly control the virtual props through the head-mounted device, which improves the control efficiency and operation accuracy of the virtual props.
  • FIG. 6 shows a flowchart of a method for displaying virtual props in a real environment picture provided by another exemplary embodiment of the present application. This embodiment is described by taking the method for a head-mounted device as an example, and the method includes the following steps:
  • Step 601, a scene editing interface and a virtual ray are superimposed and displayed on the real environment picture.
  • the scene editing interface includes a prop selection list, and the prop selection list includes prop selection controls corresponding to at least one virtual prop.
  • Step 602 based on the ray adjustment data, move the virtual ray to intersect with the target prop selection control in the real environment picture.
  • For specific implementations of steps 601 to 602, reference may be made to the foregoing steps 401 to 402, and details are not described herein again in this embodiment of the present application.
  • Step 603 in response to the virtual ray intersecting the target prop selection control, highlight the target prop selection control.
  • the scene editing interface usually includes multiple prop selection controls and other types of controls.
  • When the virtual ray intersects the target prop selection control, the head-mounted device highlights the target prop selection control, so that the user knows which prop selection control is currently selected by the virtual ray and can operate quickly when a virtual prop needs to be added through this control, or adjust the direction of the virtual ray in time when a different prop is wanted. In addition, highlighting the target prop selection control as soon as the virtual ray touches its edge immediately prompts the user that the intersection has occurred, without requiring the user to carefully check the position of the intersection between the virtual ray and the scene editing interface, confirm whether they intersect, or keep moving the virtual ray until the intersection point lies at the center of the prop selection control to ensure the intersection.
  • The highlighting manner includes at least one of highlight display, magnified display, color change, and the like.
  • the prop selection control includes a triggerable state, a selected state and a non-triggerable state, when the virtual ray intersects the target prop selection control (the intersection of the virtual ray and the scene editing interface is located at the edge of the target prop selection control or Internal), and when the item selection control is in the triggerable state, highlight the target item selection control to switch the target item selection control from the triggerable state to the selected state.
  • When a virtual prop cannot be added, for example because the application version has not been updated, the corresponding prop selection control is in the non-triggerable state.
  • the display state of the prop selection control in the non-triggerable state is different from the prop selection control in the triggerable state.
  • For example, a prop selection control in the triggerable state contains a thumbnail of the corresponding virtual prop, while a prop selection control in the non-triggerable state contains a shadow map of the corresponding virtual prop, etc.; this is not limited in this embodiment of the present application.
  • As shown in FIG. 5, the virtual ray 505 intersects the prop selection control 503, so the prop selection control 503 is in the selected state and the head-mounted device magnifies and highlights it; the prop selection control 506 is in the non-triggerable state, so the head-mounted device only displays the shadow map of the corresponding virtual prop in this control; the other prop selection controls are in the triggerable state and display thumbnails of the corresponding virtual props.
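  • A possible way to model the three control states and the highlight-on-intersection behaviour is sketched below; the display rules mirror the FIG. 5 example, and the type names are illustrative.

```kotlin
// The patent describes three states for a prop selection control.
enum class ControlState { TRIGGERABLE, SELECTED, NON_TRIGGERABLE }

data class PropSelectionControlState(val propId: String, var state: ControlState)

// Called with the latest ray-intersection result: a triggerable control that the
// ray touches becomes SELECTED (highlighted); a control no longer touched falls
// back to TRIGGERABLE; NON_TRIGGERABLE controls never change.
fun updateControlState(control: PropSelectionControlState, rayIntersects: Boolean) {
    control.state = when {
        control.state == ControlState.NON_TRIGGERABLE -> ControlState.NON_TRIGGERABLE
        rayIntersects -> ControlState.SELECTED
        else -> ControlState.TRIGGERABLE
    }
}
```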
  • Step 604 in response to the prop selection instruction, display the target virtual prop at the intersection of the virtual ray and the real environment picture.
  • When receiving the prop selection instruction, the head-mounted device displays the target virtual prop at the intersection of the virtual ray and the real environment picture, presenting the display effect of the target virtual prop being "absorbed" onto the virtual ray, so that the target virtual prop can move with the virtual ray.
  • the target virtual prop can automatically move with the virtual ray based on the ray adjustment data, so that the user can move the target virtual prop to any position in the real environment picture by controlling the virtual ray.
  • The prop selection instruction is an instruction generated by the control device when it receives a prop selection operation. The prop selection operation is a single-click operation, a double-click operation, a long-press operation, or the like, which is not limited in this embodiment of the present application.
  • For example, when the head-mounted device receives control instructions and ray adjustment data through the control device and the prop selection operation is a long-press operation, the user can control the virtual ray to intersect with the target prop selection control and perform a long-press operation in the touch area of the control device, so that the head-mounted device displays the target virtual prop at the intersection of the virtual ray and the real environment picture. Specifically, when the control device receives the long-press operation, it sends the corresponding operation instruction to the head-mounted device; the head-mounted device determines, according to the operation type indicated by the instruction and the object pointed to by the current virtual ray, that the target virtual prop needs to be displayed at the intersection of the virtual ray and the real environment picture, and displays it accordingly.
  • Step 605 move the virtual ray and the target virtual prop based on the ray adjustment data after the prop selection instruction.
  • the head-mounted device receives the ray adjustment data through the user's operation, and the head-mounted device moves the virtual ray and the target virtual prop based on the ray adjustment data, and displays it in real time.
  • For example, when the head-mounted device receives the control instruction and the ray adjustment data through the control device, the user makes the head-mounted device select the target virtual prop through a long-press operation on the control device, that is, the target virtual prop is displayed at the intersection of the virtual ray and the real environment picture. While the long-press operation has not ended, the user changes the device posture of the control device, for example by moving or rotating it, and the control device sends ray adjustment data including the moving direction and moving distance. The head-mounted device determines the moving direction of the virtual ray and the target virtual prop based on the moving direction of the control device, and determines the moving distance of the target virtual prop based on the moving distance of the control device and a distance mapping relationship, where the distance mapping relationship is the relationship between the moving distance of the control device and the mapped distance in the real environment picture.
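  • A minimal sketch of the distance mapping relationship, assuming a simple linear scale factor between the control device's physical displacement and the prop's displacement in the real environment picture; the factor value is an assumption, as the patent only states that such a mapping exists.

```kotlin
// Assumed linear mapping: 1 cm of controller motion moves the prop 5 cm in the scene.
const val DISTANCE_MAPPING_FACTOR = 5.0f

// Scale the control device's displacement (x, y, z) into the prop's displacement
// in the real environment picture, keeping the moving direction unchanged.
fun mapPropDisplacement(controllerDelta: FloatArray): FloatArray =
    FloatArray(3) { i -> controllerDelta[i] * DISTANCE_MAPPING_FACTOR }
```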
  • the user can add the target virtual prop through the head-mounted device alone, and the head-mounted device obtains the device posture based on its own sensor, and then determines the ray adjustment data when the device posture changes.
  • For example, the head-mounted device is provided with a touch area, and the user makes the target virtual prop "absorb" onto the intersection of the virtual ray and the real environment picture through a touch operation (such as a long-press operation) acting on the touch area; while the touch operation has not ended, the user adjusts the device posture of the head-mounted device so that the virtual ray and the target virtual prop move with the user's head movement.
  • Step 606 in response to the prop placement instruction, display the target virtual prop at the placement position indicated by the prop placement instruction.
  • the head-mounted device when the head-mounted device receives the prop placement instruction, the target virtual prop is separated from the virtual ray and fixedly placed at the current position.
  • the prop placement instruction is an instruction generated by the head-mounted device or other device when receiving the prop placement operation.
  • Optionally, when the operation that triggered the prop selection instruction ends, it is determined that the prop placement instruction is received; that is, the user stops having the target virtual prop displayed at the intersection of the virtual ray and the real environment picture, so as to place the target virtual prop.
  • the head-mounted device generates a prop placement instruction based on a trigger operation received by itself or based on a detected user gesture, or the head-mounted device receives a prop placement instruction sent by other devices, which is not limited in this embodiment of the present application.
  • For example, when the head-mounted device receives control instructions and ray adjustment data through the control device, if the control device receives a long-press operation it sends a prop selection instruction to the head-mounted device, and when the control device detects that the long-press operation has ended it sends a prop placement instruction to the head-mounted device. In other words, when adding a virtual prop to the real environment picture, the user moves the virtual prop by long-pressing and moving the control device, and once the virtual prop reaches the desired position, simply releasing the press fixes the virtual prop at the current position.
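  • The long-press interaction of steps 604 to 606 could be organised as a small session object, sketched below under the assumption that press start, ray movement, and press end are reported as separate callbacks; all names are illustrative.

```kotlin
// Press while the ray intersects a prop selection control: attach the prop to the ray.
// Move the ray: the attached prop follows the intersection point.
// Release: send the placement instruction and fix the prop at its current position.
class PropPlacementSession {
    var attachedPropId: String? = null
        private set

    fun onLongPressStart(pointedControlPropId: String?) {
        if (pointedControlPropId != null) attachedPropId = pointedControlPropId  // prop selection instruction
    }

    fun onRayMoved(intersection: FloatArray, moveProp: (String, FloatArray) -> Unit) {
        attachedPropId?.let { moveProp(it, intersection) }   // prop follows the ray intersection
    }

    fun onLongPressEnd(placeProp: (String) -> Unit) {
        attachedPropId?.let { placeProp(it) }                // prop placement instruction: fix in place
        attachedPropId = null
    }
}
```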
  • FIG. 7 shows a schematic diagram of the process of adding and placing a target virtual prop.
  • The virtual ray 702 intersects with the prop selection control 701; when the head-mounted device receives the prop selection instruction sent by the control device, the virtual prop 703 is displayed at the intersection of the virtual ray 702 and the real environment picture, and the virtual prop 703 and the virtual ray 702 are moved based on the ray adjustment data sent by the control device.
  • The dotted lines in the figure represent the virtual prop 703 and the virtual ray 702 during the moving process. When the virtual prop 703 and the virtual ray 702 have moved to the position of the solid lines in FIG. 7 and the head-mounted device receives the prop placement instruction sent by the control device, it displays the virtual prop 703 at the display position indicated by the ray adjustment data and fixes the virtual prop 703 at this position; when the virtual ray 702 subsequently moves, the virtual prop 703 does not move with it.
  • In order to help users quickly grasp the interaction method for adding virtual props, the head-mounted device superimposes and displays operation prompt information on the real environment picture, such as "long press to drag the model, release to place" in FIG. 7.
  • Step 607 based on the ray adjustment data, move the virtual ray in the real environment picture to intersect with the added props in the real environment picture.
  • Step 608 in response to the second control instruction, display the added prop at the intersection of the virtual ray and the real environment picture.
  • the user can move the virtual props when adding the virtual props through the head-mounted device, and can also move the virtual props that have been placed in the real environment picture (that is, the added props).
  • the added prop is displayed at the intersection of the virtual ray and the real environment picture.
  • the second control instruction is an instruction generated by the head-mounted device or other device when receiving the prop moving operation.
  • the prop moving operation is a single-click operation, a double-click operation, a long-press operation or a preset gesture, etc.
  • the wearable device generates the second control instruction based on the trigger operation received by itself or based on the detected user gesture, or the head mounted device receives the second control instruction sent by other devices, which is not limited in this embodiment of the present application.
  • Step 609 move the virtual ray and the added props based on the ray adjustment data after the second control instruction.
  • the head-mounted device moves and displays the two based on the received ray adjustment data.
  • the ray adjustment data includes moving direction and moving distance, and the like.
  • Step 610 based on the ray adjustment data, move the virtual ray to intersect with the target virtual prop in the real environment picture.
  • Step 611 in response to the third control instruction, display the editing control corresponding to the target virtual prop.
  • the head-mounted device can also perform editing operations on the virtual props, and the user can achieve the desired display effect by editing the virtual props.
  • When the virtual ray intersects with the target virtual prop and the third control instruction is received, the editing control corresponding to the target virtual prop is displayed; that is, the user controls the virtual ray so that it intersects with the target virtual prop and then performs a prop editing operation, so that the head-mounted device receives the third control instruction and puts the virtual prop into an editing state.
  • the prop editing operation is a single-click operation, a double-click operation, or a long-press operation, etc., which is not limited in this embodiment of the present application.
  • As shown in FIG. 8, the head-mounted device receives control instructions and ray adjustment data through the control device; when the virtual ray 802 intersects with the virtual prop 801 and the head-mounted device receives the third control instruction sent by the control device, it displays the editing control corresponding to the virtual prop 801.
  • multiple virtual props in the real environment screen can be in the editing state at the same time, and each virtual prop has corresponding editing controls.
  • The relative position of the editing control corresponding to the target virtual prop and the target virtual prop is fixed (for example, the editing control is displayed on the front of the target virtual prop), and the editing control moves with the movement of the target virtual prop, which makes it convenient for users to flexibly edit multiple virtual props and simplifies user operations.
  • the editing state of the target virtual prop is canceled, that is, the editing control corresponding to the target virtual prop is canceled.
  • Step 612 based on the ray adjustment data, move the virtual ray to intersect the target edit control.
  • Step 613 in response to the fourth control instruction, edit the target virtual prop based on the editing mode corresponding to the target editing control.
  • The target virtual prop corresponds to editing controls with different functions; the user controls the virtual ray to intersect with the target editing control, and the target virtual prop is then edited according to the editing mode corresponding to that editing control.
  • the edit control includes at least one of a delete control, a zoom-in control, and a zoom-out control
  • step 613 includes the following steps:
  • Step 613a in response to the target edit control being the delete control and receiving the fourth control instruction, delete the target virtual object.
  • the target virtual object is deleted from the real environment picture.
  • The fourth control instruction is an instruction generated by the control device when it receives a trigger operation on the editing control; the trigger operation may be a single-click operation, a double-click operation, a long-press operation, or the like, which is not limited in the embodiments of the present application.
  • Step 613b in response to the target editing control being a magnifying control and receiving the fourth control instruction, magnify the target virtual object to a preset magnification factor.
  • In some embodiments, the fourth control instruction is an instruction generated when the control device receives a trigger operation on the editing control, and the trigger operation may include different operation types. For example, when the trigger operation is a single-click operation, the head-mounted device triggers the zoom-in control 804 once based on the fourth control instruction, that is, performs one zoom-in operation on the target virtual prop 801; when the trigger operation is a long-press operation, the head-mounted device continuously triggers the zoom-in control 804 based on the fourth control instruction, that is, continuously zooms in on the target virtual prop 801. When the long-press operation stops, the control device sends an editing end instruction to the head-mounted device, so that the head-mounted device stops the continuous zoom-in.
  • Step 613c in response to the target editing control being a zoom-out control and receiving the fourth control instruction, zoom out the target virtual object to a preset zoom-out ratio.
  • the target virtual object is zoomed out to a preset zoom factor.
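  • A sketch of step 613's dispatch over the delete, zoom-in, and zoom-out controls; the scale step per trigger is an assumption, since the patent only refers to preset magnification and reduction ratios.

```kotlin
enum class EditControl { DELETE, ZOOM_IN, ZOOM_OUT }

data class EditableProp(val id: String, var scale: Float = 1.0f, var deleted: Boolean = false)

// Apply the editing mode corresponding to the editing control intersected by the
// virtual ray when the fourth control instruction is received.
fun applyEdit(prop: EditableProp, control: EditControl) {
    when (control) {
        EditControl.DELETE -> prop.deleted = true     // remove from the real environment picture
        EditControl.ZOOM_IN -> prop.scale *= 1.25f    // enlarge by a preset magnification (assumed 1.25x)
        EditControl.ZOOM_OUT -> prop.scale /= 1.25f   // shrink by a preset reduction ratio (assumed 1.25x)
    }
}
```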
  • To sum up, in this embodiment, when the virtual ray intersects the target prop selection control, the target prop selection control is highlighted, so that the user knows which prop selection control is currently selected by the virtual ray without carefully checking the positions of the virtual ray and the target prop selection control to ensure the intersection, which is convenient for the user to operate. When the virtual ray intersects the target prop selection control and the prop selection instruction is received, the head-mounted device displays the target virtual prop at the intersection of the virtual ray and the real environment picture, and the target virtual prop can then be moved simply by changing the ray direction of the virtual ray, which improves the control efficiency and operation accuracy of the virtual props.
  • FIG. 9 shows a flowchart of a method for displaying virtual props in a real environment picture provided by another exemplary embodiment of the present application. This embodiment is described by taking the method for a head-mounted device as an example. After step 403, the method further includes the following steps:
  • Step 404 based on the ray adjustment data, move the virtual ray in the real environment picture to intersect with the shooting controls displayed superimposed on the real environment picture.
  • Step 405 in response to the fifth control instruction, photograph the real environment picture and the virtual props.
  • The head-mounted device uses the intersection of the virtual ray and the shooting control as the condition for selecting the shooting control. That is, when the user needs to shoot, the user first changes the ray direction of the virtual ray so that the virtual ray points to and intersects with the shooting control, and then performs a user operation that causes the head-mounted device to receive the fifth control instruction, so that the head-mounted device shoots the real environment picture and the virtual props.
  • the content such as the image or video obtained by shooting is automatically stored in a preset storage location, so that the user can transmit the captured content to other devices.
  • step 405 includes the following steps:
  • Step 405a in response to the fifth control instruction, switch the shooting control from the default display state to the shooting display state.
  • the head-mounted device In order to facilitate the user to confirm whether the current head-mounted device is shooting, when the virtual ray intersects the shooting control (the intersection of the virtual ray and the scene editing interface is located at the edge or inside of the shooting control) and receives the fifth control instruction, the head-mounted device will The shooting controls switch from the default display state to the shooting display state.
  • The shooting control in the shooting display state differs from the shooting control in the default display state in at least one element such as control graphics, control size, control display color, or control display effect. During shooting, the shooting control remains in the shooting display state until the shooting ends.
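  • The default/shooting display-state switch of step 405a, including the recording progress shown through the shooting control, could be tracked roughly as follows; the field names are assumptions.

```kotlin
// Minimal state backing the shooting control's display: whether a capture is in
// progress (shooting display state vs. default display state) and, for video,
// the recording progress to show to the user.
data class ShootingControlState(
    var isShooting: Boolean = false,
    var recordingProgressMs: Long = 0L
)

fun onCaptureStarted(control: ShootingControlState) { control.isShooting = true }

fun onRecordingTick(control: ShootingControlState, elapsedMs: Long) {
    if (control.isShooting) control.recordingProgressMs = elapsedMs
}

fun onCaptureFinished(control: ShootingControlState) {
    control.isShooting = false          // restore the default display state
    control.recordingProgressMs = 0L
}
```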
  • Step 405b determining the target shooting mode based on the instruction type of the fifth control instruction.
  • In a possible implementation, the shooting modes of the head-mounted device include shooting an image and recording a video. In order to distinguish the interaction modes of the two shooting modes and allow the user to quickly select the target shooting mode as needed, the trigger operations corresponding to the two shooting modes are different, and so are the types of the corresponding fifth control instructions. The head-mounted device or the control device generates the corresponding fifth control instruction based on the operation type of the received trigger operation, and the head-mounted device determines the target shooting mode based on the instruction type of the fifth control instruction; for example, a single-click operation on the shooting control triggers image shooting, and a long-press operation on the shooting control triggers video recording.
  • Step 405b includes the following steps:
  • Step 1 in response to the fifth control instruction being a photographing instruction, determine that the photographing mode is photographing an image.
  • the photographing instruction is an instruction generated when the control device receives the photographing operation.
  • When the head-mounted device determines that the virtual ray intersects the shooting control and determines, based on the fifth control instruction, that the photographing operation has been received, the shooting mode is determined to be shooting an image.
  • the photographing operation includes a single-click operation, a double-click operation, a long-press operation, or a preset gesture, and the like.
  • Step 2 in response to the fifth control instruction being a video recording instruction, determine that the shooting mode is video recording, and display the recording progress through the shooting control.
  • the video recording instruction is an instruction generated when the control device receives the video recording operation.
  • When the head-mounted device determines that the virtual ray intersects the shooting control and determines, based on the fifth control instruction, that the control device has received the video recording operation, the shooting mode is determined to be recording a video.
  • the recording operation includes a single-click operation, a double-click operation, a long-press operation, or a preset gesture.
  • The operation types of the photographing operation and the video recording operation are different; for example, the photographing operation is a single-click operation, the video recording operation is a long-press operation, and the recording duration is equal to the pressing duration of the long-press operation. A sketch of this mapping from instruction type to shooting mode follows.
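  • Read together, steps 1 and 2 say that the instruction type alone selects the target shooting mode, and that for recording the press duration doubles as the recording duration. The Kotlin fragment below sketches that mapping; the sealed-class names are assumptions chosen for illustration.

```kotlin
// Hypothetical mapping from the fifth control instruction to the target shooting mode.
sealed class FifthControlInstruction {
    object Photograph : FifthControlInstruction()                                  // e.g. single click
    data class RecordVideo(val pressDurationMs: Long) : FifthControlInstruction()  // e.g. long press
}

sealed class ShootingMode {
    object CaptureImage : ShootingMode()
    data class RecordVideo(val durationMs: Long) : ShootingMode()
}

fun determineTargetShootingMode(instruction: FifthControlInstruction): ShootingMode =
    when (instruction) {
        is FifthControlInstruction.Photograph -> ShootingMode.CaptureImage
        is FifthControlInstruction.RecordVideo ->
            // recording duration equals the pressing duration of the long-press operation
            ShootingMode.RecordVideo(durationMs = instruction.pressDurationMs)
    }
```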
  • Illustratively, as shown in FIG. 10, the shooting display states of the shooting control differ between shooting modes. When the shooting mode is shooting an image, the head-mounted device switches the shooting control 1001a in the default display state to the shooting control 1001b in the shooting display state at the moment of shooting, and restores it to the default display state at the moment the shooting ends; when the shooting mode is recording a video, the head-mounted device switches the shooting control 1001a in the default display state to the shooting control 1001c in the shooting display state, and during recording the shooting control displays a recording progress bar and the shooting duration.
  • Step 405c using the target shooting mode to shoot the real environment picture and the virtual props.
  • In a possible implementation, the head-mounted device uses the target shooting mode to shoot the real environment picture and the virtual props; that is, the captured image or video contains both the real environment picture and the virtual props, but does not contain the virtual ray, the scene editing interface, or other controls and virtual content. One way to realize this layer filtering is sketched below.
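  • One way to realize "capture the real environment picture and the virtual props, but not the UI" is to tag each rendered layer and composite only the camera frame and prop layers into the saved output. The snippet below is an illustrative filter only; the layer names and the `RenderLayer` type are assumptions.

```kotlin
// Illustrative layer filtering for the captured image or video frame (names are hypothetical).
enum class LayerKind { CAMERA_FRAME, VIRTUAL_PROP, VIRTUAL_RAY, UI_CONTROL }

data class RenderLayer(val kind: LayerKind, val pixels: IntArray)

/** Keeps only the layers that should appear in the saved photo or recorded frame. */
fun layersForCapture(allLayers: List<RenderLayer>): List<RenderLayer> =
    allLayers.filter { it.kind == LayerKind.CAMERA_FRAME || it.kind == LayerKind.VIRTUAL_PROP }
```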
  • Step 405d superimposing and displaying the shooting preview content on the scene editing interface.
  • In order to facilitate the user in checking the shooting effect in time, after the shooting is completed the head-mounted device superimposes and displays the shooting preview content on the scene editing interface, for example, superimposes a preview image on the scene editing interface.
  • In a possible implementation, the head-mounted device displays the shooting preview content for a preset duration; after the display duration of the shooting preview content reaches the preset duration, it automatically cancels the display of the shooting preview content and returns to the scene editing interface, so the user does not need to manually close the preview content, which simplifies user operations. A timer-based sketch of this behavior follows.
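  • The automatic dismissal can be modeled as a one-shot timer started when the preview is shown. The coroutine-based sketch below is one possible implementation; the preset duration of three seconds is an arbitrary assumption, as are the class and property names.

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch

// Hypothetical preview controller: shows the capture preview over the scene editing
// interface and hides it automatically once the preset display duration has elapsed.
class PreviewController(
    private val scope: CoroutineScope,
    private val presetDurationMs: Long = 3_000L   // assumed preset duration
) {
    var previewVisible: Boolean = false
        private set

    fun showPreview() {
        previewVisible = true
        scope.launch {
            delay(presetDurationMs)   // wait out the preset display duration
            previewVisible = false    // return to the scene editing interface
        }
    }
}
```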
  • Illustratively, as shown in FIG. 11, when the virtual ray 1101 intersects the shooting control 1102 and a photographing instruction is received, the head-mounted device shoots the real environment picture and the virtual props, obtains the corresponding image, and superimposes and displays the shooting preview content 1104 on the scene editing interface 1103.
  • In the embodiments of the present application, when the virtual ray intersects the shooting control and the fifth control instruction is received, the head-mounted device switches the shooting control from the default display state to the shooting display state, which makes it convenient for the user to confirm whether the head-mounted device is currently shooting; the interaction modes corresponding to different shooting modes are distinguished, so that the user can quickly select the target shooting mode as needed; and after the shooting is completed, the shooting preview content is superimposed and displayed on the scene editing interface, so that the user can check the shooting effect in time.
  • FIG. 12 shows a flowchart of a method for displaying virtual props in a real environment picture provided by another exemplary embodiment of the present application. This embodiment is described by taking the application of the method to a head-mounted device as an example, and the method includes the following steps:
  • Step 1201 superimposing and displaying a scene selection interface on the real environment picture, where the scene selection interface includes scene selection controls of at least one theme.
  • In a possible implementation, after the head-mounted device starts the camera application, it first superimposes and displays a scene selection interface on the real environment picture. The scene selection interface includes scene selection controls of at least one theme, prompting the user to select one of the themes for scene setup and the shooting experience; the virtual props included in scenes of different themes are different.
  • Step 1202 based on the ray adjustment data, move the virtual ray to intersect with the target scene selection control in the real environment picture.
  • Step 1203 in response to the sixth control instruction, superimpose and display the virtual ray and the scene editing interface corresponding to the target scene selection control on the real environment picture.
  • In a possible implementation, the sixth control instruction is an instruction generated when the control device or the head-mounted device receives a scene selection operation. When the head-mounted device determines that the virtual ray intersects the target scene selection control and determines, based on the sixth control instruction, that the scene selection operation has been received, the virtual ray and the scene editing interface corresponding to the target scene selection control are superimposed and displayed on the real environment picture.
  • the scene selection operation includes a single-click operation, a double-click operation, a long-press operation, or a preset gesture, etc., which is not limited in this embodiment of the present application.
  • Illustratively, as shown in FIG. 13, scene selection controls of three different themes are superimposed and displayed on the real environment picture. When the virtual ray 1301 intersects the scene selection control 1302 and the head-mounted device receives the sixth control instruction, the scene selection control 1302 is triggered, and the scene selection interface is switched to the scene editing interface.
  • Step 1204 based on the ray adjustment data, move the virtual ray to intersect with the target prop selection control in the real environment picture.
  • Step 1205 in response to the first control instruction, display the target virtual prop corresponding to the target prop selection control in the real environment picture.
  • For specific implementations of steps 1204 to 1205, reference may be made to the foregoing steps 402 to 403, and details are not repeated here in this embodiment of the present application.
  • Step 1206 based on the ray adjustment data, move the virtual ray to intersect with the scene switching control in the real environment picture.
  • Step 1207 in response to the seventh control instruction, superimpose and display a scene selection list on the real environment screen, where the scene selection list includes scene selection controls of at least one theme.
  • In order to enable the user to switch to a virtual scene of another theme while experiencing the virtual scene of a certain theme, the scene editing interface displayed by the head-mounted device also includes a scene switching control. The user can trigger the scene switching control so that the head-mounted device displays a scene selection list, and open other scenes through the scene selection controls in the scene selection list, without returning to the scene selection interface for re-selection, which simplifies user operations.
  • Illustratively, as shown in FIG. 14, the head-mounted device receives control instructions and ray adjustment data through the control device, and displays the scene switching control 1401 in the scene editing interface. When the virtual ray 1402 intersects the scene switching control 1401 and the seventh control instruction sent by the control device is received, the head-mounted device superimposes and displays the scene selection list 1403 on the real environment picture.
  • Step 1208 based on the ray adjustment data, move the virtual ray to intersect with the target scene selection control in the real environment picture.
  • Step 1209 in response to the eighth control instruction, superimpose and display the scene editing interface corresponding to the target scene selection control on the real environment picture.
  • In a possible implementation, after the head-mounted device switches from the scene editing interface to the scene selection list, the virtual props that have already been placed in the current real environment picture are retained, so that the user can set up a virtual scene using virtual props from scenes of different themes.
  • As shown in FIG. 14, the head-mounted device receives control instructions and ray adjustment data through the control device. When the virtual ray 1402 intersects the scene selection control 1404 and the eighth control instruction sent by the control device is received, the head-mounted device superimposes and displays the scene editing interface corresponding to the target scene selection control on the real environment picture. A sketch of switching themes while retaining the placed props follows.
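  • Retaining already-placed props across a theme switch amounts to keeping the placed-prop list outside the per-theme state and only swapping the prop selection list shown in the editing interface. The following sketch illustrates that idea with invented names; it is not taken from the disclosure.

```kotlin
// Hypothetical scene manager: switching themes swaps the selectable prop list but keeps
// the props already placed in the real environment picture.
data class PlacedProp(val propId: String, val position: Triple<Float, Float, Float>)

class SceneManager(private val propCatalog: Map<String, List<String>>) {
    val placedProps = mutableListOf<PlacedProp>()   // survives theme switches
    var activeTheme: String? = null
        private set
    var selectableProps: List<String> = emptyList()
        private set

    /** Eighth control instruction: open the scene editing interface of the chosen theme. */
    fun switchTheme(theme: String) {
        activeTheme = theme
        selectableProps = propCatalog[theme].orEmpty()  // new prop selection list
        // placedProps is intentionally untouched, so props placed under the previous
        // theme remain in the real environment picture.
    }
}
```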
  • The camera application in the embodiments of the present application includes virtual scenes of different themes, and the user can select a scene of the corresponding theme to experience according to personal preference. The scene editing interface displayed by the head-mounted device also includes the scene switching control; when the virtual ray intersects the scene switching control and the seventh control instruction is received, the head-mounted device displays the scene selection list, and other scenes are opened by triggering the scene selection controls in the scene selection list. This satisfies the user's need to switch to a virtual scene of another theme while experiencing a virtual scene of a certain theme, without returning to the scene selection interface for re-selection, which simplifies user operations.
  • FIG. 15 shows a structural block diagram of a display device for virtual props in a real environment picture provided by an exemplary embodiment of the present application.
  • the apparatus can be implemented as all or a part of the terminal through software, hardware or a combination of the two.
  • the device includes:
  • the first display module 1501 is configured to superimpose and display a scene editing interface and a virtual ray on the real environment picture, where the scene editing interface includes a prop selection list, and the prop selection list includes a prop selection control corresponding to at least one virtual prop;
  • a first adjustment module 1502 configured to move the virtual ray to intersect with the target prop selection control in the real environment picture based on the ray adjustment data
  • the second display module 1503 is configured to display the target virtual prop corresponding to the target prop selection control in the real environment screen in response to the first control instruction.
  • the first control instruction includes an item selection instruction and an item placement instruction
  • the second display module 1503 includes:
  • a first display unit configured to highlight the target prop selection control in response to the virtual ray intersecting the target prop selection control
  • a second display unit configured to display the target virtual prop at the intersection of the virtual ray and the real environment picture in response to the prop selection instruction
  • a moving unit configured to move the virtual ray and the target virtual prop based on the ray adjustment data after the prop selection instruction
  • the releasing unit is used for displaying the target virtual prop at the placement position indicated by the prop placing instruction in response to the prop placing instruction.
  • the device further includes:
  • a second adjustment module configured to move the virtual ray in the real environment picture to intersect with the added props in the real environment picture based on the ray adjustment data
  • the third display module is configured to display the added prop at the intersection of the virtual ray and the real environment picture in response to the second control instruction; the moving module is configured to move the virtual ray and the added prop based on the ray adjustment data after the second control instruction.
  • the device further includes:
  • a third adjustment module configured to move the virtual ray to intersect with the target virtual prop in the real environment picture based on the ray adjustment data
  • a fourth display module configured to display an editing control corresponding to the target virtual prop in response to a third control instruction
  • a fourth adjustment module configured to move the virtual ray to intersect with a target editing control in the real environment picture based on the ray adjustment data
  • the editing module is configured to edit the target virtual prop based on the editing mode corresponding to the target editing control in response to the fourth control instruction.
  • the prop editing control includes at least one of a delete control, a zoom-in control, and a zoom-out control;
  • the editing module includes:
  • a first editing unit configured to delete the target virtual object in response to the target editing control being the deleting control and receiving the fourth control instruction
  • a second editing unit configured to magnify the target virtual object to a preset magnification in response to the target editing control being the magnifying control and receiving the fourth control instruction;
  • a third editing unit configured to reduce the target virtual object to a preset reduction ratio in response to the target editing control being the reduction control and receiving the fourth control instruction.
  • the device further includes:
  • a fifth adjustment module configured to move the virtual ray in the real environment picture to intersect with the shooting controls superimposed and displayed on the real environment picture based on the ray adjustment data
  • a photographing module configured to photograph the real environment picture and the virtual prop in response to the fifth control instruction.
  • the shooting module includes:
  • a third display unit configured to switch the shooting control from a default display state to a shooting display state in response to the fifth control instruction
  • a determining unit configured to determine a target shooting mode based on the instruction type of the fifth control instruction
  • a shooting unit configured to use the target shooting mode to shoot the real environment picture and the virtual props
  • the fourth display unit is configured to superimpose and display the shooting preview content on the scene editing interface.
  • the determining unit is further configured to:
  • in response to the fifth control instruction being a photographing instruction, determine that the shooting mode is shooting an image; and in response to the fifth control instruction being a video recording instruction, determine that the shooting mode is recording a video, and display the recording progress through the shooting control.
  • a data connection is established between the head-mounted device and a control device, the control device is configured to send the ray adjustment data and control instructions to the head-mounted device, and the ray direction of the virtual ray is the device pointing direction of the control device. A sketch of converting that pointing data into a ray direction follows.
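  • Since the ray direction is simply the pointing direction of the control device, the head-mounted device only needs to convert the orientation data it receives into a direction vector each frame. The fragment below is a minimal sketch under an assumed yaw/pitch convention; the field names are not from the disclosure.

```kotlin
import kotlin.math.cos
import kotlin.math.sin

// Hypothetical conversion of control-device orientation (part of the ray adjustment data)
// into the virtual ray's direction vector in the head-mounted device's coordinate frame.
data class RayAdjustmentData(val yawRad: Float, val pitchRad: Float)

fun rayDirectionFrom(data: RayAdjustmentData): Triple<Float, Float, Float> {
    val x = cos(data.pitchRad) * sin(data.yawRad)
    val y = sin(data.pitchRad)
    val z = cos(data.pitchRad) * cos(data.yawRad)
    return Triple(x, y, z)   // unit vector along the control device's pointing direction
}
```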
  • the device further includes:
  • a fifth display module configured to superimpose and display a scene selection interface on the real environment picture, the scene selection interface including at least one theme scene selection control;
  • the first display module 1501 includes:
  • a first adjustment unit configured to move the virtual ray to intersect with a target scene selection control in the real environment picture based on the ray adjustment data
  • a fifth display unit configured to superimpose and display the virtual ray and the scene editing interface corresponding to the target scene selection control on the real environment screen in response to the sixth control instruction.
  • a scene switching control is displayed in the scene editing interface
  • the device also includes:
  • the sixth adjustment module is configured to move the virtual ray to intersect with the scene switching control in the real environment picture based on the ray adjustment data; the sixth display module is configured to superimpose and display, in response to the seventh control instruction, a scene selection list on the real environment picture, where the scene selection list includes scene selection controls of at least one theme;
  • a seventh adjustment module configured to move the virtual ray to intersect with the target scene selection control in the real environment picture based on the ray adjustment data
  • a seventh display module, configured to superimpose and display, in response to the eighth control instruction, the scene editing interface corresponding to the target scene selection control on the real environment picture.
  • In summary, in the embodiments of the present application, the virtual ray is displayed in real time and moved based on the ray adjustment data to indicate the trigger position of the control operation, so that the user can grasp the trigger position in real time by observing the position and direction of the virtual ray. The user only needs to make the virtual ray intersect the virtual props, controls, and other objects to be controlled to quickly control the virtual props through the head-mounted virtual device, which improves the control efficiency and operation accuracy of the virtual props.
  • As shown in FIG. 16, an embodiment of the present application provides a structural block diagram of a system for displaying virtual props in a real environment picture.
  • The system for displaying virtual props in a real environment picture includes a head-mounted device 1600 and a control device 1700.
  • The head-mounted device 1600 may include one or more of the following components: a processor 1601, a memory 1602, a power supply component 1603, a multimedia component 1604, an audio component 1605, an input/output (I/O) interface 1606, a sensor component 1607, and a communication component 1608.
  • the processor 1601 generally controls the overall operations of the head mounted device, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processor 1601 may include one or more processing cores.
  • The processor 1601 uses various interfaces and lines to connect various parts of the entire device 1600, and performs various functions of the device 1600 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 1602 and calling data stored in the memory 1602.
  • Optionally, the processor 1601 may be implemented in at least one hardware form of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA).
  • the processor 1601 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like.
  • the CPU mainly handles the operating system, user interface and application programs, etc.
  • the GPU is used for rendering and drawing the content that needs to be displayed on the screen
  • the modem is used for handling wireless communication. It can be understood that the above-mentioned modem may alternatively not be integrated into the processor 1601 and be implemented separately by a communication chip.
  • Memory 1602 is configured to store various types of data to support operation at the head mounted device. Examples of such data include instructions for any application or method operating on the headset, models, contact data, phonebook data, messages, images, videos, etc.
  • the memory 1602 may include random access memory (Random Access Memory, RAM), or may include read-only memory (Read-Only Memory, ROM).
  • the memory 1602 includes a non-transitory computer-readable storage medium. Memory 1602 may be used to store instructions, programs, codes, sets of codes, or sets of instructions.
  • the memory 1602 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, an image playback function, etc.) , instructions for implementing the above method embodiments, etc.
  • the operating system can be an Android system (including a system further developed based on the Android system), an iOS system developed by Apple (including a system further developed based on the iOS system), or another system.
  • the data storage area may also store data created by the terminal 1600 during use (e.g., phone book, audio and video data, chat record data) and the like.
  • Power component 1603 provides power to various components of head mounted device 1600.
  • Power supply components 1603 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to head mounted device 1600 .
  • Multimedia component 1604 includes a screen that provides an output interface between the head mounted device 1600 and the user.
  • the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP).
  • the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
  • the multimedia component 1604 includes a front-facing camera and/or a rear-facing camera.
  • when the head-mounted device 1600 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data.
  • the front and rear cameras can be a fixed optical lens system or have focal length and optical zoom capability.
  • Audio component 1605 is configured to output and/or input audio signals.
  • the audio component 1605 includes a microphone (Microphone, MIC) configured to receive external audio signals when the headset 1600 is in operating modes, such as calling mode, recording mode, and voice recognition mode.
  • the received audio signal may be further stored in memory 1602 or transmitted via communication component 1608.
  • audio component 1605 also includes a speaker for outputting audio signals.
  • the I/O interface 1606 provides an interface between the processor 1601 and a peripheral interface module, and the above-mentioned peripheral interface module can be a keyboard, a click wheel, a button, a touch panel, and the like. These buttons may include, but are not limited to: home button, volume buttons, start button, and lock button.
  • Sensor assembly 1607 includes one or more sensors for providing various aspects of status assessment for head mounted device 1600 .
  • For example, the sensor assembly 1607 can detect the open/closed state of the head-mounted device 1600 and the relative positioning of components, such as the display screen and keypad of the head-mounted device 1600; the sensor assembly 1607 can also detect a change in the position of the head-mounted device 1600 or of a component of the head-mounted device 1600, the presence or absence of user contact with the head-mounted device 1600, the orientation or acceleration/deceleration of the head-mounted device 1600, and changes in the temperature of the head-mounted device 1600.
  • Sensor assembly 1607 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact. Sensor assembly 1607 may also include a light sensor for use in imaging applications.
  • the sensor component 1607 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • For example, the head-mounted device 1600 determines the operation type of a control operation through the pressure sensor.
  • Communication component 1608 is configured to facilitate wired or wireless communication between head mounted device 1600 and other devices (eg, control devices). Head mounted device 1600 may access wireless networks based on communication standards. In one exemplary embodiment, the communication component 1608 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1608 further includes a Near Field Communication (NFC) module to facilitate short-range communication.
  • NFC Near Field Communication
  • For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wide Band (UWB) technology, Bluetooth (BT) technology, and other technologies. The head-mounted device 1600 synchronously receives, through the communication component 1608, information sent by the control device, for example, a touch operation acting on the touch area received by the control device.
  • In addition, those skilled in the art can understand that the structure of the device 1600 shown in the above drawings does not constitute a limitation on the device 1600; the device may include more or fewer components than shown, combine some components, or use a different arrangement of components.
  • a connection is established between the head-mounted device 1600 and the control device 1700 through a data cable, a WiFi hotspot, or Bluetooth.
  • the control device 1700 may include one or more of the following components: a processor 1710 , a memory 1720 and a display screen 1730 .
  • Processor 1710 may include one or more processing cores.
  • The processor 1710 uses various interfaces and lines to connect various parts of the entire terminal 1700, and performs various functions of the terminal 1700 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 1720 and calling data stored in the memory 1720.
  • the processor 1710 may be implemented in at least one hardware form among DSP, FPGA, and PLA.
  • the processor 1710 may integrate one or a combination of a CPU, a GPU, a Neural-network Processing Unit (NPU), a modem, and the like.
  • The CPU mainly handles the operating system, user interface, and application programs; the GPU is responsible for rendering and drawing the content to be displayed on the touch display screen 1730; the NPU is used for implementing artificial intelligence (Artificial Intelligence, AI) functions; and the modem is used for handling wireless communication. It can be understood that the above-mentioned modem may alternatively not be integrated into the processor 1710 and be implemented separately by a single chip.
  • the memory 1720 may include RAM or ROM.
  • the storage 1720 includes a non-transitory computer-readable storage medium.
  • Memory 1720 may be used to store instructions, programs, codes, sets of codes, or sets of instructions.
  • The memory 1720 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playback function, an image playback function, etc.), instructions for implementing the above method embodiments, and the like; the data storage area may store data created according to the use of the terminal 1700 (such as audio data and a phone book) and the like.
  • Display screen 1730 is a display component for displaying a user interface.
  • the display screen 1730 also has a touch function, and through the touch function, a user can use any suitable object such as a finger, a touch pen, and the like to perform a touch operation on the display screen 1730 .
  • the display screen 1730 is usually provided on the front panel of the terminal 1700.
  • the display screen 1730 can be designed as a full screen, a curved screen, a special-shaped screen, a double-sided screen or a folding screen.
  • the display screen 1730 can also be designed to be a combination of a full screen and a curved screen, or a combination of a special-shaped screen and a curved screen, which is not limited in this embodiment.
  • the structure of the terminal 1700 shown in the above drawings does not constitute a limitation on the terminal 1700, and the terminal may include more or less components than those shown in the drawings, or combine some components, or a different arrangement of components.
  • For example, the terminal 1700 also includes a camera assembly, a microphone, a speaker, a radio frequency circuit, an input unit, a sensor (such as an acceleration sensor, an angular velocity sensor, or a light sensor), an audio circuit, a WiFi module, a power supply, a Bluetooth module, and other components, which are not repeated here.
  • Embodiments of the present application further provide a computer-readable storage medium, where the computer-readable storage medium stores at least one instruction, and the at least one instruction is loaded and executed by a processor to implement the method for displaying virtual props in a real environment picture described in the above embodiments.
  • a computer program product or computer program comprising computer instructions stored in a computer readable storage medium.
  • a processor of the head-mounted device or the control device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the head-mounted device or the control device performs the method for displaying virtual props in a real environment picture provided in the various optional implementations of the above aspects.
  • Computer-readable storage media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
  • a storage medium can be any available medium that can be accessed by a general purpose or special purpose computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Controls And Circuits For Display Device (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method, system, and storage medium for displaying virtual props in a real environment picture, belonging to the field of human-computer interaction. The method includes: superimposing and displaying a scene editing interface and a virtual ray on a real environment picture (401); based on ray adjustment data, moving the virtual ray in the real environment picture to intersect with a target prop selection control (402); and in response to a first control instruction, displaying, in the real environment picture, the target virtual prop corresponding to the target prop selection control (403). The above method, system, and storage medium help to improve the control efficiency and operation accuracy of controlling virtual props through a head-mounted virtual device.

Description

真实环境画面中虚拟道具的显示方法、***及存储介质
本申请要求于2020年11月16日提交的申请号为202011282179.6、发明名称为“真实环境画面中虚拟道具的显示方法、***及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及人机交互技术领域,特别涉及一种真实环境画面中虚拟道具的显示方法、***及存储介质。
背景技术
增强现实(Augmented Reality,AR)技术是一种将虚拟内容与真实世界融合的技术,将计算机设备生成的文字、图像、三维模型、音乐、视频等虚拟内容模拟仿真后,叠加显示在真实环境中,而虚拟现实(Virtual Reality,VR)技术则是根据从现实环境中采集的数据模拟出虚拟环境和虚拟内容。用户可以利用头戴式设备体验对AR内容或VR内容的多种操作。
相关技术中,用户通过头戴式设备完成各项操作时,通过头戴式设备中的功能按键触发相应操作,当头戴式视听设备聚焦在某一虚拟对象时,对该虚拟对象进行高亮显示,从而使用户通过预设操作使头戴式视听设备执行相应指令。
发明内容
本申请实施例提供了一种真实环境画面中虚拟道具的显示方法、***及存储介质。所述技术方案如下:
一方面,本申请实施例提供了一种真实环境画面中虚拟道具的显示方法,所述方法用于头戴式设备,所述方法包括:
在真实环境画面上叠加显示场景编辑界面和虚拟射线,所述场景编辑界面中包含道具选择列表,所述道具选择列表中包含至少一种虚拟道具对应的道具选择控件;
基于射线调整数据,在所述真实环境画面中将所述虚拟射线移动至与目标道具选择控件相交;
响应于第一控制指令,在所述真实环境画面中显示所述目标道具选择控件对应的目标虚拟道具。
另一方面,本申请实施例提供了一种真实环境画面中虚拟道具的显示装置,所述装置包括:
第一显示模块,用于在真实环境画面上叠加显示场景编辑界面和虚拟射线,所述场景编辑界面中包含道具选择列表,所述道具选择列表中包含至少一种虚拟道具对应的道具选择控件;
第一调整模块,用于基于射线调整数据,在所述真实环境画面中将所述虚拟射线移动至与目标道具选择控件相交;
第二显示模块,用于响应于第一控制指令,在所述真实环境画面中显示所述目标道具选择控件对应的目标虚拟道具。
另一方面,本申请实施例提供了一种真实环境画面中虚拟道具的显示***,所述真实环境画面中虚拟道具的显示***包括头戴式设备和控制设备,所述头戴式设备与所述控制设备之间建立有数据连接;所述控制设备用于向所述头戴式设备发送控制指令以及射线调整数据;所述头戴式设备包括处理器和存储器;所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现如上述方面所述的真实环境画面中虚拟道具的显示方法。
另一方面,本申请实施例提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有至少一条程序代码,所述程序代码由处理器加载并执行以实现如上述方面所述的真实环境画面中虚拟道具的显示方法。
根据本申请的一个方面,提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。头戴式设备或控制设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该头戴式设备或控制设备实现上述方面的各种可选实现方式中提供的真实环境画面中虚拟道具的显示方法。
附图说明
图1是本申请一个示例性实施例提供的头戴式设备的示意图;
图2是本申请另一个示例性实施例提供的头戴式设备的示意图;
图3是本申请一个示例性实施例提供的真实环境画面中虚拟道具的显示***的示意图;
图4是本申请一个示例性实施例提供的真实环境画面中虚拟道具的显示方法的流程图;
图5是本申请一个示例性实施例提供的场景编辑界面和虚拟道具的示意图;
图6是本申请另一个示例性实施例提供的真实环境画面中虚拟道具的显示方法的流程图;
图7是本申请一个示例性实施例提供的移动虚拟道具的示意图;
图8是本申请一个示例性实施例提供的编辑虚拟道具的示意图;
图9是本申请另一个示例性实施例提供的真实环境画面中虚拟道具的显示方法的流程图;
图10是本申请一个示例性实施例提供的不同显示状态下拍摄控件的示意图;
图11是本申请一个示例性实施例提供的拍摄预览内容的示意图;
图12是本申请另一个示例性实施例提供的真实环境画面中虚拟道具的显示方法的流程图;
图13是本申请一个示例性实施例提供的通过场景选择控件开启场景编辑界面的示意图;
图14是本申请一个示例性实施例提供的通过场景切换控件开启场景选择列表的示意图;
图15是本申请一个示例性实施例提供的真实环境画面中虚拟道具的显示装置的结构框图;
图16是本申请一个示例性实施例提供的真实环境画面中虚拟道具的显示***的结构框图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
在本文中提及的“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。
相关技术中,用户通过头戴式设备与虚拟道具进行交互时,通过控制头戴式设备的设备姿态,使其聚焦位置位于想要进行交互的目标虚拟道具上,头戴式设备聚焦在某一虚拟道具时会对其进行高亮显示,以便用户确定当前聚焦位置是否位于目标虚拟道具,然后通过触发相应的功能按键,使头戴式设备执行指令,实现对目标虚拟道具的交互。
然而,上述基于功能按键的交互方式不便于用户快速完成操作,用户执行操作时寻找并触发正确的功能按键,需要一定的学习成本,而对于高亮显示聚焦对象的方式,用户还需要观察各个虚拟对象的显示状态才能够确定当前聚焦位置,并且用户需要边控制设备姿态边观察虚拟对象显示状态的变化情况,才能够将聚焦位置移动至目标虚拟对象,并对目标虚拟对象进行编辑,操作学习成本高且步骤较为繁琐,效率较低。
在一种可能的实施方式中,头戴式设备为AR设备、VR设备或AR和VR一体的视听设备。
头戴式设备在利用AR技术显示多媒体内容时,按照显示原理可以大致划分为三种:
一种为设置有显示屏和摄像头的头戴式设备,其通过摄像头采集周围的真实环境画面,再将虚拟信息与真实环境画面叠加,通过显示屏展示叠加后的画面。
一种为设置有投影组件和透明镜片的头戴式设备,其通过投影组件将虚拟信息投影到透明镜片上,则用户可以通过透明镜片同时观察到真实环境和虚拟信息,从而获得在真实环境中编辑虚拟信息的体验。
另一种设置有投影组件和透明镜片的头戴式设备,其投影组件设置在设备内侧,可以通过投影组件将虚拟信息直接投影至用户的眼球,从而使用户得到在真实环境中编辑虚拟信息的使用感受。其中,虚拟信息包括文字、模型、网页以及多媒体内容(例如虚拟图像、视频、音频)等。
图1示出了一种头戴式设备110,该设备110为头盔显示器(Head-Mounted Display,HMD)设备,头戴式设备110通过摄像头111实时采集真实环境画面,将虚拟信息与真实环境画面进行叠加,并将叠加后的画面通过显示屏112进行展示,用户将头戴式设备110佩戴在头部后,即可通过显示屏112观察到虚拟信息与真实环境画面融合的场景。图2示出了另一种头戴式设备210,该设备210为眼镜式设备,头戴式设备210的镜片外侧设置有投影组件211,头戴式设备210通过投影组件211将虚拟信息投影至镜片212,用户佩戴头戴式设备210后,通过镜片212即可同时观察到真实环境画面和虚拟信息。
本申请以头戴式设备为设置有显示屏和摄像头的头戴式设备为例进行说明。如图3所示,头戴式设备310设置有摄像头组件311和显示屏组件312,其通过摄像头组件311实时拍摄周围的真实环境画面,并将真实环境画面与AR信息融合后,通过显示屏组件312在头戴式设备310内侧进行展示。在一种可能的实现方式中,头戴式设备310具有虚拟场景编辑和拍摄功能,用户通过改变头戴式设备310的设备姿态调整取景内容。
在一种可能的实施方式中,头戴式设备310可以被单独使用实现各种功能,也可以与控制设备320配合使用。当头戴式设备310与控制设备320配合成为真实环境画面中虚拟道具的显示***时,可选的,头戴式设备310中的处理器负责执行本申请实施例中大部分的数据处理任务,控制设备320基于用户的触发操作向头戴式设备310发送指令和数据;或者,控制设备320中的处理器负责执行本申请实施例中大部分的数据处理任务,而头戴式设备310负责基于控制设备320的执行结果进行画面渲染等。本申请实施例对此不作限定。
控制设备320与头戴式设备310相连,其设备类型包括:手柄、智能手机、平板电脑中的至少一种。控制设备320中设置有触控区域和触控按键中的至少一种,头戴式设备310在真实环境画面中通过虚拟射线指示控制设备320的设备指向,使用户通过观察虚拟射线的位置和方向实时掌握控制设备320的设备指向,并结合对控制设备的触控操作控制头戴式设备310执行相应的指令。在一种可能的实现方式中,当控制设备320与头戴式设备310连接时,头戴式设备310同步接收控制设备320发送的控制指令。
可选的,头戴式设备310与控制设备320通过数据线、无线保真(Wireless Fidelity,WiFi)热点或蓝牙等方式建立连接。
本申请实施例提供的真实环境画面中虚拟道具的显示方法包括:
在真实环境画面上叠加显示场景编辑界面和虚拟射线,场景编辑界面中包含道具选择列表,道具选择列表中包含至少一种虚拟道具对应的道具选择控件;
基于射线调整数据,在真实环境画面中将虚拟射线移动至与目标道具选择控件相交;
响应于第一控制指令,在真实环境画面中显示目标道具选择控件对应的目标虚拟道具。
可选的,第一控制指令包括道具选择指令和道具放置指令;
响应于第一控制指令,在真实环境画面中显示目标道具选择控件对应的目标虚拟道具,包括:
响应于虚拟射线与目标道具选择控件相交,对目标道具选择控件进行突出显示;
响应于道具选择指令,将目标虚拟道具显示在虚拟射线与真实环境画面的交点处;
基于道具选择指令后的射线调整数据,移动虚拟射线和目标虚拟道具;
响应于道具放置指令,在道具放置指令所指示的放置位置显示目标虚拟道具。
可选的,响应于第一控制指令,在真实环境画面中显示目标道具选择控件对应的目标虚拟道具之后,方法还包括:
基于所述射线调整数据,在真实环境画面中将虚拟射线移动至与真实环境画面中的已添加道具相交;
响应于第二控制指令,将已添加道具显示在虚拟射线与真实环境画面的交点处;
基于第二控制指令后的射线调整数据,移动虚拟射线和已添加道具。
可选的,基于在真实环境画面上叠加显示场景编辑界面和虚拟射线之后,方法还包括:
基于射线调整数据,在真实环境画面中将虚拟射线移动至与目标虚拟道具相交;
响应于第三控制指令,显示目标虚拟道具对应的编辑控件;
基于射线调整数据,在真实环境画面中将虚拟射线移动至与目标编辑控件相交;
响应于第四控制指令,基于目标编辑控件对应的编辑方式对目标虚拟道具进行编辑。
可选的,道具编辑控件包括删除控件、放大控件和缩小控件中的至少一种;
响应于第四控制指令,基于目标编辑控件对应的编辑方式对目标虚拟道具进行编辑,包括:
响应于目标编辑控件为删除控件,且接收到第四控制指令,删除目标虚拟对象;
响应于目标编辑控件为放大控件,且接收到第四控制指令,将目标虚拟对象放大至预设放大倍数;
响应于目标编辑控件为缩小控件,且接收到第四控制指令,将目标虚拟对象缩小至预设缩小倍数。
可选的,响应于第一控制指令,在真实环境画面中显示目标道具选择控件对应的目标虚拟道具之后,方法还包括:
基于射线调整数据,在真实环境画面中将虚拟射线移动至与真实环境画面上叠加显示的拍摄控件相交;
响应于第五控制指令,对真实环境画面和虚拟道具进行拍摄。
可选的,响应于第五控制指令,对真实环境画面和虚拟道具进行拍摄,包括:
响应于虚拟射线与拍摄控件相交,且接收到第五控制指令,将拍摄控件从默认显示状态切换至拍摄显示状态;
基于第五控制指令的指令类型确定目标拍摄方式;
采用目标拍摄方式对真实环境画面和虚拟道具进行拍摄;
在场景编辑界面上叠加显示拍摄预览内容。
可选的,基于第五控制指令的指令类型确定目标拍摄方式,包括:
响应于第五控制指令为拍照指令,确定拍摄方式为拍摄图像;
响应于第五控制指令为录像指令,确定拍摄方式为录制视频,并通过拍摄控件展示录制进度。
可选的,头戴式设备与控制设备之间建立有数据连接,控制设备用于向头戴式设备发送射线调整数据以及控制指令,虚拟射线的射线方向为控制设备的设备指向。
可选的,在真实环境画面上叠加显示场景编辑界面和虚拟射线之前,方法还包括:
在真实环境画面上叠加显示场景选择界面,场景选择界面中包含至少一种主题的场景选择控件;
在真实环境画面上叠加显示场景编辑界面和虚拟射线,包括:
基于所述射线调整数据,在所述真实环境画面中将所述虚拟射线移动至与目标场景选择控件相交;
响应于第六控制指令,在真实环境画面上叠加显示虚拟射线以及目标场景选择控件对应的场景编辑界面。
可选的,场景编辑界面中显示有场景切换控件;
在真实环境画面上叠加显示场景编辑界面和虚拟射线之后,方法还包括:
基于射线调整数据,在真实环境画面中将虚拟射线移动至与场景切换控件相交;
响应于第七控制指令,在真实环境画面上叠加显示场景选择列表,场景选择列表中包含至少一种主题的场景选择控件;
基于射线调整数据,在真实环境画面中将虚拟射线移动至与目标场景选择控件相交;
响应于第八控制指令,在真实环境画面上叠加显示目标场景选择控件对应的场景编辑界面。
图4示出了本申请一个示例性实施例提供的真实环境画面中虚拟道具的显示方法的流程图。本实施例以该方法用于头戴式设备为例进行说明,该方法包括如下步骤:
步骤401,在真实环境画面上叠加显示场景编辑界面和虚拟射线,场景编辑界面中包含道具选择列表,道具选择列表中包含至少一种虚拟道具对应的道具选择控件。
其中,虚拟射线用于指示控制操作的触发位置,头戴式视听设备实时获取包含虚拟射线指向的数据,并将虚拟射线显示在真实环境画面中。在一种可能的实施方式中,用户通过预设方式控制虚拟射线的指向,使头戴式设备获取数据,例如,头戴式设备基于眼球识别的方式获取用户视线,并将用户视线的指向作为虚拟射线的指向,用户只需转动眼镜即可改变虚拟射线的位置、方向等;或者,头戴式设备中设置有触控区域,头戴式设备基于从触控区域接收到的触控操作,确定虚拟射线的指向;或者,头戴式设备中设置有传感器,通过传感器获取设备姿态,并将当前设备姿态所指示的设备朝向确定为虚拟射线的指向用户,转动头部时场景编辑界面固定不动,头戴式设备同步调整虚拟射线的指向,达到转动头部控制虚拟射线的效果。
头戴式视听设备开启后,实时采集真实环境画面,并根据用户输入确定需要展示的虚拟信息。本申请实施例中,头戴式视听设备运行相机应用,上述虚拟信息为场景编辑界面和虚拟射线。
在一种可能的实施方式中,头戴式视听设备通过摄像头组件采集设备正前方的真实环境画面,并将场景编辑界面和虚拟射线融合在真实环境画面后通过显示屏组件进行展示,或者直接显示场景编辑界面,例如,显示屏组件位于头戴式视听设备的前部,使用户佩戴头戴式视听设备后正视前方即可观察到场景编辑界面和虚拟射线。
本申请实施例中的相机应用具有场景编辑功能,用户利用道具选择列表中所包含的虚拟道具组成虚拟场景,头戴式视听设备对虚拟道具和真实环境画面进行融合显示。用户可以利用头戴式视听设备创建以及编辑虚拟场景,并对虚拟场景和真实环境画面进行拍摄,而不是只能够拍摄头戴式视听设备所显示的预设虚拟场景和真实环境画面。
头戴式视听设备在相对于真实环境画面的预设位置处显示场景编辑界面,例如,头戴式视听设备将场景编辑界面显示在显示器的左侧区域。
示意性的,图5示出了一种头戴式视听设备的显示画面。头戴式视听设备在真实环境画面501上叠加显示有场景编辑界面502、虚拟道具504以及虚拟射线505,其中,场景编辑界面502中包含至少一种虚拟道具对应的道具选择控件503。并且,场景编辑界面502中还显示有其它功能控件,例如返回控件,用于返回显示上一虚拟界面;上翻控件、下翻控件,用于展现不同的道具选择控件;清空控件,用于一键清空当前真实环境画面中已放置的虚拟道具。
步骤402,基于射线调整数据,在真实环境画面中将虚拟射线移动至与目标道具选择控件相交。
其中,射线调整数据中包含虚拟射线的射线方向。头戴式设备基于用户操作或其它设备发送的用户操作等信息,获取射线调整数据,并基于获取到的射线调整数据在真实环境画面中移动虚拟射线。例如,头戴式设备实时进行眼球识别,并捕捉用户视线方向,将视线方向确定为虚拟射线的射线方向,此时射线调整数据是基于用户视线方向的变化情况得到的,用户转动眼球即可控制虚拟射线在真实环境画面中的指向。
头戴式设备以虚拟射线与目标道具选择控件相交作为选中目标道具选择控件的条件,即用户在真实环 境画面中添加目标虚拟道具时,需要先通过改变虚拟射线的射线方向,使虚拟射线指向目标道具选择控件,并与目标道具选择控件相交,再通过预设方式的用户操作,使头戴式设备根据指令在真实环境画面中显示目标道具选择控件对应的目标虚拟道具。
步骤403,响应于第一控制指令,在真实环境画面中显示目标道具选择控件对应的目标虚拟道具。
可选的,头戴式设备中设置有触控区域,用户通过作用于触控区域的触控操作使头戴式设备接收第一指令;或者,用户通过预设的手势,使头戴式设备在检测到手势时接收第一指令等。其中,第一指令是用于指示头戴式设备执行用户操作的指令,其中包含触发操作的操作类型和操作数据等,头戴式设备基于第一指令中所包含的具体信息(操作类型和操作数据等)以及当前虚拟射线所指向的虚拟对象(控件、虚拟道具等),确定需要执行的指令。
当虚拟射线与目标道具选择控件相交,且接收到第一控制指令时,头戴式设备在真实环境画面中添加显示目标道具选择控件所对应的目标虚拟道具,如图5所示,虚拟射线505与道具选择控件503相交,且头戴式设备接收到第一控制指令,头戴式设备则根据第一控制指令在真实环境画面501中显示道具选择控件503所对应的虚拟道具504。
在另一种可能的实施方式中,头戴式设备与控制设备之间建立有数据连接,控制设备用于向头戴式设备发送射线调整数据以及控制指令,虚拟射线的射线方向为控制设备的设备指向。在一种可能的实施方式中,控制设备实时将自身的设备指向数据(例如控制设备相对于空间中x轴、y轴和z轴的角度等)发送至头戴式设备,头戴式设备基于设备指向数据得到虚拟射线的射线方向,从而在真实环境画面中显示虚拟射线,使佩戴头戴式设备的用户能够通过观察虚拟射线掌握控制设备的设备指向,明确控制设备所指向的虚拟对象。
综上所述,本申请实施例中,通过显示道具选择列表,向用户提供虚拟道具,使用户可以根据需求选择目标虚拟道具并添加至真实环境画面中合适的位置,使得用户可以自由创建虚拟现实场景;通过实时显示虚拟射线,并基于射线调整数据移动显示虚拟射线,以指示控制操作的触发位置,使用户能够通过观察虚拟射线的位置和方向实时掌握触发位置,用户只需使虚拟射线与虚拟道具、控件等需要控制的对象相交,即可快速通过头戴式虚拟设备控制虚拟道具,提高了虚拟道具的控制效率和操作的准确性。
图6示出了本申请另一个示例性实施例提供的真实环境画面中虚拟道具的显示方法的流程图。本实施例以该方法用于头戴式设备为例进行说明,该方法包括如下步骤:
步骤601,在真实环境画面上叠加显示场景编辑界面和虚拟射线,场景编辑界面中包含道具选择列表,道具选择列表中包含至少一种虚拟道具对应的道具选择控件。
步骤602,基于射线调整数据,在真实环境画面中将虚拟射线移动至与目标道具选择控件相交。
步骤601至步骤602的具体实施方式可以参考上述步骤401至步骤402,本申请实施例在此不再赘述。
步骤603,响应于虚拟射线与目标道具选择控件相交,对目标道具选择控件进行突出显示。
场景编辑界面中通常包含多个道具选择控件以及其它类型的控件,为了进一步帮助用户快速掌握控制设备的设备指向,当虚拟射线与目标道具选择控件相交时,头戴式设备对目标道具选择控件进行突出显示,从而使用户得知当前虚拟射线所选中的道具选择控件,从而能够在需要通过该控件添加虚拟道具时快速进行操作,或者在想要添加其它道具时及时调整虚拟射线的指向。并且,在虚拟射线与目标道具选择控件相交时对目标道具选择控件进行突出显示,能够在虚拟射线与目标道具选择控件的边缘接触时立即提示用户已与目标道具选择控件相交,而无需用户仔细观察虚拟射线与场景编辑界面交点所在位置,确认是否相交,或继续移动虚拟射线使交点位于道具选择控件的中心位置才能够确保相交。
可选的,该突出显示的方式包括高亮显示、放大显示、改变颜色等方式中的至少一种。
在一种可能的实施方式中,道具选择控件包括可触发状态、选中状态和不可触发状态,当虚拟射线与目标道具选择控件相交(虚拟射线与场景编辑界面的交点位于目标道具选择控件的边缘或内部),且道具选择控件处于可触发状态时,对目标道具选择控件进行突出显示,使目标道具选择控件从可触发状态切换至选中状态。对于不可添加的虚拟道具,其对应的道具选择控件处于不可触发状态,例如由于应用版本未更新等原因导致的虚拟道具不可添加。不可触发状态下的道具选择控件,其显示状态与可触发状态下的道具选择控件不同,例如可触发状态下的道具选择控件包含虚拟道具的缩略图,而不可触发状态下的道具选择控件中包含对应虚拟道具的阴影等,本申请实施例对此不作限定。
如图5所示,虚拟射线505与道具选择控件503相交,道具选择控件503处于选中状态,头戴式设备对其进行放大和高亮显示,而道具选择控件506处于不可触发状态,头戴式设备仅在该控件中显示对应虚拟道具的阴影图,其它道具选择控件处于可触发状态,其中显示有对应虚拟道具的缩略图。
步骤604,响应于道具选择指令,将目标虚拟道具显示在虚拟射线与真实环境画面的交点处。
头戴式设备在虚拟射线与目标道具选择控件相交,且接收到道具选择指令时,在真实环境画面中显示 目标虚拟道具,并将目标虚拟道具显示在虚拟射线与真实环境画面的交点处,达到目标虚拟道具“吸附”在虚拟射线上的显示效果,使目标虚拟道具能够随虚拟射线移动。当用户改变虚拟射线的射线方向时,目标虚拟道具能够自动随虚拟射线基于射线调整数据而移动,从而使用户可以通过控制虚拟射线将目标虚拟道具移动至真实环境画面中的任意位置。
在一种可能的实施方式中,道具选择指令是控制设备在接收到道具选择操作时生成的指令,可选的,道具选择操作为单击操作、双击操作或长按操作等,本申请实施例对此不作限定。
例如,当头戴式设备通过控制设备接收控制指令和射线调整数据时,道具选择操作为长按操作,则用户可以通过控制虚拟射线与目标道具选择控件相交,并在控制设备的触控区域进行长按操作,使头戴式设备将目标虚拟道具显示在虚拟射线与真实环境画面的交点处,这一过程中,控制设备在接收到长按操作时,向头戴式设备发送接收到长按操作的指令,头戴式设备根据该指令所指示的操作类型,以及当前虚拟射线所指向的对象,判断出需要将目标虚拟道具显示在虚拟射线与真实环境画面的交点处,并进行相应的显示。
步骤605,基于道具选择指令后的射线调整数据,移动虚拟射线和目标虚拟道具。
当用户想要使目标虚拟道具在真实环境画面中移动时,通过用户操作使头戴式设备接收到射线调整数据,头戴式设备基于射线调整数据移动虚拟射线和目标虚拟道具,并实时显示。
在一种可能的实施方式中,当头戴式设备通过控制设备接收控制指令和射线调整数据时,用户通过作用于控制设备的长按操作使头戴式设备选中目标虚拟道具,即使目标虚拟道具显示在虚拟射线与真实环境画面的交点处,并在长按操作未结束时,改变控制设备的设备姿态,例如移动或转动控制设备等,控制设备将包含移动方向和移动距离的射线调整数据发送至头戴式设备,头戴式设备基于控制设备的移动方向确定虚拟射线和目标虚拟道具的移动方向,并基于控制设备的移动距离以及距离映射关系,确定目标虚拟道具的移动距离,其中,距离映射关系为控制设备的移动距离与在真实环境画面中的映射距离之间的关系。
在另一种可能的实施方式中,用户可以单独通过头戴式设备实现对目标虚拟道具的添加操作,头戴式设备基于自身的传感器获取设备姿态,进而在设备姿态改变时确定射线调整数据。头戴式设备中设置有触控区域,用户通过作用在该触控区域内的触控操作(例如长按操作),使目标虚拟道具“吸附”在虚拟射线与真实环境画面的交点处,并在触控操作未结束时,调整头戴式设备的设备姿态,使虚拟射线以及目标虚拟道具跟随用户的头部动作而移动。
步骤606,响应于道具放置指令,在道具放置指令所指示的放置位置显示目标虚拟道具。
在一种可能的实施方式中,当头戴式设备接收到道具放置指令时,将目标虚拟道具脱离虚拟射线,固定放置在当前位置。其中,道具放置指令是头戴式设备或其它设备在接收到道具放置操作时生成的指令,可选的,道具放置操作为单击操作、双击操作或长按操作等,或者头戴式设备在第二控制指令结束时确定接收到道具放置指令,即用户停止将目标虚拟道具显示在虚拟射线与真实环境画面的交点处即可放置目标虚拟道具。头戴式设备基于自身接收到的触发操作或基于检测到的用户手势生成道具放置指令,或者,头戴式设备接收其它设备所发送的道具放置指令,本申请实施例对此不作限定。
例如,当头戴式设备通过控制设备接收控制指令和射线调整数据时,若控制设备接收到长按操作,则向头戴式设备发送道具选择指令,当控制设备检测到长按操作结束时,向头戴式设备发送道具放置指令,即用户在真实环境画面中添加虚拟道具时,通过长按并移动控制设备,移动虚拟道具,在将虚拟道具移动至想要放置的位置时,松手即可使虚拟道具固定显示在当前位置处。
示意性的,图7示出了添加并防止目标虚拟对象过程的示意图。当虚拟射线702与道具选择控件701相交,且头戴式设备接收到控制设备发送的道具选择指令时,将虚拟道具703显示在虚拟射线702与真实环境画面的交点处,并且基于控制设备发送的射线调整数据,移动虚拟道具703和虚拟射线702,图中虚线部分表示移动过程中的虚拟道具703和虚拟射线702,当虚拟道具703和虚拟射线702移动至图7中实线所在位置时,头戴式设备接收到控制设备发送的道具放置指令,则将虚拟道具703显示在射线调整数据所指示的显示位置处,将虚拟道具703固定显示在该位置,并且后续虚拟射线702移动时虚拟道具703并不会随之移动。
为了方便用户快速掌握添加虚拟道具的交互方式,头戴式设备在真实环境画面上叠加显示操作提示信息,如图7中的“长按拖动模型,松开放置”等。
步骤607,基于射线调整数据,在真实环境画面中将虚拟射线移动至与真实环境画面中的已添加道具相交。
步骤608,响应于第二控制指令,将已添加道具显示在虚拟射线与真实环境画面的交点处。
在一种可能的实施方式中,用户可以通过头戴式设备在添加虚拟道具时移动虚拟道具,还可以对已经放置在真实环境画面中的虚拟道具(即已添加道具)进行移动。当虚拟射线与真实环境画面中的已添加道具相交,且头戴式设备接收到第二控制指令时,将已添加道具显示在虚拟射线与真实环境画面的交点处。其中,第二控制指令是头戴式设备或其它设备在接收到道具移动操作时生成的指令,可选的,道具移动操 作为单击操作、双击操作、长按操作或者预设手势等,头戴式设备基于自身接收到的触发操作或基于检测到的用户手势生成第二控制指令,或者,头戴式设备接收其它设备所发送的第二控制指令,本申请实施例对此不作限定。
步骤609,基于第二控制指令后的射线调整数据,移动虚拟射线和已添加道具。
当已添加道具显示在虚拟射线与真实环境画面的交点处时,头戴式设备基于接收到的射线调整数据对二者进行移动显示。在一种可能的实施方式中,该射线调整数据包括移动方向和移动距离等。
步骤610,基于射线调整数据,在真实环境画面中将虚拟射线移动至与目标虚拟道具相交。
步骤611,响应于第三控制指令,显示目标虚拟道具对应的编辑控件。
头戴式设备除了能够对虚拟道具进行添加和移动之外,还可以对虚拟道具进行编辑操作,用户通过编辑虚拟道具使其达到期望的显示效果。在一种可能的实施方式中,当虚拟射线与目标虚拟道具相交,且头戴式设备接收到第三控制指令时,显示目标虚拟道具对应的编辑控件,即用户可以通过控制虚拟射线与目标虚拟道具相交并进行道具编辑操作,使头戴式设备接收第三控制指令,从而使虚拟道具处于编辑状态。其中,道具编辑操作为单击操作、双击操作或长按操作等,本申请实施例对此不作限定。
示意性的,如图8所示,头戴式设备通过控制设备接收控制指令和射线调整数据,虚拟射线802与虚拟道具801相交,当头戴式设备接收到控制设备发送的第三控制指令时,显示虚拟道具801所对应的编辑控件。
可选的,真实环境画面中的多个虚拟道具可以同时处于编辑状态,各个虚拟道具都对应有编辑控件,例如,目标虚拟道具所对应的编辑控件与目标虚拟道具的相对位置固定(比如编辑控件显示在目标虚拟道具的正面),并随目标虚拟道具的移动而移动,方便用户灵活编辑多个虚拟道具,简化用户操作。当虚拟射线与目标虚拟道具相交,且目标虚拟道具处于编辑状态时,若接收到第三控制指令,则取消目标虚拟道具的编辑状态,即取消显示目标虚拟道具对应的编辑控件。
步骤612,基于射线调整数据,将虚拟射线移动至与目标编辑控件相交。
步骤613,响应于第四控制指令,基于目标编辑控件对应的编辑方式对目标虚拟道具进行编辑。
在一种可能的实施方式中,目标虚拟道具对应有不同功能的编辑控件,用户通过控制虚拟射线与目标编辑控件相交,并通过触发操作使头戴式视听设备接收第四控制指令,使头戴式设备按照目标编辑控件对应的编辑方式对目标虚拟道具进行编辑。例如,编辑控件包括删除控件、放大控件和缩小控件中的至少一种,步骤613包括如下步骤:
步骤613a,响应于目标编辑控件为删除控件,且接收到第四控制指令,删除目标虚拟对象。
如图8所示,当虚拟射线802与删除控件803相交,且头戴式设备接收到第三控制指令时,将目标虚拟对象从真实环境画面中删除。当头戴式设备通过控制设备接收控制指令和射线调整数据时,第四控制指令是控制设备在接收到编辑控件触发操作时生成的指令,编辑控件触发操作可以是单击操作、双击操作、长按操作等,本申请实施例对此不作限定。
步骤613b,响应于目标编辑控件为放大控件,且接收到第四控制指令,将目标虚拟对象放大至预设放大倍数。
如图8所示,当虚拟射线802与放大控件804相交,且头戴式设备接收到第四控制指令时,将目标虚拟对象放大至预设放大倍数。当头戴式设备通过控制设备接收控制指令和射线调整数据时,第四控制指令为控制设备接收到编辑控件触发操作时生成的指令,其中编辑控件触发操作可能包含不同的操作类型,例如,当编辑控件触发操作为单击操作时,头戴式设备基于第四控制指令触发一次放大控件804,即对目标虚拟道具801进行一次放大操作;当编辑控件触发操作为长按操作时,头戴式设备基于第四控制指令持续触发放大控件804,即对目标虚拟道具801进行连续放大,当长按操作停止时,控制设备向头戴式设备发送编辑结束指令,使头戴式设备停止放大目标虚拟道具。本申请实施例对此不作限定。
步骤613c,响应于目标编辑控件为缩小控件,且接收到第四控制指令,将目标虚拟对象缩小至预设缩小倍数。
相应的,如图8所示,当虚拟射线802与缩小控件805相交,且头戴式设备接收到第四控制指令时,将目标虚拟对象缩小至预设缩小倍数。
本申请实施例中,当虚拟射线与目标道具选择控件相交时,通过对目标道具选择控件进行突出显示,使用户得知当前虚拟射线所选中的道具选择控件,无需仔细观察虚拟射线与目标道具选择控件的位置以确保相交,方便用户操作;头戴式设备在虚拟射线与目标道具选择控件相交,且接收到道具选择指令时,将目标虚拟道具显示在虚拟射线与真实环境画面的交点处,用户只需改变虚拟射线的射线方向即可移动目标虚拟道具,提高了虚拟道具的控制效率和操作的准确性。
上述实施例示出了用户通过控制设备和头戴式设备添加以及编辑虚拟道具的过程,在一种可能的实施 方式中,头戴式设备还可以基于虚拟射线交互和其它控制指令完成拍摄功能。在图4的基础上,图9示出了本申请另一个示例性实施例提供的真实环境画面中虚拟道具的显示方法的流程图。本实施例以该方法用于头戴式设备为例进行说明,该方法在步骤403之后,还包括如下步骤:
步骤404,基于射线调整数据,在真实环境画面中将虚拟射线移动至与真实环境画面上叠加显示的拍摄控件相交。
步骤405,响应于第五控制指令,对真实环境画面和虚拟道具进行拍摄。
头戴式设备以虚拟射线与拍摄控件相交作为选中拍摄控件的条件,即用户在需要进行拍摄时,先通过改变虚拟射线的射线方向,使虚拟射线指向拍摄控件,并与拍摄控件相交,再通过用户操作使头戴式设备接收到第五控制指令,从而使头戴式设备对真实环境画面和虚拟道具进行拍摄。
在一种可能的实施方式中,头戴式设备完成拍摄后,将拍摄得到的图像或视频等内容自动存储至预设存储位置,便于用户将拍摄的到的内容传输至其它设备。
可选的,头戴式设备拍摄完成后,展示拍摄得到的内容;或者,头戴式设备拍摄完成后,继续在真实环境画面中叠加显示接收到第五控制指令时的场景编辑界面、虚拟道具的虚拟射线。本申请实施例对此不作限定。
在一种可能的实施方式中,步骤405包括如下步骤:
步骤405a,响应于第五控制指令,将拍摄控件从默认显示状态切换至拍摄显示状态。
为了方便用户确认当前头戴式设备是否正在拍摄,当虚拟射线与拍摄控件相交(虚拟射线与场景编辑界面的交点位于拍摄控件边缘或内部)且接收到第五控制指令时,头戴式设备将拍摄控件从默认显示状态切换至拍摄显示状态。可选的,拍摄显示状态下的拍摄控件相对于默认显示状态下的拍摄控件,控件图形、控件尺寸、控件显示颜色以及控件显示效果等至少一种元素不同。拍摄期间,拍摄控件始终保持拍摄显示状态,直至拍摄结束。
步骤405b,基于第五控制指令的指令类型确定目标拍摄方式。
在一种可能的实施方式中,头戴式设备中的拍摄方式包括拍摄图像和录制视频,为了对两种拍摄方式的交互方式进行区分,方便用户根据需求快速选择目标拍摄方式,两种拍摄方式对应的触发操作不同,其对应的第五控制指令的类型也不同,头戴式设备或控制设备基于接收到的触发操作的操作类型生成对应的第五控制指令,头戴式设备基于第五控制指令的指令类型确定目标拍摄方式,例如用户对拍摄控件的单击操作触发图像拍摄,对拍摄控件的长按操作触发视频录制。步骤405b包括如下步骤:
步骤一,响应于第五控制指令为拍照指令,确定拍摄方式为拍摄图像。
拍照指令为控制设备接收到拍照操作时生成的指令,当头戴式设备确定虚拟射线与拍摄控件相交,且基于第五控制指令确定接收到拍照操作时,确定拍摄方式为拍摄图像。其中,拍照操作包括单击操作、双击操作、长按操作或预设手势等。
步骤二,响应于第五控制指令为录像指令,确定拍摄方式为录制视频,并通过拍摄控件展示录制进度。
录像指令为控制设备接收到录像操作时生成的指令,当头戴式设备确定虚拟射线与拍摄控件相交,且基于第五控制指令确定控制设备接收到录像操作时,确定拍摄方式为录制视频。其中,录像操作包括单击操作、双击操作、长按操作或预设手势等。
拍照操作与录像操作的操作类型不同,例如,拍照操作为单击操作,录像操作为长按操作,且录像时长等于长按操作的按压时长。
示意性的,如图10所示,不同拍摄方式下拍摄控件的拍摄显示状态不同。当拍摄方式为拍摄图像时,头戴式设备在拍摄瞬间将默认显示状态下的拍摄控件1001a切换为拍摄显示状态下的拍摄控件1001b,并在拍摄结束的瞬间将其恢复至默认显示状态下的拍摄控件1001a;当拍摄方式为录像时,头戴式设备,在拍摄瞬间将默认显示状态下的拍摄控件1001a切换为拍摄显示状态下的拍摄控件1001c,并且录制期间拍摄控件中显示有录制进度条,拍摄控件上显示有拍摄时长。
步骤405c,采用目标拍摄方式对真实环境画面和虚拟道具进行拍摄。
在一种可能的实施方式中,头戴式设备采用目标拍摄方式对真实环境画面和虚拟道具进行拍摄,即拍摄得到的图像或视频中,既包含真实环境画面也包含虚拟道具,但并不包含虚拟射线、场景编辑界面以及其它控件、虚拟内容等。
步骤405d,在场景编辑界面上叠加显示拍摄预览内容。
为了方便用户及时查看拍摄效果,头戴式设备在拍摄完成后,在场景编辑界面上叠加显示拍摄预览内容,例如在场景编辑界面上叠加显示预览图像。在一种可能的实施方式中,头戴式设备在预设时长内显示拍摄预览内容,并在拍摄预览内容的显示时长达到预设时长后,自动取消显示拍摄预览内容,并返回场景编辑界面,无需用户手动关闭预览内容,简化用户操作。
示意性的,如图11所示,当虚拟射线1101与拍摄控件1102相交,且接收到拍照指令时,头戴式设 备对真实环境画面和虚拟道具进行拍摄,得到相应的图像,并在场景编辑界面1103上叠加显示拍摄预览内容1104。
本申请实施例中,当虚拟射线与拍摄控件相交且接收到第五控制指令时,头戴式设备将拍摄控件从默认显示状态切换至拍摄显示状态,从而方便用户确认当前头戴式设备是否正在拍摄;对于不同拍摄方式对应的交互方式进行区分,方便用户根据需求快速选择目标拍摄方式;并且,拍摄完成后在场景编辑界面上叠加显示拍摄预览内容,方便用户及时查看拍摄效果。
上述实施例说明了在同一场景下进行虚拟道具的添加、编辑以及拍摄过程的交互操作,在一种可能的实施方式中,本申请实施例提供的相机应用中包含不同主题的虚拟场景,用户可以根据自己的喜好选择相应的场景进行体验。图12示出了本申请另一个示例性实施例提供的真实环境画面中虚拟道具的显示方法的流程图。本实施例以该方法用于头戴式设备为例进行说明,该方法包括如下步骤:
步骤1201,在真实环境画面上叠加显示场景选择界面,场景选择界面中包含至少一种主题的场景选择控件。
在一种可能的实施方式中,头戴式设备开启相机应用后,首先在真实环境画面上叠加显示场景选择界面,其中包含至少一种主题的场景选择控件,提示用户选择其中一种主题进行场景设置和拍摄体验,不同主题的场景中包含的虚拟道具不同。
步骤1202,基于射线调整数据,在真实环境画面中将虚拟射线移动至与目标场景选择控件相交。
步骤1203,响应于第六控制指令,在真实环境画面上叠加显示虚拟射线以及目标场景选择控件对应的场景编辑界面。
在一种可能的实施方式中,第六控制指令为控制设备或头戴式设备接收到场景选择操作时生成的指令,当头戴式设备确定虚拟射线与目标场景选择控件相交,且基于第六控制指令确定接收到场景选择操作时,在真实环境画面上叠加显示虚拟射线以及目标场景选择控件对应的场景编辑界面。其中,场景选择操作包括单击操作、双击操作、长按操作或预设手势等,本申请实施例对此不作限定。
示意性的,如图13所示,真实环境画面上叠加显示有3种不同主题的场景选择控件,虚拟射线1301与场景选择控件1302相交,且头戴式设备接收到第六控制指令时,触发场景选择控件1302,将场景选择界面切换为场景编辑界面。
步骤1204,基于射线调整数据,在真实环境画面中将虚拟射线移动至与目标道具选择控件相交。
步骤1205,响应于第一控制指令,在真实环境画面中显示目标道具选择控件对应的目标虚拟道具。
步骤1204至步骤1205的具体实施方式可以参考上述步骤402至步骤403,本申请实施例在此不再赘述。
步骤1206,基于射线调整数据,在真实环境画面中将虚拟射线移动至与场景切换控件相交。
步骤1207,响应于第七控制指令,在真实环境画面上叠加显示场景选择列表,场景选择列表中包含至少一种主题的场景选择控件。
为了便于用户在体验某一主题的虚拟场景过程中切换至其它主题的虚拟场景,头戴式设备所显示的场景编辑界面中还包含场景切换控件,用户可以通过触发场景切换控件,使头戴式设备显示场景选择列表,并通过场景选择列表中的场景选择控件开启其它场景,无需返回至场景选择界面进行重新选择,简化了用户操作。
示意性的,如图14所示,头戴式设备通过控制设备接收控制指令和射线调整数据,头戴式设备在场景编辑界面中显示场景切换控件1401,当虚拟射线1402与场景切换控件1401相交,且接收到控制设备发送的第七控制指令时,头戴式设备在真实环境画面上叠加显示场景选择列表1403。
步骤1208,基于射线调整数据,在真实环境画面中将虚拟射线移动至与目标场景选择控件相交。
步骤1209,响应于第八控制指令,在真实环境画面上叠加显示目标场景选择控件对应的场景编辑界面。
在一种可能的实施方式中,头戴式设备从将场景编辑界面切换至场景选择列表后,保留当前真实环境画面中已放置的虚拟道具,从而使用户可以利用不同主题场景中的虚拟道具设置虚拟场景。
如图14所示,头戴式设备通过控制设备接收控制指令和射线调整数据,当虚拟射线1402与场景选择控件1404相交,且接收到控制设备发送的第八控制指令时,头戴式设备在真实环境画面上叠加显示目标场景选择控件对应的场景编辑界面。
本申请实施例中的相机应用中包含不同主体的虚拟场景,用户可以根据自己的喜好选择相应主题的场景进行体验;头戴式设备所显示的场景编辑界面中还包含场景切换控件,当虚拟射线与场景切换控件相交,且接收到第七控制指令时,头戴式设备显示场景选择列表,头戴式设备通过触发场景选择列表中的场景选择控件开启其它场景,从而满足用户在体验某一主题的虚拟场景过程中切换至其它主题的虚拟场景的需求,无需返回至场景选择界面进行重新选择,简化了用户操作。
图15示出了本申请一个示例性实施例提供的真实环境画面中虚拟道具的显示装置的结构框图。该装置可以通过软件、硬件或者两者的结合实现成为终端的全部或一部分。该装置包括:
第一显示模块1501,用于在真实环境画面上叠加显示场景编辑界面和虚拟射线,所述场景编辑界面中包含道具选择列表,所述道具选择列表中包含至少一种虚拟道具对应的道具选择控件;
第一调整模块1502,用于基于射线调整数据,在所述真实环境画面中将所述虚拟射线移动至与目标道具选择控件相交;
第二显示模块1503,用于响应于第一控制指令,在所述真实环境画面中显示所述目标道具选择控件对应的目标虚拟道具。
可选的,所述第一控制指令包括道具选择指令和道具放置指令;
所述第二显示模块1503,包括:
第一显示单元,用于响应于所述虚拟射线与所述目标道具选择控件相交,对所述目标道具选择控件进行突出显示;
第二显示单元,用于响应于所述道具选择指令,将所述目标虚拟道具显示在所述虚拟射线与所述真实环境画面的交点处;
移动单元,用于基于所述道具选择指令后的所述射线调整数据,移动所述虚拟射线和所述目标虚拟道具;
解除单元,用于响应于所述道具放置指令,在所述道具放置指令所指示的放置位置显示所述目标虚拟道具。
可选的,所述装置还包括:
第二调整模块,用于基于所述射线调整数据,在所述真实环境画面中将所述虚拟射线移动至与所述真实环境画面中的已添加道具相交;
第三显示模块,用于响应于第二控制指令,将所述已添加道具显示在所述虚拟射线与所述真实环境画面的交点处;移动模块,用于基于所述第二控制指令后的所述射线调整数据,移动所述虚拟射线和所述已添加道具。
可选的,所述装置还包括:
第三调整模块,用于基于所述射线调整数据,在所述真实环境画面中将所述虚拟射线移动至与所述目标虚拟道具相交;
第四显示模块,用于响应于第三控制指令,显示所述目标虚拟道具对应的编辑控件;
第四调整模块,用于基于所述射线调整数据,在所述真实环境画面中将所述虚拟射线移动至与目标编辑控件相交;
编辑模块,用于响应于第四控制指令,基于所述目标编辑控件对应的编辑方式对所述目标虚拟道具进行编辑。
可选的,所述道具编辑控件包括删除控件、放大控件和缩小控件中的至少一种;
所述编辑模块,包括:
第一编辑单元,用于响应于所述目标编辑控件为所述删除控件,且接收到所述第四控制指令,删除所述目标虚拟对象;
第二编辑单元,用于响应于所述目标编辑控件为所述放大控件,且接收到所述第四控制指令,将所述目标虚拟对象放大至预设放大倍数;
第三编辑单元,用于响应于所述目标编辑控件为所述缩小控件,且接收到所述第四控制指令,将所述目标虚拟对象缩小至预设缩小倍数。
可选的,所述装置还包括:
第五调整模块,用于基于射线调整数据,在所述真实环境画面中将所述虚拟射线移动至与所述真实环境画面上叠加显示的拍摄控件相交;
拍摄模块,用于响应于第五控制指令,对所述真实环境画面和所述虚拟道具进行拍摄。
可选的,所述拍摄模块,包括:
第三显示单元,用于响应于所述第五控制指令,将所述拍摄控件从默认显示状态切换至拍摄显示状态;
确定单元,用于基于所述第五控制指令的指令类型确定目标拍摄方式;
拍摄单元,用于采用所述目标拍摄方式对所述真实环境画面和所述虚拟道具进行拍摄;
第四显示单元,用于在所述场景编辑界面上叠加显示拍摄预览内容。
可选的,所述确定单元,还用于:
响应于所述第五控制指令为拍照指令,确定所述拍摄方式为拍摄图像;
响应于所述第五控制指令为录像指令,确定所述拍摄方式为录制视频,并通过所述拍摄控件展示录制进度。
可选的,所述头戴式设备与控制设备之间建立有数据连接,所述控制设备用于向所述头戴式设备发送所述射线调整数据以及控制指令,所述虚拟射线的射线方向为所述控制设备的设备指向。
可选的,所述装置还包括:
第五显示模块,用于在所述真实环境画面上叠加显示场景选择界面,所述场景选择界面中包含至少一种主题的场景选择控件;
所述第一显示模块1501,包括:
第一调整单元,用于基于所述射线调整数据,在所述真实环境画面中将所述虚拟射线移动至与目标场景选择控件相交;
第五显示单元,用于响应于第六控制指令,在所述真实环境画面上叠加显示所述虚拟射线以及所述目标场景选择控件对应的所述场景编辑界面。
可选的,所述场景编辑界面中显示有场景切换控件;
所述装置还包括:
第六调整模块,用于基于射线调整数据,在所述真实环境画面中将所述虚拟射线移动至与所述场景切换控件相交;第六显示模块,用于响应于第七控制指令,在所述真实环境画面上叠加显示场景选择列表,所述场景选择列表中包含至少一种主题的场景选择控件;
第七调整模块,用于基于所述射线调整数据,在所述真实环境画面中将所述虚拟射线移动至与目标场景选择控件相交;第七显示模块,用于响应于第八控制指令,在所述真实环境画面上叠加显示所述目标场景选择控件对应的所述场景编辑界面。
综上所述,本申请实施例中,通过实时显示虚拟射线,并基于射线调整数据移动显示虚拟射线,以指示控制操作的触发位置,使用户能够通过观察虚拟射线的位置和方向实时掌握触发位置,用户只需使虚拟射线与虚拟道具、控件等需要控制的对象相交,即可快速通过头戴式虚拟设备控制虚拟道具,提高了虚拟道具的控制效率和操作的准确性。
如图16所示,本申请实施例提供一种真实环境画面中虚拟道具的显示***的结构框图,真实环境画面中虚拟道具的显示***包括头戴式设备1600和控制设备1700,所述头戴式设备1600可以包括以下一个或多个组件:处理器1601,存储器1602,电源组件1603,多媒体组件1604,音频组件1605,输入/输出(Input/Output,I/O)接口1606,传感器组件1607以及通信组件1608。
处理器1601通常控制头戴式设备的整体操作,诸如与显示,电话呼叫,数据通信,相机操作和记录操作相关联的操作。处理器1601可以包括一个或者多个处理核心。处理器1601利用各种接口和线路连接整个设备1600内的各个部分,通过运行或执行存储在存储器1602内的指令、程序、代码集或指令集,以及调用存储在存储器1602内的数据,执行终端1600的各种功能和处理数据。可选地,处理器1601可以采用数字信号处理(Digital Signal Processing,DSP)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、可编程逻辑阵列(Programmable Logic Array,PLA)中的至少一种硬件形式来实现。处理器1601可集成中央处理器(Central Processing Unit,CPU)、图像处理器(Graphics Processing Unit,GPU)和调制解调器等中的一种或几种的组合。其中,CPU主要处理操作***、用户界面和应用程序等;GPU用于负责屏幕所需要显示的内容的渲染和绘制;调制解调器用于处理无线通信。可以理解的是,上述调制解调器也可以不集成到处理器1601中,单独通过一块通信芯片进行实现。
存储器1602被配置为存储各种类型的数据以支持在头戴式设备的操作。这些数据的示例包括用于在头戴式设备上操作的任何应用程序或方法的指令,模型,联系人数据,电话簿数据,消息,图像,视频等。存储器1602可以包括随机存储器(Random Access Memory,RAM),也可以包括只读存储器(Read-Only Memory,ROM)。可选地,该存储器1602包括非瞬时性计算机可读介质(non-transitory computer-readable storage medium)。存储器1602可用于存储指令、程序、代码、代码集或指令集。存储器1602可包括存储程序区和存储数据区,其中,存储程序区可存储用于实现操作***的指令、用于实现至少一个功能的指令(比如触控功能、声音播放功能、图像播放功能等)、用于实现上述各个方法实施例的指令等,该操作***可以是安卓(Android)***(包括基于Android***深度开发的***)、苹果公司开发的IOS***(包括基于IOS***深度开发的***)或其它***。存储数据区还可以存储终端1600在使用中所创建的数据(比如电话本、音视频数据、聊天记录数据)等。
电源组件1603为头戴式设备1600的各种组件提供电力。电源组件1603可以包括电源管理***,一个或多个电源,及其他与为头戴式设备1600生成、管理和分配电力相关联的组件。
多媒体组件1604包括在所述头戴式设备1600和用户之间的提供一个输出接口的屏幕。在一些实施例 中,屏幕可以包括液晶显示器(Liquid Crystal Display,LCD)和触摸面板(Touch Panel,TP)。如果屏幕包括触摸面板,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的持续时间和压力。在一些实施例中,多媒体组件1604包括一个前置摄像头和/或后置摄像头。当头戴式视听设备1600处于操作模式,如拍摄模式或视频模式时,前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜***或具有焦距和光学变焦能力。
音频组件1605被配置为输出和/或输入音频信号。例如,音频组件1605包括一个麦克风(Microphone,MIC),当头戴式设备1600处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器1602或经由通信组件1608发送。在一些实施例中,音频组件1605还包括一个扬声器,用于输出音频信号。
I/O接口1606为处理器1601和***接口模块之间提供接口,上述***接口模块可以是键盘,点击轮,按钮、触控面板等。这些按钮可包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。
传感器组件1607包括一个或多个传感器,用于为头戴式设备1600提供各个方面的状态评估。例如,传感器组件1607可以检测到头戴式设备1600的打开/关闭状态,组件的相对定位,例如所述组件为头戴式设备1600的显示屏和小键盘,传感器组件1607还可以检测头戴式设备1600或头戴式视听设备1600一个组件的位置改变,用户与头戴式设备1600接触的存在或不存在,头戴式设备1600方位或加速/减速和头戴式设备1600的温度变化。传感器组件1607可以包括接近传感器,被配置用来在没有任何的物理接触时检测附近物体的存在。传感器组件1607还可以包括光传感器,用于在成像应用中使用。在一些实施例中,该传感器组件1607还可以包括加速度传感器,陀螺仪传感器,磁传感器,压力传感器或温度传感器。例如,头戴式视听设备1600通过压力传感器确定控制操作的操作类型。
通信组件1608被配置为便于头戴式设备1600和其它设备(例如控制设备)之间有线或无线方式的通信。头戴式设备1600可以接入基于通信标准的无线网络。在一个示例性实施例中,通信组件1608经由广播信道接收来自外部广播管理***的广播信号或广播相关信息。在一个示例性实施例中,所述通信组件1608还包括近场通信(Near Field Communication,NFC)模块,以促进短程通信。例如,在NFC模块可基于射频识别(Radio Frequency Identification,RFID)技术,红外数据协会(Infrared Data Association,IrDA)技术,超宽带(Ultra Wide Band,UWB)技术,蓝牙(Blue Tooth,BT)技术和其他技术来实现。头戴式设备1600通过通信组件1608同步接收控制设备发送的信息,例如控制设备接收到的作用于触控区域的触控操作。
除此之外,本领域技术人员可以理解,上述附图所示出的设备1600的结构并不构成对设备1600的限定,设备可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
头戴式设备1600和控制设备1700之间通过数据线、WiFi热点或蓝牙等方式建立连接。
所述控制设备1700可以包括以下一个或多个组件:处理器1710、存储器1720和显示屏1730。
处理器1710可以包括一个或者多个处理核心。处理器1710利用各种接口和线路连接整个终端1700内的各个部分,通过运行或执行存储在存储器1720内的指令、程序、代码集或指令集,以及调用存储在存储器1720内的数据,执行终端1700的各种功能和处理数据。可选地,处理器1710可以采用DSP、FPGA、PLA中的至少一种硬件形式来实现。处理器1710可集成CPU、GPU、神经网络处理器(Neural-network Processing Unit,NPU)和调制解调器等中的一种或几种的组合。其中,CPU主要处理操作***、用户界面和应用程序等;GPU用于负责触摸显示屏1730所需要显示的内容的渲染和绘制;NPU用于实现人工智能(Artificial Intelligence,AI)功能;调制解调器用于处理无线通信。可以理解的是,上述调制解调器也可以不集成到处理器1710中,单独通过一块芯片进行实现。
存储器1720可以包括RAM,也可以包括ROM。可选地,该存储器1720包括non-transitory computer-readable storage medium。存储器1720可用于存储指令、程序、代码、代码集或指令集。存储器1720可包括存储程序区和存储数据区,其中,存储程序区可存储用于实现操作***的指令、用于至少一个功能的指令(比如触控功能、声音播放功能、图像播放功能等)、用于实现上述述各个方法实施例的指令等;存储数据区可存储根据终端1700的使用所创建的数据(比如音频数据、电话本)等。
显示屏1730是用于显示用户界面的显示组件。可选的,该显示屏1730还具有触控功能,通过触控功能,用户可以使用手指、触摸笔等任何适合的物体在显示屏1730上进行触控操作。
显示屏1730通常设置在终端1730的前面板。显示屏1730可被设计成为全面屏、曲面屏、异型屏、双面屏或折叠屏。显示屏1730还可被设计成为全面屏与曲面屏的结合,异型屏与曲面屏的结合,本实施例对此不加以限定。
除此之外,本领域技术人员可以理解,上述附图所示出的终端1700的结构并不构成对终端1700的限定,终端可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。比如,终端1700 中还包括摄像组件、麦克风、扬声器、射频电路、输入单元、传感器(比如加速度传感器、角速度传感器、光线传感器等等)、音频电路、WiFi模块、电源、蓝牙模块等部件,在此不再赘述。
本申请实施例还提供了一种计算机可读存储介质,该计算机可读存储介质存储有至少一条指令,所述至少一条指令由处理器加载并执行以实现如上各个实施例所述的真实环境画面中虚拟道具的显示方法。
根据本申请的一个方面,提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。头戴式设备或控制设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该头戴式设备或控制设备执行上述方面的各种可选实现方式中提供的真实环境画面中虚拟道具的显示方法。
本领域技术人员应该可以意识到,在上述一个或多个示例中,本申请实施例所描述的功能可以用硬件、软件、固件或它们的任意组合来实现。当使用软件实现时,可以将这些功能存储在计算机可读存储介质中或者作为计算机可读存储介质上的一个或多个指令或代码进行传输。计算机可读存储介质包括计算机存储介质和通信介质,其中通信介质包括便于从一个地方向另一个地方传送计算机程序的任何介质。存储介质可以是通用或专用计算机能够存取的任何可用介质。
以上所述仅为本申请的可选实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (20)

  1. 一种真实环境画面中虚拟道具的显示方法,所述方法用于头戴式设备,所述方法包括:
    在真实环境画面上叠加显示场景编辑界面和虚拟射线,所述场景编辑界面中包含道具选择列表,所述道具选择列表中包含至少一种虚拟道具对应的道具选择控件;
    基于射线调整数据,在所述真实环境画面中将所述虚拟射线移动至与目标道具选择控件相交;
    响应于第一控制指令,在所述真实环境画面中显示所述目标道具选择控件对应的目标虚拟道具。
  2. 根据权利要求1所述的方法,其中,所述第一控制指令包括道具选择指令和道具放置指令;
    所述响应于第一控制指令,在所述真实环境画面中显示所述目标道具选择控件对应的目标虚拟道具,包括:
    响应于所述虚拟射线与所述目标道具选择控件相交,对所述目标道具选择控件进行突出显示;
    响应于所述道具选择指令,将所述目标虚拟道具显示在所述虚拟射线与所述真实环境画面的交点处;
    基于所述道具选择指令后的所述射线调整数据,移动所述虚拟射线和所述目标虚拟道具;
    响应于所述道具放置指令,在所述道具放置指令所指示的放置位置显示所述目标虚拟道具。
  3. The method according to claim 1, wherein after displaying, in response to the first control instruction, the target virtual prop corresponding to the target prop selection control in the real environment image, the method comprises:
    moving, based on the ray adjustment data, the virtual ray in the real environment image to intersect with an added prop in the real environment image;
    displaying, in response to a second control instruction, the added prop at the intersection point of the virtual ray and the real environment image; and
    moving the virtual ray and the added prop based on the ray adjustment data received after the second control instruction.
  4. The method according to claim 1, wherein after displaying the scene editing interface and the virtual ray superimposed on the real environment image, the method further comprises:
    moving, based on the ray adjustment data, the virtual ray in the real environment image to intersect with the target virtual prop;
    displaying, in response to a third control instruction, editing controls corresponding to the target virtual prop;
    moving, based on the ray adjustment data, the virtual ray in the real environment image to intersect with a target editing control; and
    editing, in response to a fourth control instruction, the target virtual prop according to the editing mode corresponding to the target editing control.
  5. The method according to claim 4, wherein the editing controls comprise at least one of a delete control, a zoom-in control and a zoom-out control;
    and editing, in response to the fourth control instruction, the target virtual prop according to the editing mode corresponding to the target editing control comprises:
    deleting the target virtual prop in response to the target editing control being the delete control and the fourth control instruction being received;
    enlarging the target virtual prop to a preset magnification in response to the target editing control being the zoom-in control and the fourth control instruction being received; and
    reducing the target virtual prop to a preset reduction factor in response to the target editing control being the zoom-out control and the fourth control instruction being received.
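Purely as an illustration of claim 5 (not part of the claim language), the dispatch on the target editing control might look as follows; the preset magnification and reduction factors are assumed values, since the claim only requires that some preset factors exist.

```python
# Hypothetical preset factors for the example.
PRESET_MAGNIFICATION = 1.25
PRESET_REDUCTION = 0.8

def apply_edit(props: dict, prop_id: str, target_editing_control: str) -> None:
    """Edit the target virtual prop according to the intersected editing control."""
    if target_editing_control == "delete":
        props.pop(prop_id, None)                        # remove the prop from the scene
    elif target_editing_control == "zoom_in":
        props[prop_id]["scale"] *= PRESET_MAGNIFICATION
    elif target_editing_control == "zoom_out":
        props[prop_id]["scale"] *= PRESET_REDUCTION

# Usage: the fourth control instruction arrives while the ray intersects "zoom_in".
scene_props = {"lantern-1": {"scale": 1.0}}
apply_edit(scene_props, "lantern-1", "zoom_in")
print(scene_props["lantern-1"]["scale"])   # 1.25
```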
  6. The method according to any one of claims 1 to 5, wherein after displaying, in response to the first control instruction, the target virtual prop corresponding to the target prop selection control in the real environment image, the method further comprises:
    moving, based on the ray adjustment data, the virtual ray in the real environment image to intersect with a shooting control displayed superimposed on the real environment image; and
    shooting, in response to a fifth control instruction, the real environment image and the virtual props.
  7. The method according to claim 6, wherein shooting, in response to the fifth control instruction, the real environment image and the virtual props comprises:
    switching, in response to the fifth control instruction, the shooting control from a default display state to a shooting display state;
    determining a target shooting mode based on the instruction type of the fifth control instruction;
    shooting the real environment image and the virtual props using the target shooting mode; and
    displaying shooting preview content superimposed on the scene editing interface.
  8. The method according to claim 7, wherein determining the target shooting mode based on the instruction type of the fifth control instruction comprises:
    determining, in response to the fifth control instruction being a photographing instruction, that the target shooting mode is capturing an image; and
    determining, in response to the fifth control instruction being a video recording instruction, that the target shooting mode is recording a video, and displaying the recording progress through the shooting control.
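As a non-limiting sketch of claim 8, the instruction type of the fifth control instruction selects between capturing a single composited image and recording a video whose progress is reported through the shooting control. The instruction-type strings, the recording duration and the callbacks are assumptions for the example.

```python
import time

def determine_shooting_mode(instruction_type: str) -> str:
    """Map the fifth control instruction's type to the target shooting mode."""
    return "capture_image" if instruction_type == "photo" else "record_video"

def run_shooting(instruction_type: str, capture_frame, update_control_progress) -> None:
    """Shoot the real environment image together with the virtual props."""
    mode = determine_shooting_mode(instruction_type)
    if mode == "capture_image":
        capture_frame()                                   # single composited frame
        return
    duration, step = 3.0, 0.5                             # illustrative recording length
    elapsed = 0.0
    while elapsed < duration:
        capture_frame()                                   # append composited frame to the video
        elapsed += step
        update_control_progress(elapsed / duration)       # recording progress on the shooting control
        time.sleep(step)

# Usage (illustrative): a long press is treated as a video recording instruction.
# run_shooting("video", capture_frame=lambda: None,
#              update_control_progress=lambda p: print(f"progress {p:.0%}"))
```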
  9. The method according to any one of claims 1 to 5, wherein a data connection is established between the head-mounted device and a control device, the control device is configured to send the ray adjustment data and control instructions to the head-mounted device, and the ray direction of the virtual ray is the pointing direction of the control device.
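Purely for illustration of claim 9 (not part of the claim language), if the control device reports its orientation as a unit quaternion, the pointing direction used as the ray direction can be obtained by rotating a local forward axis. The choice of −Z as the local forward axis is an assumption, not something stated in the claim.

```python
def forward_from_quaternion(w: float, x: float, y: float, z: float) -> tuple:
    """Pointing direction of the control device, taken as the ray direction.

    Rotates the assumed local forward axis (0, 0, -1) by the controller's unit
    orientation quaternion (w, x, y, z).
    """
    fx = -(2.0 * (x * z + w * y))
    fy = -(2.0 * (y * z - w * x))
    fz = -(1.0 - 2.0 * (x * x + y * y))
    return (fx, fy, fz)

# Identity orientation: the controller points straight ahead along -Z.
print(forward_from_quaternion(1.0, 0.0, 0.0, 0.0))   # (-0.0, -0.0, -1.0)
```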
  10. The method according to any one of claims 1 to 5, wherein before displaying the scene editing interface and the virtual ray superimposed on the real environment image, the method further comprises:
    displaying a scene selection interface superimposed on the real environment image, the scene selection interface containing scene selection controls for at least one theme;
    and displaying the scene editing interface and the virtual ray superimposed on the real environment image comprises:
    moving, based on the ray adjustment data, the virtual ray in the real environment image to intersect with a target scene selection control; and
    displaying, in response to a sixth control instruction, the virtual ray and the scene editing interface corresponding to the target scene selection control superimposed on the real environment image.
  11. The method according to any one of claims 1 to 5, wherein a scene switching control is displayed in the scene editing interface;
    and after displaying the scene editing interface and the virtual ray superimposed on the real environment image, the method further comprises:
    moving, based on the ray adjustment data, the virtual ray in the real environment image to intersect with the scene switching control;
    displaying, in response to a seventh control instruction, a scene selection list superimposed on the real environment image, the scene selection list containing scene selection controls for at least one theme;
    moving, based on the ray adjustment data, the virtual ray in the real environment image to intersect with a target scene selection control; and
    displaying, in response to an eighth control instruction, the scene editing interface corresponding to the target scene selection control superimposed on the real environment image.
  12. An apparatus for displaying a virtual prop in a real environment image, the apparatus comprising:
    a first display module, configured to display a scene editing interface and a virtual ray superimposed on a real environment image, the scene editing interface containing a prop selection list, and the prop selection list containing a prop selection control corresponding to at least one type of virtual prop;
    a first adjustment module, configured to move, based on ray adjustment data, the virtual ray in the real environment image to intersect with a target prop selection control; and
    a second display module, configured to display, in response to a first control instruction, a target virtual prop corresponding to the target prop selection control in the real environment image.
  13. The apparatus according to claim 12, wherein the first control instruction comprises a prop selection instruction and a prop placement instruction;
    and the second display module comprises:
    a first display unit, configured to highlight the target prop selection control in response to the virtual ray intersecting with the target prop selection control;
    a second display unit, configured to display, in response to the prop selection instruction, the target virtual prop at the intersection point of the virtual ray and the real environment image;
    a moving unit, configured to move the virtual ray and the target virtual prop based on the ray adjustment data received after the prop selection instruction; and
    a releasing unit, configured to display, in response to the prop placement instruction, the target virtual prop at the placement position indicated by the prop placement instruction.
  14. The apparatus according to claim 13, wherein the apparatus further comprises:
    a second adjustment module, configured to move, based on the ray adjustment data, the virtual ray in the real environment image to intersect with an added prop in the real environment image;
    a third display module, configured to display, in response to a second control instruction, the added prop at the intersection point of the virtual ray and the real environment image; and
    a moving module, configured to move the virtual ray and the added prop based on the ray adjustment data received after the second control instruction.
  15. The apparatus according to claim 12, wherein the apparatus further comprises:
    a third adjustment module, configured to move, based on the ray adjustment data, the virtual ray in the real environment image to intersect with the target virtual prop;
    a fourth display module, configured to display, in response to a third control instruction, editing controls corresponding to the target virtual prop;
    a fourth adjustment module, configured to move, based on the ray adjustment data, the virtual ray in the real environment image to intersect with a target editing control; and
    an editing module, configured to edit, in response to a fourth control instruction, the target virtual prop according to the editing mode corresponding to the target editing control.
  16. The apparatus according to claim 15, wherein the editing controls comprise at least one of a delete control, a zoom-in control and a zoom-out control;
    and the editing module comprises:
    a first editing unit, configured to delete the target virtual prop in response to the target editing control being the delete control and the fourth control instruction being received;
    a second editing unit, configured to enlarge the target virtual prop to a preset magnification in response to the target editing control being the zoom-in control and the fourth control instruction being received; and
    a third editing unit, configured to reduce the target virtual prop to a preset reduction factor in response to the target editing control being the zoom-out control and the fourth control instruction being received.
  17. The apparatus according to any one of claims 12 to 16, wherein the apparatus further comprises:
    a fifth adjustment module, configured to move, based on the ray adjustment data, the virtual ray in the real environment image to intersect with a shooting control displayed superimposed on the real environment image; and
    a shooting module, configured to shoot, in response to a fifth control instruction, the real environment image and the virtual props.
  18. A system for displaying a virtual prop in a real environment image, the system comprising a head-mounted device and a control device, a data connection being established between the head-mounted device and the control device;
    the control device being configured to send control instructions and ray adjustment data to the head-mounted device;
    the head-mounted device comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set or an instruction set, and the at least one instruction, the at least one program, the code set or the instruction set being loaded and executed by the processor to implement the method for displaying a virtual prop in a real environment image according to any one of claims 1 to 11.
  19. A computer-readable storage medium, wherein at least one piece of program code is stored in the computer-readable storage medium, and the program code is loaded and executed by a processor to implement the method for displaying a virtual prop in a real environment image according to any one of claims 1 to 11.
  20. A computer program product or computer program, comprising computer instructions stored in a computer-readable storage medium, wherein a processor of a head-mounted device reads the computer instructions from the computer-readable storage medium and executes them, so that the head-mounted device performs the method for displaying a virtual prop in a real environment image according to any one of claims 1 to 11.
PCT/CN2021/130445 2020-11-16 2021-11-12 Method and system for displaying virtual prop in real environment image, and storage medium WO2022100712A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21891232.7A EP4246287A1 (en) 2020-11-16 2021-11-12 Method and system for displaying virtual prop in real environment image, and storage medium
US18/318,575 US20230289049A1 (en) 2020-11-16 2023-05-16 Method and system for displaying virtual prop in real environment image, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011282179.6A CN112286362B (zh) 2020-11-16 2020-11-16 Method and system for displaying virtual prop in real environment image, and storage medium
CN202011282179.6 2020-11-16

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/318,575 Continuation US20230289049A1 (en) 2020-11-16 2023-05-16 Method and system for displaying virtual prop in real environment image, and storage medium

Publications (1)

Publication Number Publication Date
WO2022100712A1 true WO2022100712A1 (zh) 2022-05-19

Family

ID=74399104

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/130445 WO2022100712A1 (zh) 2020-11-16 2021-11-12 Method and system for displaying virtual prop in real environment image, and storage medium

Country Status (4)

Country Link
US (1) US20230289049A1 (zh)
EP (1) EP4246287A1 (zh)
CN (1) CN112286362B (zh)
WO (1) WO2022100712A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112286362B (zh) * 2020-11-16 2023-05-12 Oppo广东移动通信有限公司 Method and system for displaying virtual prop in real environment image, and storage medium
CN113672087A (zh) * 2021-08-10 2021-11-19 Oppo广东移动通信有限公司 Remote interaction method, apparatus, system, electronic device and storage medium
CN113680067A (zh) * 2021-08-19 2021-11-23 网易(杭州)网络有限公司 Method, apparatus, device and storage medium for controlling virtual props in a game
CN113687721A (zh) * 2021-08-23 2021-11-23 Oppo广东移动通信有限公司 Device control method and apparatus, head-mounted display device and storage medium
CN114089836B (zh) * 2022-01-20 2023-02-28 中兴通讯股份有限公司 Labeling method, terminal, server and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105912110A (zh) * 2016-04-06 2016-08-31 北京锤子数码科技有限公司 Method, apparatus and system for target selection in a virtual reality space
US20200193938A1 (en) * 2018-12-14 2020-06-18 Samsung Electronics Co., Ltd. Systems and methods for virtual displays in virtual, mixed, and augmented reality
CN111589132A (zh) * 2020-04-26 2020-08-28 腾讯科技(深圳)有限公司 Virtual prop display method, computer device and storage medium
CN111782053A (zh) * 2020-08-10 2020-10-16 Oppo广东移动通信有限公司 Model editing method, apparatus, device and storage medium
CN112286362A (zh) * 2020-11-16 2021-01-29 Oppo广东移动通信有限公司 Method and system for displaying virtual prop in real environment image, and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109240484A (zh) * 2017-07-10 2019-01-18 北京行云时空科技有限公司 Interaction method, apparatus and device in an augmented reality system
US11250641B2 (en) * 2019-02-08 2022-02-15 Dassault Systemes Solidworks Corporation System and methods for mating virtual objects to real-world environments
CN110115838B (zh) * 2019-05-30 2021-10-29 腾讯科技(深圳)有限公司 Method, apparatus, device and storage medium for generating marker information in a virtual environment
CN110310224B (zh) * 2019-07-04 2023-05-30 北京字节跳动网络技术有限公司 Light effect rendering method and apparatus
CN110694273A (zh) * 2019-10-18 2020-01-17 腾讯科技(深圳)有限公司 Method, apparatus, terminal and storage medium for controlling a virtual object to use props
CN111035924B (zh) * 2019-12-24 2022-06-28 腾讯科技(深圳)有限公司 Prop control method, apparatus, device and storage medium in a virtual scene

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024032176A1 (zh) * 2022-08-12 2024-02-15 腾讯科技(深圳)有限公司 Virtual prop processing method and apparatus, electronic device, storage medium and program product
CN115272630A (zh) * 2022-09-29 2022-11-01 南方科技大学 Data processing method and apparatus, virtual reality glasses and storage medium
CN115272630B (zh) * 2022-09-29 2022-12-23 南方科技大学 Data processing method and apparatus, virtual reality glasses and storage medium
CN117424969A (zh) * 2023-10-23 2024-01-19 神力视界(深圳)文化科技有限公司 Lighting control method and apparatus, mobile terminal and storage medium

Also Published As

Publication number Publication date
CN112286362A (zh) 2021-01-29
US20230289049A1 (en) 2023-09-14
CN112286362B (zh) 2023-05-12
EP4246287A1 (en) 2023-09-20

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 21891232; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
ENP Entry into the national phase
    Ref document number: 2021891232; Country of ref document: EP; Effective date: 20230616