WO2024114584A1 - Information prompting method, information prompting device, electronic device and readable storage medium - Google Patents

Information prompting method, information prompting device, electronic device and readable storage medium

Info

Publication number
WO2024114584A1
WO2024114584A1 · PCT/CN2023/134357
Authority
WO
WIPO (PCT)
Prior art keywords
information
wearer
wearable device
facial
image
Prior art date
Application number
PCT/CN2023/134357
Other languages
English (en)
French (fr)
Inventor
冀文彬
Original Assignee
维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Publication of WO2024114584A1 publication Critical patent/WO2024114584A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units

Definitions

  • the present application belongs to the technical field of extended reality, and specifically relates to an information prompting method, an information prompting device, an electronic device and a readable storage medium.
  • Extended Reality (XR) technology includes Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR).
  • XR technology combines computer technology and wearable devices to create an interactive environment that blends the real and the virtual. Through multi-source information fusion, interactive three-dimensional dynamic visuals, and simulation of entity behavior, users can immerse themselves in the virtual environment and feel as if they were in the real world.
  • Head-mounted VR devices use a head-mounted display to block the wearer's view of the outside world, giving the wearer the sensation of being in a virtual environment. Because the wearer wears the VR device on the head, fitted closely to the face, nearby non-wearers cannot tell how the device is being used: they cannot be sure whether the wearer can see people or objects in the real environment, let alone whether they are being covertly photographed. Non-wearers therefore face a risk of privacy leakage.
  • the purpose of the embodiments of the present application is to provide an information prompting method, an information prompting device, an electronic device and a readable storage medium, which can intuitively indicate to the non-wearers around the wearer that the perspective (video see-through) function of the wearer's wearable device is turned on, making it easier for the non-wearers to protect their personal privacy.
  • in a first aspect, an embodiment of the present application provides an information prompting method, the method comprising: receiving a first input, where the first input is used to activate a perspective function of a wearable device; and in response to the first input, displaying target prompt information on an outer display screen of the wearable device, where the target prompt information is used to indicate that the perspective function of the wearable device has been turned on.
  • in a second aspect, an embodiment of the present application provides an information prompting device, comprising:
  • a receiving module configured to receive a first input, wherein the first input is used to activate a perspective function of the wearable device
  • a display module, configured to display target prompt information on an outer display screen of the wearable device, where the target prompt information is used to indicate that the perspective function of the wearable device has been turned on.
  • in a third aspect, an embodiment of the present application provides an electronic device, which includes a processor and a memory, wherein the memory stores a program or instructions that can be run on the processor, and when the program or instructions are executed by the processor, the steps of the method described in the first aspect are implemented.
  • in a fourth aspect, an embodiment of the present application provides a readable storage medium, on which a program or instruction is stored, and when the program or instruction is executed by a processor, the steps of the method described in the first aspect are implemented.
  • in a fifth aspect, an embodiment of the present application provides a chip, comprising a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is used to run a program or instruction to implement the method described in the first aspect.
  • in a sixth aspect, an embodiment of the present application provides a computer program product, which is stored in a storage medium and is executed by at least one processor to implement the method described in the first aspect.
  • in the embodiments of the present application, by displaying target prompt information on the outer display screen of the wearable device when the perspective function is activated, it can be intuitively presented that the perspective function of the wearable device has been turned on, so that non-wearers in the same real scene as the wearer can learn the usage status of the wearable device in a timely manner through the outer display screen, thereby protecting the personal privacy of the non-wearers to the greatest extent.
  • FIG1 is a schematic flow chart of an information prompting method provided by some embodiments of the present application.
  • FIG2 is a schematic diagram of a display area of an information prompt method provided by some embodiments of the present application.
  • FIG3 is a schematic diagram of a simulation display of an information prompt method provided by some embodiments of the present application.
  • FIG4 is a schematic diagram of a simulation display of an information prompt method provided by some embodiments of the present application.
  • FIG5 is a schematic diagram of a simulation display of an information prompt method provided by some embodiments of the present application.
  • FIG6 is a schematic diagram of a simulation display of an information prompt method provided by some embodiments of the present application.
  • FIG7 is a schematic diagram of a simulation display of an information prompt method provided by some embodiments of the present application.
  • FIG8 is a schematic diagram of a simulation display of an information prompt method provided by some embodiments of the present application.
  • FIG9 is a schematic diagram of environmental information collection of an information prompting method provided by some embodiments of the present application.
  • FIG10 is a schematic diagram of a simulation display of an information prompt method provided by some embodiments of the present application.
  • FIG11 is a schematic diagram of the structure of an information prompting device provided in some embodiments of the present application.
  • FIG12 is a schematic diagram of the structure of an electronic device provided in some embodiments of the present application.
  • FIG. 13 is a hardware schematic diagram of an electronic device provided in some embodiments of the present application.
  • The terms “first”, “second”, and the like in the specification and claims of this application are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that terms so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. Objects distinguished by “first”, “second”, etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more.
  • “And/or” in the specification and claims means at least one of the connected objects, and the character “/” generally indicates an “or” relationship between the associated objects.
  • the information prompting method provided in the embodiments of the present application may be executed by, but is not limited to, a wearable device; the wearable device may be a VR device, an AR device, an MR device, etc., which is not specifically limited here.
  • the wearable device is a head-mounted display device that needs to block the user's vision when in use.
  • a wearable device is provided with an inner display screen, which is a display area visible to the wearer of the wearable device.
  • a display device can be added to the outside of the wearable device, that is, an outer display screen is provided on the outside of the wearable device, which is a display area visible to non-wearers.
  • Fig. 1 is a flow chart of an information prompting method provided by some embodiments of the present application. As shown in Fig. 1 , the information prompting method includes: step 110 and step 120 .
  • Step 110: Receive a first input, where the first input is used to activate a perspective function of the wearable device.
  • in the embodiments of the present application, the first input is used to activate the perspective function of the wearable device. The perspective function allows the user to interact with the external real world without removing the wearable device.
  • in this application, the perspective function refers to Video See-Through (VST): a camera captures a real-time view of the real world, which is then combined with computer-generated imagery and shown on an opaque display, finally entering the user's field of view.
  • the perspective function of wearable devices gives the wearer the feeling that the human eye sees the surrounding real world directly through the wearable device; this is why the function is called “perspective” (see-through).
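To make the video see-through pipeline concrete, below is a minimal sketch of such a capture-and-display loop. It is an illustration only, assuming an OpenCV-accessible outward-facing camera; the window stands in for the headset's opaque inner display, and `OUTWARD_CAMERA_INDEX` is a hypothetical device index, not anything specified by this application.

```python
import cv2

# Hypothetical index of the outward-facing camera on the outer display side.
OUTWARD_CAMERA_INDEX = 0

def video_see_through_loop():
    """Capture a real-time view of the real world and show it on the
    (opaque) inner display, which is what makes the headset feel
    transparent to the wearer."""
    camera = cv2.VideoCapture(OUTWARD_CAMERA_INDEX)
    try:
        while True:
            ok, frame = camera.read()
            if not ok:
                break
            # In a real headset the frame would be composited with
            # computer-generated imagery before being displayed.
            cv2.imshow("inner_display", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        camera.release()
        cv2.destroyAllWindows()

if __name__ == "__main__":
    video_see_through_loop()
```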
  • the first input may be an input from the user to the wearable device, an input from the user to other electronic devices connected to the wearable device, or an input from other electronic devices to the wearable device.
  • the user can be the wearer of the wearable device or the non-wearer, which is not specifically limited here; other electronic devices can be smart watches, smart bracelets, handles, mobile phones or computers, etc.
  • the above-mentioned first input includes but is not limited to: touch input of the user to the wearable device through a touch device such as a touch screen or a physical button, or a voice command input by the user, or a specific gesture input by the user, or a startup command input by other electronic devices to the wearable device, or other feasible inputs, which can be determined based on actual usage requirements and are not limited in the embodiments of the present application.
  • the touch input in the embodiments of the present application includes but is not limited to: click input, sliding input, pressing input on the wearable device, or touch input on other control devices connected to the wearable device, etc.
  • the above-mentioned click input can be a single click input, a double click input, or any number of click inputs, etc., and can also be a long press input or a short press input.
  • for example, a target control corresponding to the perspective function may be provided on the touch screen of the wearable device or of another connected electronic device, and the first input may be the user's click input on that target control. Alternatively, a physical button corresponding to the perspective function (or one through which it can be selected) may be provided on the wearable device or another connected electronic device, and the first input may be the user's press input on that physical button.
  • the voice instructions in the embodiments of the present application include but are not limited to: when the wearable device receives a voice command such as “start perspective” or “open perspective”, the perspective function of the wearable device is triggered.
  • the specific gestures in the embodiments of the present application include but are not limited to: any one of a single-click gesture, a sliding gesture, a drag gesture, a pressure recognition gesture, a long press gesture, an area change gesture, a double-press gesture, and a double-click gesture.
  • the above-mentioned predetermined gestures may correspond to the perspective function; for example, the first input may be: the wearable device detecting the user's predetermined gesture, upon which the perspective function is activated.
  • the startup instruction input to the wearable device by another electronic device may be a preset instruction on that device which directly triggers the wearable device to start the perspective function. For example, the first input may be: an electronic device is configured with a timed startup instruction, and when the specified time is reached, it automatically sends the startup instruction to the wearable device, which starts the perspective function upon receiving it.
  • the first input may also be in other forms, including but not limited to character input, fingerprint input, or iris input, etc., which can be determined according to actual needs and is not limited in this embodiment of the present application.
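As an illustration of how these heterogeneous forms of the first input could all be routed to a single activation path, here is a sketch in Python; every name (`PerspectiveController`, the command strings, the gesture labels) is hypothetical and not prescribed by this application.

```python
class PerspectiveController:
    """Routes the various forms of 'first input' to one activation handler."""

    def __init__(self) -> None:
        self.perspective_on = False
        self.voice_commands = {"start perspective", "open perspective"}
        self.gestures = {"double_tap", "long_press"}  # illustrative gesture labels

    def activate_perspective(self) -> None:
        self.perspective_on = True
        print("perspective function activated")

    def on_touch(self, control_id: str) -> None:
        # Click input on a target control shown on a touch screen.
        if control_id == "perspective_toggle":
            self.activate_perspective()

    def on_voice(self, phrase: str) -> None:
        if phrase.lower() in self.voice_commands:
            self.activate_perspective()

    def on_gesture(self, gesture: str) -> None:
        if gesture in self.gestures:
            self.activate_perspective()

    def on_remote_command(self, command: str) -> None:
        # e.g. a timed startup instruction sent by a paired phone or watch.
        if command == "START_PERSPECTIVE":
            self.activate_perspective()
```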
  • in some embodiments, the perspective function can be implemented by starting the shooting function of the wearable device, and the shooting function can be turned on through a camera provided on the outer display screen of the wearable device.
  • after the perspective function of the wearable device is activated, the wearer can directly see people and objects in the real environment, capture images or record videos of the real environment, and even collect facial information.
  • the wearer can use the wearable device to freely photograph any person or object that enters the lens, so other non-wearers nearby may be covertly photographed without knowing it.
  • the greater risk of privacy leakage is that the captured images or facial recognition information will be uploaded to the server corresponding to the wearable device, and the information of non-wearers may be used at will. It is understandable that after turning off the perspective function of the wearable device, the wearer can only see the content displayed on the inner display screen.
  • it can be understood that the perspective function of the wearable device can be associated with the camera of the outer display screen; that is, whether that camera is turned on can be monitored in real time. If the camera of the outer display screen is on, it is determined that the perspective function is also on; if the camera is off or asleep, it is determined that the perspective function is not turned on.
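A sketch of that association, polling the outer-screen camera state and treating “on” as the only state in which the perspective function counts as turned on; the `query_outer_camera_state` driver call is an assumed placeholder.

```python
import enum
import time
from typing import Callable

class CameraState(enum.Enum):
    ON = "on"
    OFF = "off"
    SLEEP = "sleep"

def query_outer_camera_state() -> CameraState:
    """Placeholder for a driver call reporting the outer-screen camera state."""
    return CameraState.ON

def monitor_perspective_state(on_change: Callable[[bool], None],
                              poll_seconds: float = 0.5) -> None:
    """Poll the outer camera; the perspective function is considered turned on
    exactly when that camera is on (off or asleep means not turned on)."""
    last_state = None
    while True:
        perspective_on = query_outer_camera_state() is CameraState.ON
        if perspective_on != last_state:
            on_change(perspective_on)  # e.g. push the new state to the outer display
            last_state = perspective_on
        time.sleep(poll_seconds)
```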
  • the perspective function refers to collecting real-time views of the surrounding environment through a camera and displaying them on the screen, giving people the feeling that the human eye can see the real world around them directly through the headset, so it is called the "perspective" function.
  • Step 120: In response to the first input, display target prompt information on the outer display screen of the wearable device, where the target prompt information is used to indicate that the perspective function of the wearable device has been turned on.
  • in the embodiments of the present application, since the wearable device is provided with an outer display screen, other users can judge the working status of the wearable device from the target prompt information displayed on that screen.
  • FIG. 2 is a schematic diagram of a display area of an information prompt method provided in some embodiments of the present application.
  • as shown in FIG2, the wearable device may include an outer display screen 210, an inner display screen 220, and a fixing structure 240.
  • the outer display screen 210 may be an outer screen display area for displaying content to non-wearers
  • the inner display screen 220 may be an inner screen display area for displaying content to the wearer.
  • the fixing structure is used to fix the outer display screen 210 and the inner display screen 220.
  • in actual execution, after the wearable device is started, it can detect in real time whether the perspective function is turned on. If the perspective function is determined to be on after the first input is received, the on-state information can be transmitted to the outer display screen of the wearable device in response to the first input, and the target prompt information indicating that the perspective function is on can then be displayed on the outer display screen.
  • if the perspective function is determined to be off, the outer display screen can close the content display interface, display a blank image, or display a text message such as “Perspective function is not turned on”, which is not specifically limited here.
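The outer-display decision just described might look like the following sketch, with `OuterDisplay` as an illustrative stand-in for the real screen driver:

```python
class OuterDisplay:
    """Stand-in for the wearable device's outer display screen driver."""

    def show_text(self, text: str) -> None:
        print(f"[outer display] {text}")

    def show_blank(self) -> None:
        print("[outer display] <blank image>")

def update_outer_display(display: OuterDisplay, perspective_on: bool) -> None:
    if perspective_on:
        # Show target prompt information; per the embodiments below this could
        # equally be a logo, a facial image, or a scene image.
        display.show_text("The camera is on, I am watching you")
    else:
        # Close the content interface, show a blank image, or show a
        # "Perspective function is not turned on" message.
        display.show_blank()
```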
  • the wearable device includes an outer display screen 310 and a head fixing device 320.
  • the head fixing device 320 is used to fix the wearable device to the head of the wearer.
  • the head fixing device 320 may include a horizontal headband connecting the two horizontal ends of the outer display screen 310 and a vertical headband connecting the upper part of the outer display screen 310 and the horizontal headband.
  • the head fixing device 320 in FIG3 is only used as an example, and this application does not specifically limit this structure.
  • when the perspective function of the wearable device is not turned on, the outer display screen 310 does not display the target prompt information, or displays a blank image.
  • when the wearer of the wearable device is in a public environment, non-wearers can promptly learn whether the perspective function is on from the target prompt information displayed on the outer display screen. For example, when a wearer tries a VR device in a shopping mall and turns on its perspective function, the outer display screen of the VR device will display the target prompt information. People passing by the wearer can then immediately tell whether the wearer is using the perspective function of the VR device and, to avoid being covertly photographed, can choose to stay away from the wearer.
  • according to the information prompting method provided in the embodiments of the present application, by displaying the target prompt information on the outer display screen when the perspective function of the wearable device is activated, it can be intuitively presented that the perspective function has been turned on, so that non-wearers in the same real scene as the wearer can learn the usage status of the wearable device in time through the outer display screen, thereby protecting the privacy and security of non-wearers to the greatest extent.
  • the target prompt information includes at least one of the following:
  • text prompt information;
  • a partial facial image or a complete facial image of a virtual object;
  • a facial image of the wearer of the wearable device, where the facial image includes a partial facial image or a complete facial image of the wearer;
  • a scene image, where the scene image is a two-dimensional scene image or a three-dimensional scene image of the real scene in which the wearable device is located.
  • the target prompt information may be a combination of at least one of the above information.
  • for example, the target prompt information may be a combination of text prompt information and a facial image of a virtual object, or a combination of text prompt information and a facial image of the wearer of the wearable device, or a combination of text prompt information and a scene image, etc.
  • at least two types of information can be spliced and combined through image processing technology; however, combining too many types may weaken the prompting effect, so usually a single type of target prompt information, or a combination of two types, is used.
  • as shown in FIG4, when the perspective function of the wearable device is turned on, the outer display screen of the wearable device simultaneously displays text prompt information and a partial facial image of a virtual object.
  • the text prompt information may be preset text information representing the meaning of “the perspective function is turned on”, that is, a non-wearer may directly obtain the prompt of “the perspective function is turned on” through the text prompt information.
  • the preset text information can be default text information, for example, “The perspective function is turned on”; or personalized text information set according to user needs, for example, “The camera is on, I am watching you” or “The camera is on and scanning the surrounding environment”; or other text information that can indicate that the perspective function is turned on, which is not specifically limited here.
  • the embodiments of the present application do not specifically limit the text display format of the text prompt information.
  • as shown in FIG5, when the perspective function of the wearable device is turned on, the text prompt message “The camera is on, I am watching you” is displayed on the outer display screen of the wearable device.
  • the preset identification image may be an image with a preset specific meaning, and the image with a specific meaning may be an identification image used to represent that the perspective function is turned on.
  • for example, the preset logo image may be a “camera” logo image indicating that the perspective function is turned on, or an “open” logo image such as the mark “√”; if the perspective function of the wearable device is not turned on, the preset logo image may be a “closed” logo image such as the mark “×”.
  • as shown in FIG6, when the perspective function of the wearable device is turned on, a “camera” logo image is displayed on the outer display screen of the wearable device.
  • the facial image of the wearer may include a partial facial image or a complete facial image of the wearer.
  • a partial facial image of the wearer is a facial image that includes at least one facial part of the wearer
  • a complete facial image of the wearer is a facial image that includes all facial parts of the wearer.
  • a partial facial image of the wearer may be an eye image, a mouth image, or a side face image of the wearer
  • a complete facial image is a full-face image that includes all of the wearer's facial features.
  • the facial image of the wearer may be an image generated based on the facial features of the wearer.
  • the wearable device may further include a facial tracking sensor 230.
  • the facial tracking sensor 230 may be composed of a camera group, an infrared light device, or a structured light device, and is used to collect facial feature data.
  • the camera group can be used to track the wearer's face, and can collect and locate feature points of each part of the face, and can also collect face color information and face light and shadow information.
  • the infrared light device can collect face infrared images
  • the structured light device can collect face depth images.
  • the data collected by the camera group, infrared light device or structured light device together constitute the facial feature data.
  • the graphics processing unit (GPU) in the wearable device can process the facial feature data collected by the facial tracking sensor 230 to generate a partial facial image or a complete facial image of the wearer.
  • the GPU can also render and reconstruct the facial feature data in real time to obtain a facial image containing expression and movement information, making the wearer's facial image displayed on the outer display screen more vivid.
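Schematically, the tracker's heterogeneous outputs can be bundled and handed to a render step. The sketch below illustrates the idea only; the data layout and `render_facial_image` are assumptions, not the application's actual GPU pipeline.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FacialFeatureData:
    """Bundle of the facial tracking sensor's outputs."""
    landmarks: np.ndarray  # (N, 2) feature points located by the camera group
    color: np.ndarray      # (H, W, 3) face color information
    shading: np.ndarray    # (H, W) face light-and-shadow information
    infrared: np.ndarray   # face infrared image from the infrared light device
    depth: np.ndarray      # face depth image from the structured light device

def render_facial_image(data: FacialFeatureData, partial: bool = True) -> np.ndarray:
    """Schematic stand-in for the GPU step that turns facial feature data into
    a partial or complete facial image."""
    face = data.color.astype(np.float32)
    # Modulate color by light-and-shadow to keep the rendered face lifelike.
    face *= data.shading[..., None] / max(float(data.shading.max()), 1e-6)
    if partial:
        # Crop to the region the headset covers (e.g. around the eyes),
        # approximated here by the bounding box of the tracked landmarks.
        x0, y0 = data.landmarks.min(axis=0).astype(int)
        x1, y1 = data.landmarks.max(axis=0).astype(int)
        face = face[y0:y1, x0:x1]
    return face.clip(0, 255).astype(np.uint8)
```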
  • in actual execution, when the outer display screen of the wearable device displays the wearer's facial image, other non-wearers can directly see the wearer's face and thereby know that the wearable device has turned on the perspective function.
  • the virtual object may be a three-dimensional virtual character, a two-dimensional virtual character, a three-dimensional virtual animal or a two-dimensional virtual animal, which is not specifically limited herein.
  • the partial facial image or the complete facial image of the virtual object may be a two-dimensional facial image or a three-dimensional facial image.
  • the virtual object may be a cartoon character, a virtual digital person, or a fictional character.
  • the partial or complete facial image of the virtual object may be the face of a fictional person or animal, or it may be generated from the facial feature data described in the above embodiments; that is, the wearer's facial feature data can be converted into a partial or complete facial image of a virtual object, with the virtual object's facial feature data corresponding to the wearer's.
  • the partial facial image of the virtual object is a facial image including at least one facial part of the virtual object
  • the complete facial image is a facial image including all facial parts of the virtual object.
  • “Perspective” originally means “sight penetration”; therefore, the partial facial image of the virtual object can at least include the eyes, so that non-wearers can more easily grasp the meaning of the target prompt information.
  • an eye image of a two-dimensional virtual character is displayed on the outer display screen of the wearable device.
  • the two-dimensional virtual character is a fictional character image.
  • the target prompt information includes a partial facial image of a wearer of the wearable device
  • facial feature points in the partial facial image are matched with facial feature points in a target facial region of the wearer
  • the facial contour in the partial facial image is the same as the facial contour of the target facial region of the wearer
  • the target face area is the face area where the wearer's face is blocked by the outer display screen of the wearable device.
  • when the wearable device is actually used, the wearer wears it on the head, and the outer display screen of the wearable device partially covers the wearer's face.
  • a partial facial image of the wearer can be displayed on the outer display screen according to the facial area of the wearer's face blocked by the outer display screen of the wearable device.
  • the partial facial image is a facial image corresponding to a target facial area
  • the target facial area is the facial area where the wearer's face is blocked by the outer display screen of the wearable device.
  • the partial facial image of the wearer meets the following conditions:
  • the facial feature points in the partial facial image match the facial feature points in the target facial region of the wearer; and the facial outer contour in the partial facial image is the same as the facial outer contour of the target facial region of the wearer.
  • the matching of facial feature points means that the facial parts contained in the partial facial image correspond, in both identity and position, to the facial parts in the target facial area; likewise, the facial outer contour in the partial facial image is the same as the facial outer contour of the wearer's target facial area.
  • the partial facial image of the wearer presented on the outer display screen can be completely consistent with the facial image corresponding to the target facial area.
  • the combination of the wearer's partial facial image and the unobstructed facial area can provide the non-wearer with a complete face visual experience.
  • the target face area includes the eyes, nose and ears.
  • the face parts in the partial face image displayed on the outer display screen include the eyes, nose and ears, that is, the facial feature points in the partial face image match the facial feature points in the target face area of the wearer.
  • the facial outer contour of the target face area is the same as the facial outer contour in the partial face image.
  • the partial face image of the wearer is combined with the unobstructed face area to show a smooth and complete face.
  • the display effect of the target prompting information can be improved, making the face seen by the non-wearer more realistic.
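The two conditions on the partial facial image could be checked numerically along the following lines; the pixel tolerances and helper names are illustrative assumptions.

```python
import numpy as np

def feature_points_match(displayed: np.ndarray, target: np.ndarray,
                         tol_px: float = 2.0) -> bool:
    """Each displayed landmark must sit where the corresponding landmark of
    the occluded (target) facial area would be."""
    return bool(np.all(np.linalg.norm(displayed - target, axis=1) <= tol_px))

def contours_identical(displayed: np.ndarray, target: np.ndarray,
                       tol_px: float = 2.0) -> bool:
    """The outer contour of the displayed image must trace the same curve as
    the outer contour of the target facial area."""
    return (displayed.shape == target.shape and
            bool(np.max(np.linalg.norm(displayed - target, axis=1)) <= tol_px))

def partial_face_acceptable(displayed_pts, target_pts,
                            displayed_contour, target_contour) -> bool:
    # Both conditions from this embodiment must hold simultaneously.
    return (feature_points_match(displayed_pts, target_pts) and
            contours_identical(displayed_contour, target_contour))
```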
  • the scene image is a two-dimensional scene image or a three-dimensional scene image of the real scene where the wearable device is located.
  • the real scene is the real environment where the wearable device is located.
  • a two-dimensional scene image or a three-dimensional scene image can be generated based on the real scene.
  • through the outer display screen, the non-wearer can directly see the real environment from the same perspective as the wearer of the wearable device. The scene image may include all of the people or objects in the real environment, and through it the non-wearer can know that the wearable device has turned on the perspective function.
  • FIG9 shows the real scene in which the wearer uses the wearable device.
  • the outer display screen of the wearable device displays a two-dimensional scene image of the real scene.
  • according to the information prompting method provided in the embodiments of the present application, by displaying different kinds of target prompt information on the outer display screen, non-wearers can simply and clearly learn that the perspective function has been turned on; while the prompting effect is ensured, a certain visual experience can also be guaranteed.
  • in some embodiments, when the target prompt information includes a facial image of the wearer of the wearable device, displaying the target prompt information on the outer display screen of the wearable device includes: updating the facial image when a change in the wearer's facial expression and movement information is detected.
  • the information display mode of the target prompt information can be static display or dynamic display.
  • Static display means that the content of the target prompt information remains unchanged when the perspective function is turned on. For example, when the perspective function of the wearable device is turned on, the wearer's facial image is generated and displayed on the outer display screen of the wearable device, and the image content can remain unchanged.
  • dynamic display means that, for a facial image of the wearer or of a virtual object generated from the wearer's facial feature data, the displayed facial image changes with the facial expression and movement information.
  • the image is updated dynamically in real time, so that the wearer's current expression and movement information can be displayed as it occurs, making it easier for nearby non-wearers to perceive the wearer's current true emotional state and improving the realism of the target prompt information.
  • the dynamic display may also be to set the text prompt information to an animation effect such as a flashing effect or a scrolling display effect when the perspective function is turned on, or to set a natural blinking effect for the eye image or an opening and closing effect for the mouth image when displaying the facial image of a two-dimensional virtual character, etc., which are not specifically limited here.
  • in actual execution, different facial images can be generated in real time to obtain multiple frames of static facial images; playing these consecutive frames on the outer display screen then shows a facial image whose expression and movement information changes dynamically.
  • by dynamically updating the facial image on the outer display screen when changes in the wearer's facial expression and movement information are detected, the facial image can be refreshed in real time as expressions change, so that the target prompt information stays close to the real facial expression and the display effect is more realistic.
  • in some embodiments, before the facial image is updated upon detecting a change in the wearer's expression and movement information, the information prompting method further includes: collecting facial feature points of the wearer; and determining that the wearer's expression and movement information has changed when the positions of at least some of the facial feature points are detected to have changed.
  • in actual execution, the wearable device can collect the wearer's facial feature points in real time through the facial tracking sensor in order to track them.
  • the wearable device can determine the position of the facial feature points in each frame of the continuous multiple frames of facial images generated in real time, and compare the positions of the same feature points in facial images of different frames to determine whether the positions of the facial feature points have changed.
  • if the positions of at least some of the wearer's facial feature points are detected to have changed, the motion form of at least one part of the wearer's face is changing, and it can therefore be determined that the wearer's expression and movement information has changed.
  • for example, if the positions of the wearer's eye feature points are detected to have changed, it can be determined that the wearer's expression and movement information has changed; the facial image displayed on the outer display screen may then show the wearer changing from looking left to looking right.
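The frame-to-frame comparison of feature-point positions might be sketched as follows, with the displacement threshold an illustrative assumption:

```python
import numpy as np

MOVEMENT_THRESHOLD_PX = 1.5  # illustrative per-landmark displacement threshold

def expression_changed(prev_landmarks: np.ndarray,
                       curr_landmarks: np.ndarray) -> bool:
    """Compare positions of the same feature points across two frames; movement
    of any subset of points counts as a change in expression/movement info."""
    displacement = np.linalg.norm(curr_landmarks - prev_landmarks, axis=1)
    return bool(np.any(displacement > MOVEMENT_THRESHOLD_PX))

# Usage in the display loop (sketch): re-render and push a new facial image
# to the outer display only when expression_changed(prev, curr) is True.
```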
  • according to the information prompting method, by collecting and checking the wearer's facial feature points, it is possible to detect in real time whether the wearer's expression and movement information has changed, and to update the facial image as soon as it does.
  • in some embodiments, when the target prompt information includes a scene image, before the target prompt information is displayed on the outer display screen of the wearable device, the information prompting method further includes: collecting scene information of the real scene in which the wearable device is located; and generating a two-dimensional scene image or a three-dimensional scene image of the real scene based on the scene information.
  • displaying the target prompt information on the outer display screen of the wearable device then includes: displaying the two-dimensional scene image or the three-dimensional scene image on the outer display screen of the wearable device.
  • the scene information of the real scene in which the wearable device is located can be directly captured by the camera of the wearable device to generate a two-dimensional scene image.
  • through the inner display screen, the wearer can see the shooting preview interface of the real scene and thus directly see its scene information; through the outer display screen, the non-wearer can see the shooting preview image corresponding to the two-dimensional scene image and thereby know that the perspective function of the wearable device has been turned on.
  • since the wearer can move around freely in the real scene, the scene information collected by the wearable device changes dynamically; the real scene can therefore be photographed continuously to generate successive two-dimensional scene images, which are displayed continuously on the outer display screen so that the two-dimensional scene image from the wearer's current viewpoint is updated in time.
  • the wearer can capture two-dimensional scene images of the real scene where the wearable device is located at different positions and using different angles to obtain a two-dimensional scene image sequence of the real scene.
  • the GPU can extract and match feature points from the two-dimensional scene image sequence and perform three-dimensional reconstruction of the scene based on the Structure From Motion (SFM) algorithm or the Shape From Silhouette (SFS) algorithm, thereby obtaining a three-dimensional scene image of the real scene, which can then be displayed on the outer display screen.
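As a rough two-view illustration of the SFM idea applied to the scene-image sequence (a production pipeline would use many views plus bundle adjustment), the following OpenCV-based sketch matches features between two frames, recovers the relative camera pose, and triangulates sparse 3D scene points; the camera intrinsic matrix `K` is assumed known.

```python
import cv2
import numpy as np

def two_view_reconstruction(img1, img2, K: np.ndarray) -> np.ndarray:
    """Minimal structure-from-motion step: match features between two views,
    recover the relative camera pose, and triangulate sparse 3D points."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Ratio-test matching keeps only reliable correspondences.
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Essential matrix and pose recovery give the motion between the views.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate the inlier correspondences into 3D scene points.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inliers = mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    return (pts4d[:3] / pts4d[3]).T  # (N, 3) sparse structure of the real scene
```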
  • the wearer turns on the perspective function of the wearable device and photographs the real scene in front of them, thereby obtaining scene information of the real environment in which the wearable device is located, and transmits the scene information to the GPU.
  • the GPU performs three-dimensional reconstruction based on the real environment information to generate a three-dimensional scene image.
  • according to the information prompting method, by collecting scene information of the real scene in which the wearable device is located, generating a two-dimensional or three-dimensional scene image of the real scene, and displaying that image on the outer display screen to indicate that the perspective function has been turned on, non-wearers can obtain the target prompt information in time, which helps them protect their privacy.
  • the information prompting method provided in the embodiments of the present application can be executed by an information prompting device. In the embodiments of the present application, an information prompting device executing the information prompting method is taken as an example to describe the information prompting device provided herein.
  • the embodiment of the present application also provides an information prompting device.
  • Fig. 11 is a schematic diagram of the structure of an information prompting device provided by some embodiments of the present application. As shown in Fig. 11 , the information prompting device includes: a receiving module 1110 and a display module 1120 .
  • a receiving module 1110 configured to receive a first input, where the first input is used to activate a perspective function of the wearable device
  • the display module 1120 is used to display target prompt information on the outer display screen of the wearable device, where the target prompt information is used to indicate that the perspective function of the wearable device has been turned on.
  • according to the information prompting device provided in the embodiments of the present application, by displaying the target prompt information on the outer display screen when the perspective function of the wearable device is activated, it can be intuitively presented that the perspective function has been turned on, so that non-wearers in the same real scene as the wearer can learn the usage status of the wearable device in time through the outer display screen, thereby protecting the privacy and security of non-wearers to the greatest extent.
  • the target prompt information includes at least one of the following:
  • text prompt information;
  • a partial facial image or a complete facial image of a virtual object;
  • a facial image of the wearer of the wearable device, where the facial image includes a partial facial image or a complete facial image of the wearer;
  • a scene image, where the scene image is a two-dimensional scene image or a three-dimensional scene image of the real scene in which the wearable device is located.
  • the target prompt information includes a partial facial image of a wearer of the wearable device
  • facial feature points in the partial facial image are matched with facial feature points in a target facial region of the wearer
  • the facial contour in the partial facial image is the same as the facial contour of the target facial area of the wearer;
  • the target face area is the face area where the wearer's face is blocked by the outer display screen of the wearable device.
  • in some embodiments, the display module 1120 is further configured to: update the facial image when a change in the facial expression and movement information of the wearer is detected.
  • the apparatus further comprises:
  • a first acquisition module used to acquire facial feature points of the wearer
  • the first processing module is used to determine that the expression and action information of the wearer has changed when it is detected that the positions of at least some facial feature points of the wearer have changed.
  • the device when the target prompt information includes the scene image, the device further includes:
  • a second acquisition module is used to collect scene information of the real scene where the wearable device is located;
  • a second processing module used to generate a two-dimensional scene image or a three-dimensional scene image of the real scene based on the scene information
  • the display module 1120 is further used for:
  • the two-dimensional scene image or the three-dimensional scene image is displayed on an outer display screen of the wearable device.
  • the information prompt device in the embodiment of the present application can be an electronic device or a component in the electronic device, such as an integrated circuit or a chip.
  • the electronic device can be a terminal or other devices other than a terminal.
  • for example, the electronic device can be a mobile phone, a tablet computer, a laptop computer, a vehicle-mounted electronic device, a mobile Internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), etc.
  • the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine, etc., which is not specifically limited in the embodiments of the present application.
  • the information prompting device in the embodiment of the present application may be a device having an operating system.
  • the operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in the embodiment of the present application.
  • the information prompting device provided in the embodiment of the present application can implement each process implemented by the method embodiments of Figures 1 to 10, and will not be described again here to avoid repetition.
  • an embodiment of the present application also provides an electronic device 1200, including a processor 1201, a memory 1202, and a program or instruction stored in the memory 1202 and executable on the processor 1201. When the program or instruction is executed by the processor 1201, each process of the above information prompting method embodiments is implemented and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
  • the electronic devices in the embodiments of the present application include the mobile electronic devices and non-mobile electronic devices mentioned above.
  • FIG. 13 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.
  • the electronic device 1300 includes but is not limited to: a radio frequency unit 1301, a network module 1302, an audio output unit 1303, an input unit 1304, a sensor 1305, a display unit 1306, a user input unit 1307, an interface unit 1308, a memory 1309 and a processor 1310 and other components.
  • the electronic device 1300 may also include a power source (such as a battery) for supplying power to each component, and the power source may be logically connected to the processor 1310 through a power management system, so that the power management system can manage charging, discharging, and power consumption management.
  • the electronic device structure shown in FIG13 does not constitute a limitation on the electronic device, and the electronic device may include more or fewer components than shown, or combine certain components, or arrange components differently, which will not be described in detail here.
  • the user input unit 1307 is used to receive a first input, where the first input is used to activate the perspective function of the wearable device;
  • the processor 1310 is used to display target prompt information on the outer display screen of the wearable device in response to the first input, where the target prompt information is used to indicate that the perspective function of the wearable device is turned on.
  • according to the electronic device provided in the embodiments of the present application, by displaying the target prompt information on the outer display screen when the perspective function of the wearable device is activated, it can be intuitively presented that the perspective function has been turned on, so that non-wearers in the same real scene as the wearer can learn the usage status of the wearable device in time through the outer display screen, thereby protecting the privacy and security of non-wearers to the greatest extent.
  • the target prompt information includes at least one of the following:
  • text prompt information;
  • a partial facial image or a complete facial image of a virtual object;
  • a facial image of the wearer of the wearable device, where the facial image includes a partial facial image or a complete facial image of the wearer;
  • a scene image, where the scene image is a two-dimensional scene image or a three-dimensional scene image of the real scene in which the wearable device is located.
  • the target prompt information includes a partial facial image of a wearer of the wearable device
  • facial feature points in the partial facial image are matched with facial feature points in a target facial region of the wearer
  • the facial contour in the partial facial image is the same as the facial contour of the target facial area of the wearer;
  • the target face area is the face area where the wearer's face is blocked by the outer display screen of the wearable device.
  • the processor 1310 is further configured to update the facial image when a change in facial expression and movement information of the wearer is detected.
  • optionally, the processor 1310 is further configured to: collect facial feature points of the wearer, and determine that the wearer's expression and movement information has changed when the positions of at least some of the facial feature points are detected to have changed.
  • optionally, the processor 1310 is further configured to: collect scene information of the real scene in which the wearable device is located, and generate a two-dimensional scene image or a three-dimensional scene image of the real scene based on the scene information; displaying the target prompt information on the outer display screen of the wearable device then includes: displaying the two-dimensional scene image or the three-dimensional scene image on the outer display screen of the wearable device.
  • it should be understood that the input unit 1304 may include a graphics processing unit (GPU) 13041 and a microphone 13042; the graphics processor 13041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in video capture mode or image capture mode.
  • the display unit 1306 may include a display panel 13061, and the display panel 13061 may be configured in the form of a liquid crystal display, an organic light emitting diode, etc.
  • the user input unit 1307 includes a touch panel 13071 and at least one of other input devices 13072.
  • the touch panel 13071 is also called a touch screen.
  • the touch panel 13071 may include two parts: a touch detection device and a touch controller.
  • Other input devices 13072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control key, a switch key, etc.), a trackball, a mouse, and a joystick, which will not be repeated here.
  • the memory 1309 can be used to store software programs and various data.
  • the memory 1309 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, wherein the first storage area may store an operating system, an application program or instructions required for at least one function (such as a sound playback function, an image playback function, etc.), etc.
  • the memory 1309 may include a volatile memory or a non-volatile memory, or the memory 1309 may include both volatile and non-volatile memories.
  • the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
  • the volatile memory may be a random access memory (RAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), a synchronous dynamic random access memory (SDRAM), a double data rate synchronous dynamic random access memory (DDR SDRAM), an enhanced synchronous dynamic random access memory (ESDRAM), a synchronous link dynamic random access memory (SLDRAM), or a direct rambus random access memory (DRRAM).
  • the memory 1309 in the embodiment of the present application includes but is not limited to these and any other suitable types of memory.
  • the processor 1310 may include one or more processing units; optionally, the processor 1310 integrates an application processor and a modem processor, wherein the application processor mainly processes operations related to an operating system, a user interface, and application programs, etc.
  • the modem processor mainly processes wireless communication signals, such as a baseband processor. It is understandable that the modem processor may not be integrated into the processor 1310.
  • An embodiment of the present application also provides a readable storage medium on which a program or instruction is stored. When the program or instruction is executed by a processor, the various processes of the above information prompting method embodiments are implemented and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
  • the processor is the processor in the electronic device described in the above embodiment.
  • the readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • An embodiment of the present application further provides a chip, which includes a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is used to run programs or instructions to implement the various processes of the above-mentioned information prompt method embodiment, and can achieve the same technical effect. To avoid repetition, it will not be repeated here.
  • the chip mentioned in the embodiments of the present application can also be called a system-level chip, a system chip, a chip system or a system-on-chip chip, etc.
  • the technical solution of the present application can be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk), and includes a number of instructions for a terminal (which can be a mobile phone, a computer, a server, or a network device, etc.) to execute the methods described in each embodiment of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application discloses an information prompting method, an information prompting device, an electronic device, and a readable storage medium, belonging to the field of communication technology. The method includes: receiving a first input, where the first input is used to activate a perspective function of a wearable device; and in response to the first input, displaying target prompt information on an outer display screen of the wearable device, where the target prompt information is used to indicate that the perspective function of the wearable device has been turned on.

Description

Information prompting method, information prompting device, electronic device and readable storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 202211528363.3, filed on November 30, 2022 and entitled “Information prompting method, information prompting device, electronic device and readable storage medium”, which is incorporated herein by reference in its entirety.
技术领域
本申请属于扩展现实技术领域,具体涉及一种信息提示方法、信息提示装置、电子设备和可读存储介质。
背景技术
扩展现实(Extended Reality,XR)技术包括虚拟现实(VR,Virtual Reality)技术、增强现实(Augmented Reality,AR)技术和混合现实(Mixed Reality,MR)技术等。XR技术可以通过计算机技术和穿戴设备产生一个真实与虚拟结合、可人机交互的环境,通过多源信息融合的、交互式的三维动态视景和实体行为的***仿真,可以使用户沉浸到虚拟环境中,体验如临真境的感觉。
以头戴式VR设备为例,头戴式VR设备是利用头戴式显示器将穿戴者对外界的视觉进行封闭,引导穿戴者产生一种身在虚拟环境中的感觉。由于穿戴者需要将该VR设备佩戴在头上,并使VR设备与人脸紧密贴合,而此时穿戴者周围的非穿戴者并不知道穿戴者的VR设备的使用状态,即非穿戴者无法确定穿戴者是否可以看到真实环境中的人或物,更无法确定自己是否正在被偷拍,因此,非穿戴者将会面临隐私泄露的风险。
发明内容
本申请实施例的目的是提供一种信息提示方法、信息提示装置、电子设备和可读存储介质,能够将穿戴者的穿戴设备的***开启状态直观的提示给穿戴者周围的非穿戴者,便于非穿戴者保护个人隐私。
第一方面,本申请实施例提供了一种信息提示方法,该方法包括:
接收第一输入,所述第一输入用于启动穿戴设备的***;
响应于所述第一输入,在所述穿戴设备的外侧显示屏,显示目标提示信息,所述目标提示信息用于指示所述穿戴设备的***已开启。
第二方面,本申请实施例提供了一种信息提示装置,该装置包括:
接收模块,用于接收第一输入,所述第一输入用于启动穿戴设备的***;
显示模块,用于在所述穿戴设备的外侧显示屏,显示目标提示信息,所述目标提示信 息用于指示所述穿戴设备的***已开启。
第三方面,本申请实施例提供了一种电子设备,该电子设备包括处理器和存储器,所述存储器存储可在所述处理器上运行的程序或指令,所述程序或指令被所述处理器执行时实现如第一方面所述的方法的步骤。
第四方面,本申请实施例提供了一种可读存储介质,所述可读存储介质上存储程序或指令,所述程序或指令被处理器执行时实现如第一方面所述的方法的步骤。
第五方面,本申请实施例提供了一种芯片,所述芯片包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现如第一方面所述的方法。
第六方面,本申请实施例提供一种计算机程序产品,该程序产品被存储在存储介质中,该程序产品被至少一个处理器执行以实现如第一方面所述的方法。
在本申请实施例中,通过在穿戴设备启动***的情况下,在穿戴设备的外侧显示屏显示目标提示信息,可以直观地呈现穿戴设备已开启***,使得与穿戴设备的穿戴者处于同一真实场景下的非穿戴者通过外侧显示屏,能够及时知道穿戴设备的使用状态,从而最大程度地保障了非穿戴者的个人隐私安全。
附图说明
图1是本申请的一些实施例提供的信息提示方法的流程示意图;
图2是本申请的一些实施例提供的信息提示方法的显示区域示意图;
图3是本申请的一些实施例提供的信息提示方法的模拟显示示意图;
图4是本申请的一些实施例提供的信息提示方法的模拟显示示意图;
图5是本申请的一些实施例提供的信息提示方法的模拟显示示意图;
图6是本申请的一些实施例提供的信息提示方法的模拟显示示意图;
图7是本申请的一些实施例提供的信息提示方法的模拟显示示意图;
图8是本申请的一些实施例提供的信息提示方法的模拟显示示意图;
图9是本申请的一些实施例提供的信息提示方法的环境信息采集示意图;
图10是本申请的一些实施例提供的信息提示方法的模拟显示示意图;
图11是本申请的一些实施例提供的信息提示装置的结构示意图;
图12是本申请的一些实施例提供的电子设备的结构示意图;
图13是本申请的一些实施例提供的电子设备的硬件示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书和权利要求书中的术语“第一”、“第二”等是用于区别类似的对象, 而不用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便本申请的实施例能够以除了在这里图示或描述的那些以外的顺序实施,且“第一”、“第二”等所区分的对象通常为一类,并不限定对象的个数,例如第一对象可以是一个,也可以是多个。此外,说明书以及权利要求中“和/或”表示所连接对象的至少其中之一,字符“/”,一般表示前后关联对象是一种“或”的关系。
下面结合附图,通过具体的实施例及其应用场景对本申请实施例提供的信息提示方法、信息提示装置、电子设备和可读存储介质进行详细地说明。
本申请实施例提供的信息提示方法,该信息提示方法的执行主体可以包括但不限于穿戴设备,穿戴设备的类型可以是VR设备、AR设备或MR设备等,在此不做具体限定。
需要说明的是,穿戴设备为在使用时需要封闭用户视觉的头戴式显示设备。
通常穿戴设备设置有内侧显示屏,内侧显示屏为穿戴设备的穿戴者可以看到的显示区域。在本实施例中,可以在穿戴设备外侧增加显示装置,即在穿戴设备外侧设置外侧显示屏,外侧显示屏为非穿戴者可以看到的显示区域。
图1是本申请的一些实施例提供的信息提示方法的流程示意图。如图1所示,该信息提示方法包括:步骤110和步骤120。
Step 110: receive a first input, where the first input is used to start a see-through function of a wearable device.
In the embodiments of the present application, the first input is used to start the see-through function of the wearable device. The see-through function allows the user to interact with the outside real world without taking off the wearable device.
The see-through function in the present application refers to video see-through (Video See-Through, VST): a real-time view of the real world is captured by a shooting apparatus, combined with computer graphics technology, and presented on an opaque display, finally entering the user's field of view.
The see-through function of the wearable device gives the wearer the feeling that the human eye can see the surrounding real world directly through the device; for this reason, the function is called "see-through".
In this step, the first input may be the user's input to the wearable device, the user's input to another electronic device connected to the wearable device, or an input from another electronic device to the wearable device.
The user may be the wearer of the wearable device or a non-wearer, which is not specifically limited here; the other electronic device may be a smart watch, a smart band, a controller handle, a mobile phone, a computer, or the like.
Exemplarily, the first input includes, but is not limited to: a touch input to the wearable device via a touch apparatus such as a touchscreen or a physical key, a voice command input by the user, a specific gesture input by the user, a start command input to the wearable device by another electronic device, or another feasible input, which can be determined according to actual use requirements and is not limited in the embodiments of the present application.
The touch input in the embodiments of the present application includes, but is not limited to: a tap input, a slide input, or a press input on the wearable device, or a touch input on another control device connected to the wearable device. The tap input may be a single tap, a double tap, or any number of taps, and may also be a long-press or short-press input.
For example, the touchscreen of the wearable device or of a connected electronic device may be provided with a target control corresponding to the see-through function, and the first input may be the user's tap input on that target control; alternatively, the wearable device or a connected electronic device may be provided with a physical key corresponding to the see-through function, or a physical key by which the see-through function can be selected, and the first input may be the user's press input on that physical key.
The voice command in the embodiments of the present application includes, but is not limited to: the wearable device triggering the start of its see-through function upon receiving speech such as "启动透视" ("start see-through") or "打开透视" ("turn on see-through").
The specific gesture in the embodiments of the present application includes, but is not limited to, any one of a tap gesture, a slide gesture, a drag gesture, a pressure-recognition gesture, a long-press gesture, an area-change gesture, a double-press gesture, and a double-tap gesture; the predetermined gesture may correspond to the see-through function. For example, the first input may be: the wearable device starts the see-through function upon detecting the user's predetermined gesture.
The start command input to the wearable device by another electronic device in the embodiments of the present application may be a start command configured on that electronic device which directly triggers the wearable device to start the see-through function. For example, the first input may be: an electronic device is configured with a timed start command, and when the specified time arrives, the electronic device automatically sends the start command to the wearable device, which starts the see-through function upon receiving it.
Of course, in other embodiments the first input may also take other forms, including but not limited to character input, fingerprint input, or iris input, which can be determined according to actual needs and is not limited in the embodiments of the present application. A non-limiting sketch of how these input forms could be dispatched is given below.
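As a non-limiting illustration only (not part of the claimed subject matter), the dispatch of the various first-input forms to one handler could look like the following sketch in Python. Every name here, including WearableDevice, start_see_through, and the event strings, is an illustrative assumption rather than an interface defined in this application.

```python
class WearableDevice:
    """Hypothetical stand-in for the head-mounted device described above."""

    def __init__(self):
        self.see_through_on = False

    def start_see_through(self):
        # Starting the see-through (VST) function; the outer display screen
        # will then be driven to show the target prompt information.
        self.see_through_on = True


def on_first_input(device, kind, payload=None):
    # A tap on a target control, a voice command, a predetermined gesture,
    # or a (possibly timed) start command from a connected electronic device
    # all count as the "first input" described above.
    if kind == "tap" and payload == "see_through_control":
        device.start_see_through()
    elif kind == "voice" and payload in ("启动透视", "打开透视"):
        device.start_see_through()
    elif kind == "gesture" and payload == "double_tap":
        device.start_see_through()
    elif kind == "remote_command" and payload == "start":
        device.start_see_through()
```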
In some embodiments, the see-through function can be realized by starting the shooting function of the wearable device, and the shooting function can be turned on via the camera provided on the outer display screen of the device.
After the see-through function of the wearable device is started, the wearer can directly see the people and objects in the real environment, can capture images or record videos of the real environment, and can even collect facial information. Through the device the wearer can freely photograph any person or object that enters the lens, so surrounding non-wearers may be covertly photographed without their knowledge. An even greater privacy risk in this process is that the captured images or facial recognition information may be uploaded to the server corresponding to the wearable device, where the non-wearers' information could be used arbitrarily. It can be understood that, after the see-through function is turned off, the wearer can only see the content shown on the inner display screen.
It can be understood that the see-through function of the wearable device can be associated with the camera of the outer display screen; that is, whether that camera is on can be monitored in real time. If the camera of the outer display screen is on, it is determined that the see-through function of the device is also on; if that camera is off or asleep, it is determined that the see-through function is not on. A sketch of this association follows.
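A minimal sketch, under the assumption stated above, of deriving the see-through state from the outer-camera state; CameraState and the state names are hypothetical and used only for illustration.

```python
from enum import Enum


class CameraState(Enum):
    ON = "on"
    OFF = "off"
    SLEEP = "sleep"


def see_through_enabled(camera_state: CameraState) -> bool:
    # The see-through function is considered on exactly when the outer
    # camera is on; an off or sleeping camera means see-through is off.
    return camera_state is CameraState.ON
```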
Step 120: in response to the first input, display target prompt information on the outer display screen of the wearable device, where the target prompt information is used to indicate that the see-through function of the wearable device has been turned on.
In the embodiments of the present application, by contrast, since the wearable device is provided with an outer display screen, other users can judge the working state of the device from the target prompt information shown on the outer display screen.
FIG. 2 is a schematic diagram of the display regions in an information prompting method provided by some embodiments of the present application.
As shown in FIG. 2, the wearable device may include an outer display screen 210, an inner display screen 220, and a fixing structure 240.
The outer display screen 210 may be the outer-screen display region that presents content to non-wearers, and the inner display screen 220 may be the inner-screen display region that presents content to the wearer. The fixing structure fixedly connects the outer display screen 210 and the inner display screen 220.
In actual execution, after the wearable device is started, it can detect in real time whether the see-through function is on. If, after the first input is received, the see-through function is determined to be on, then in response to the first input the on-state information can be passed to the outer display screen of the device, and the target prompt information indicating that the see-through function is on can then be displayed on that screen.
If the see-through function is determined to be off, the outer display screen may close its content display interface, display a blank image, or display text such as "See-through is off", which is not specifically limited here.
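A minimal sketch of this step-120 display logic, assuming a simple outer-screen interface; OuterDisplay and its methods are illustrative stand-ins, not an API defined in this application.

```python
class OuterDisplay:
    """Hypothetical interface to the outer display screen."""

    def show_text(self, text):
        print(f"[outer display] {text}")

    def show_blank(self):
        print("[outer display] (blank)")


def update_outer_display(display, see_through_on):
    if see_through_on:
        # Any of the target-prompt forms described below could be shown
        # here; plain text is the simplest case.
        display.show_text("See-through is on")
    else:
        display.show_blank()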
As shown in FIG. 3, the wearable device includes an outer display screen 310 and a head fixing apparatus 320. The head fixing apparatus 320 is used to fix the wearable device to the wearer's head. It may include a lateral headband connecting the two lateral ends of the outer display screen 310 and a longitudinal headband connecting the top of the outer display screen 310 with the lateral headband; the head fixing apparatus 320 in FIG. 3 is only an example, and the present application does not specifically limit this structure.
When the see-through function of the wearable device is not on, the outer display screen 310 does not display the target prompt information, or displays a blank image.
When the wearer of the wearable device is in a public environment, non-wearers can learn in time, from the target prompt information shown on the outer display screen of the device, whether the device has turned on its see-through function.
For example, a wearer is trying a VR device in a shopping mall. When the wearer turns on the see-through function of the VR device, its outer display screen displays the target prompt information, so people passing by can immediately know whether the wearer is using the see-through function; to avoid being covertly photographed, they can choose to keep away from the wearer.
According to the information prompting method provided by the embodiments of the present application, by displaying target prompt information on the outer display screen when the wearable device starts its see-through function, the fact that the see-through function is on can be presented intuitively, letting non-wearers in the same real scene as the wearer learn the usage state of the device from the outer display screen in a timely manner, thereby safeguarding the privacy of non-wearers to the greatest extent.
In some embodiments, the target prompt information includes at least one of the following:
text prompt information;
a partial or complete facial image of a virtual object;
a facial image of the wearer of the wearable device, where the facial image includes a partial or complete face image of the wearer;
a scene image, where the scene image is a two-dimensional or three-dimensional scene image of the real scene in which the wearable device is located.
It can be understood that the target prompt information may be a combination of at least one of the above kinds of information. For example, it may combine text prompt information with the facial image of a virtual object, or with the facial image of the wearer, or with a scene image.
At least two kinds of information can be stitched and combined through image processing technology; combining too many kinds of information may weaken the prompting effect, so usually a single kind of target prompt information, or a combination of two kinds, is used, as in the sketch below.
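A sketch of combining two kinds of target prompt information, here a text prompt overlaid on a facial or scene image, using the Pillow imaging library. The file path, prompt string, and text position are placeholder assumptions; an actual device would choose a font and layout suited to its outer screen.

```python
from PIL import Image, ImageDraw


def compose_prompt(base_image_path, text="Camera on, see-through active"):
    # Load the base image (e.g. a facial image or 2D scene image).
    img = Image.open(base_image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Overlay the text prompt in the top-left corner of the base image.
    draw.text((10, 10), text, fill=(255, 255, 255))
    return img
```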
As shown in FIG. 4, when the see-through function of the wearable device is on, text prompt information and a partial facial image of a virtual object are displayed simultaneously on the outer display screen.
When the target prompt information includes text prompt information, the text may be preset text conveying the meaning "the see-through function is on"; that is, a non-wearer can obtain the prompt "the see-through function is on" directly from the text.
It can be understood that the preset text may be default text, e.g., "See-through is on"; or personalized text set according to the user's needs, e.g., "Camera on, I can see you" or "Camera on, scanning the surroundings"; or any other text that can indicate that the see-through function is on, which is not specifically limited here.
The embodiments of the present application likewise do not specifically limit the visual presentation of the text prompt information.
As shown in FIG. 5, when the see-through function of the wearable device is on, the outer display screen displays the text prompt "Camera on, I can see you" (摄像头已开启，我在看你).
When the target prompt information includes a preset identification image, the preset identification image may be a pre-configured image with a specific meaning, namely an identification image indicating that the see-through function is on.
For example, a "camera" icon may be used to indicate that the see-through function is on, or an "on" icon such as "√"; if the see-through function is not on, a "closed" icon such as "×" may be used.
As shown in FIG. 6, when the see-through function of the wearable device is on, a "camera" icon is displayed on the outer display screen.
When the target prompt information includes a facial image of the wearer of the wearable device, the facial image may include a partial or complete face image of the wearer. A partial face image is a facial image containing at least one facial part of the wearer, while a complete face image contains all facial parts. For example, a partial face image may be an image of the wearer's eyes, mouth, or profile, while a complete face image is a full-face image containing the wearer's facial features.
The wearer's facial image may be generated based on the wearer's facial features. As shown in FIG. 2, the wearable device may further include a face tracking sensor 230. The face tracking sensor 230 may consist of a camera group, an infrared light apparatus, or a structured light apparatus, and is used to collect facial feature data.
In actual execution, the camera group can track the wearer's face, collect and locate feature points for each facial part, and also collect face color information and face light-and-shadow information; the infrared light apparatus can collect infrared images of the face; and the structured light apparatus can collect depth images of the face. The data collected by the camera group, the infrared light apparatus, and the structured light apparatus together constitute the facial feature data.
The graphics processing unit (Graphics Processing Unit, GPU) in the wearable device can process the facial feature data collected by the face tracking sensor 230 to generate a partial or complete face image of the wearer.
The GPU can also render and reconstruct the facial feature data in real time to obtain a facial image containing expression and motion information, making the wearer's facial image shown on the outer display screen more vivid.
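A sketch of the data flow just described: the camera group, infrared apparatus, and structured-light apparatus each contribute to one facial-feature record, which a rendering step turns into a partial or full face image. Every class, field shape, and function here is a hypothetical stand-in; this application does not specify such an interface.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class FacialFeatureData:
    landmarks: np.ndarray  # per-part feature points from the camera group, (N, 2)
    color: np.ndarray      # face color information, assumed (H, W, 3)
    lighting: np.ndarray   # light-and-shadow map, assumed (H, W, 1) in [0, 1]
    infrared: np.ndarray   # infrared image of the face
    depth: np.ndarray      # depth image from the structured light apparatus


def render_face(features: FacialFeatureData, full_face: bool) -> np.ndarray:
    # Stand-in for the GPU rendering/reconstruction step: modulate the color
    # image by the lighting map; a real renderer would also use the depth
    # and landmark data to shape the reconstructed face.
    canvas = features.color.astype(np.float32)
    canvas *= features.lighting.clip(0.0, 1.0)
    if not full_face:
        # e.g. keep only the upper (eye) region for a partial face image
        canvas = canvas[: canvas.shape[0] // 2]
    return canvas.astype(np.uint8)
```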
In actual execution, when the wearer's facial image is displayed on the outer display screen of the wearable device, other non-wearers can see the wearer's facial image directly and thus learn that the see-through function of the device is on.
When the target prompt information includes a partial or complete facial image of a virtual object, the virtual object may be a three-dimensional virtual character, a two-dimensional virtual character, a three-dimensional virtual animal, or a two-dimensional virtual animal, which is not specifically limited here.
Accordingly, the partial or complete facial image of the virtual object may be a two-dimensional or three-dimensional facial image. For example, the virtual object may be an anime character, a virtual digital human, a fictional role, or the like.
It should be noted that the partial or complete facial image of the virtual object may be the facial image of a fictional character or animal, or may be generated based on the facial feature data in the above embodiments; that is, the wearer's facial feature data can be converted into a partial or complete facial image of a virtual object whose facial feature data corresponds to the wearer's.
A partial facial image of the virtual object is a facial image containing at least one facial part of the virtual object, while a complete facial image contains all of its facial parts.
"See-through" literally means "the line of sight passes through", so a partial facial image of the virtual object may include at least the eyes, making it easier for non-wearers to grasp the meaning of the target prompt information.
As shown in FIG. 7, when the see-through function of the wearable device is on, an eye image of a two-dimensional virtual character, a fictional figure, is displayed on the outer display screen.
In some embodiments, when the target prompt information includes a partial face image of the wearer of the wearable device, the face feature points in the partial face image match the face feature points in a target face region of the wearer;
the outer face contour in the partial face image is identical to the outer face contour of the wearer's target face region;
where the target face region is the region of the wearer's face occluded by the outer display screen of the wearable device.
In actual use of the wearable device, the wearer wears the device on the head, and the outer display screen of the device partially occludes the wearer's face.
To improve the visual experience of non-wearers, a partial face image of the wearer can be presented on the outer display screen according to the region of the wearer's face occluded by the outer display screen.
In the embodiments of the present application, the partial face image is the face image corresponding to the target face region, and the target face region is the region of the wearer's face occluded by the outer display screen of the device.
In actual execution, the wearer's partial face image satisfies the following conditions: the face feature points in the partial face image match those in the wearer's target face region, and the outer face contour in the partial face image is identical to that of the target face region; a sketch of this check is given after the FIG. 8 example below.
It can be understood that the matching of face feature points means that the facial parts contained in the partial face image correspond, in kind and in position, to those in the target face region. On this basis, the outer face contour in the partial face image is also identical to that of the target face region, so the wearer's partial face image presented on the outer display screen can coincide exactly with the face image corresponding to the target face region.
From a non-wearer's perspective, the combination of the wearer's partial face image and the unoccluded face region can give the non-wearer the visual experience of a complete face.
As shown in the left part of FIG. 8, the facial parts in the target face region include the eyes, nose, and ears; as shown in the right part of FIG. 8, the facial parts in the partial face image shown on the outer display screen are likewise the eyes, nose, and ears, i.e., the face feature points in the partial face image match those in the wearer's target face region. The outer face contour of the target face region is also identical to that in the partial face image. In the right part of FIG. 8, the wearer's partial face image combined with the unoccluded face region presents a smooth, complete face.
According to the information prompting method provided by the embodiments of the present application, matching the outer contour and the face feature points of the partial face image with those of the target face region improves the display effect of the target prompt information, making the face seen by non-wearers more lifelike.
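A sketch of the two matching conditions above: the feature points of the displayed partial face image must match those of the occluded target face region, and the outer face contours must coincide. The array shapes and the pixel tolerance are illustrative assumptions.

```python
import numpy as np


def partial_face_matches(display_pts, target_pts,
                         display_contour, target_contour,
                         tol=2.0):
    """Return True when the feature points match within tol pixels and the
    outer contours are numerically identical. All inputs are (N, 2) arrays
    of pixel coordinates."""
    pts_match = (
        display_pts.shape == target_pts.shape
        and np.all(np.linalg.norm(display_pts - target_pts, axis=1) <= tol)
    )
    contour_same = (
        display_contour.shape == target_contour.shape
        and np.allclose(display_contour, target_contour, atol=tol)
    )
    return pts_match and contour_same
```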
When the target prompt information includes a scene image, the scene image is a two-dimensional or three-dimensional scene image of the real scene in which the wearable device is located; the real scene is the real environment in which the device is located.
When the see-through function of the wearable device is on, a two-dimensional or three-dimensional scene image can be generated from the real scene.
When the scene image is displayed on the outer display screen of the wearable device, a non-wearer can directly see the real environment from the same viewing angle as the wearer. The scene image may include all the people and objects in the real environment, and from this scene image the non-wearer can learn that the see-through function of the device is on.
FIG. 9 shows the real scene, as seen by the wearer, in which the wearable device is located; as shown in FIG. 10, the two-dimensional scene image of this real scene is displayed on the outer display screen of the device.
According to the information prompting method provided by the embodiments of the present application, displaying different target prompt information on the outer display screen allows non-wearers to obtain the prompt that the see-through function is on simply and clearly, ensuring the prompting effect while also preserving a certain visual experience.
In some embodiments, when the target prompt information includes a facial image of the wearer of the wearable device, displaying the target prompt information on the outer display screen of the device includes:
updating the facial image upon detecting that the wearer's expression and motion information has changed.
It should be noted that, to improve the visual effect of the target prompt information, the information may be displayed statically or dynamically.
Static display means that, as long as the on state of the see-through function remains unchanged, the statically displayed content also remains unchanged. For example, when the wearable device turns on the see-through function, the wearer's facial image is generated and shown on the outer display screen, and the image content can remain unchanged.
In the embodiments of the present application, dynamic display means that a facial image of the wearer, or of a virtual object, generated from the wearer's facial feature data can be dynamically updated in real time as the facial expression and motion information changes, so that the wearer's current expression and motion are shown in real time; surrounding non-wearers can thus perceive the wearer's current real emotional state, which enhances the realism of the target prompt information.
In some embodiments, dynamic display may also mean giving the text prompt animation effects such as blinking or scrolling while the see-through function is on, or, when displaying the facial image of a two-dimensional virtual character, giving the eye image a natural blinking effect or the mouth image an opening-and-closing effect, which is not specifically limited here. All of these dynamic effects can be handled by the GPU.
In actual execution, upon detecting a change in the wearer's expression and motion information, different facial images can be generated in real time, yielding multiple consecutively captured static facial image frames. Playing these consecutive frames on the outer display screen then shows a facial image whose expression and motion change dynamically.
According to the information prompting method provided by the embodiments of the present application, dynamically updating the facial image on the outer display screen upon detecting a change in the wearer's expression and motion information lets the facial image follow real facial expression changes in real time, bringing the displayed information close to the real facial expression and making the display more lifelike.
In some embodiments, before updating the facial image upon detecting a change in the wearer's expression and motion information, the information prompting method further includes:
collecting the wearer's face feature points;
determining that the wearer's expression and motion information has changed upon detecting that the positions of at least some of the wearer's face feature points have changed.
In actual execution, the wearable device can collect the wearer's face feature points in real time through the face tracker, thereby tracking the wearer's face feature points.
It can be understood that, in the consecutive facial image frames generated in real time, the device can determine the positions of the face feature points in each frame; comparing the position of the same feature point across different frames then determines whether its position has changed.
If the positions of at least some of the wearer's face feature points are detected to have changed, the motion or shape of at least one part of the wearer's face is changing, so it can be determined that the wearer's expression and motion information has changed. For example, upon detecting that the positions of the wearer's eye feature points have changed, a change in expression and motion is determined, and the facial image shown on the outer display screen reflects the wearer's gaze shifting from looking left to looking right.
According to the information prompting method provided by the embodiments of the present application, collecting and checking the wearer's face feature points makes it possible to detect in real time whether the wearer's expression and motion information has changed, and to update the facial image in real time when it has. A sketch of this change test follows.
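A sketch of the change test described above: track the same feature points across consecutive frames and flag an expression change when any point moves beyond a threshold. The threshold value is an assumption chosen only for illustration.

```python
import numpy as np


def expression_changed(prev_landmarks, curr_landmarks, threshold=3.0):
    """prev_landmarks, curr_landmarks: (N, 2) arrays of the same N face
    feature points in two consecutive frames, in pixel coordinates."""
    displacement = np.linalg.norm(curr_landmarks - prev_landmarks, axis=1)
    # If at least some feature points moved (e.g. the eye points shifting
    # from looking left to looking right), the expression is deemed changed
    # and the facial image on the outer display should be updated.
    return bool(np.any(displacement > threshold))
```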
In some embodiments, when the target prompt information includes the scene image, before displaying the target prompt information on the outer display screen of the wearable device, the information prompting method further includes:
collecting scene information of the real scene in which the wearable device is located;
generating a two-dimensional or three-dimensional scene image of the real scene based on the scene information;
and displaying the target prompt information on the outer display screen of the wearable device includes:
displaying the two-dimensional or three-dimensional scene image on the outer display screen of the wearable device.
In actual execution, the scene information of the real scene in which the wearable device is located can be captured directly by the camera of the device to generate a two-dimensional scene image.
The wearer can see a shooting preview interface of the real scene on the inner display screen, and thus sees the scene information of the real scene directly; a non-wearer can see, on the outer display screen, the shooting preview image corresponding to the two-dimensional scene image, and thereby knows that the see-through function of the device is on.
It can be understood that the wearer can walk around freely in the real scene, so the scene information collected by the device changes dynamically; the real scene can therefore be shot continuously to generate consecutive two-dimensional scene images, which are displayed continuously on the outer display screen so that the two-dimensional scene image under the wearer's current viewing angle is updated in time, as in the capture loop sketched below.
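A sketch of continuously capturing the real scene and refreshing the outer display with the current two-dimensional scene image, using OpenCV; the show_frame callback is a stand-in for the device's outer-screen interface, and the camera index is an assumption.

```python
import cv2


def stream_scene_to_outer_display(show_frame, camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()  # the current 2D scene image
            if not ok:
                break
            show_frame(frame)       # push the frame to the outer display
    finally:
        cap.release()
```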
In actual execution, the wearer can shoot two-dimensional scene images of the real scene from different positions and at different angles, obtaining a sequence of two-dimensional scene images of the real scene.
The GPU can perform feature point extraction and feature point matching on the sequence of two-dimensional scene images and carry out three-dimensional reconstruction of the scene based on a structure-from-motion algorithm (Structure From Motion, SFM) or a shape-from-silhouette algorithm (Shape From Silhouette, SFS), thereby obtaining a three-dimensional scene image of the real scene, which can then be displayed on the outer display screen; a simplified two-view sketch of the SFM step is given below.
As shown in FIG. 9, the wearer turns on the see-through function of the wearable device and shoots the real scene directly ahead, thereby acquiring the scene information of the real environment in which the device is located, and passes the scene information to the GPU. The GPU performs three-dimensional image reconstruction from the real environment information to generate the three-dimensional scene image.
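A much-simplified two-view sketch of the SFM step described above, using OpenCV: match feature points between two scene images, estimate the relative camera pose, and triangulate a sparse 3D point cloud. A production pipeline would use many views plus bundle adjustment; the camera intrinsics K are assumed known.

```python
import cv2
import numpy as np


def two_view_reconstruction(img1, img2, K):
    # Extract and match ORB feature points between the two scene images.
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # Estimate the essential matrix and recover the relative pose (R, t).
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)
    # Triangulate matched points into a sparse 3D point cloud.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
    return (pts4d[:3] / pts4d[3]).T  # sparse 3D points of the real scene
```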
According to the information prompting method provided by the embodiments of the present application, collecting the scene information of the real scene in which the wearable device is located, generating a two-dimensional or three-dimensional scene image of that scene, and displaying the two-dimensional or three-dimensional scene image on the outer display screen of the device prompts that the see-through function of the device is on, so that non-wearers receive the target prompt information in time, which helps them protect their privacy.
The executing entity of the information prompting method provided by the embodiments of the present application may be an information prompting device. In the embodiments of the present application, the information prompting device provided by the embodiments of the present application is described by taking the case where the information prompting device executes the information prompting method as an example.
Embodiments of the present application further provide an information prompting device.
FIG. 11 is a schematic structural diagram of an information prompting device provided by some embodiments of the present application. As shown in FIG. 11, the information prompting device includes a receiving module 1110 and a display module 1120.
The receiving module 1110 is configured to receive a first input, where the first input is used to start a see-through function of a wearable device;
the display module 1120 is configured to display target prompt information on an outer display screen of the wearable device, where the target prompt information is used to indicate that the see-through function of the wearable device has been turned on.
According to the information prompting device provided by the embodiments of the present application, by displaying target prompt information on the outer display screen when the wearable device starts its see-through function, the fact that the see-through function is on can be presented intuitively, letting non-wearers in the same real scene as the wearer learn the usage state of the device from the outer display screen in a timely manner, thereby safeguarding the privacy of non-wearers to the greatest extent.
In some embodiments, the target prompt information includes at least one of the following:
text prompt information;
a partial or complete facial image of a virtual object;
a facial image of the wearer of the wearable device, where the facial image includes a partial or complete face image of the wearer;
a scene image, where the scene image is a two-dimensional or three-dimensional scene image of the real scene in which the wearable device is located.
In some embodiments, when the target prompt information includes a partial face image of the wearer of the wearable device, the face feature points in the partial face image match the face feature points in a target face region of the wearer;
the outer face contour in the partial face image is identical to the outer face contour of the wearer's target face region;
where the target face region is the region of the wearer's face occluded by the outer display screen of the wearable device.
In some embodiments, when the target prompt information includes a facial image of the wearer of the wearable device, the display module 1120 is further configured to:
update the facial image upon detecting that the wearer's expression and motion information has changed.
In some embodiments, the device further includes:
a first collection module, configured to collect the wearer's face feature points;
a first processing module, configured to determine that the wearer's expression and motion information has changed upon detecting that the positions of at least some of the wearer's face feature points have changed.
In some embodiments, when the target prompt information includes the scene image, the device further includes:
a second collection module, configured to collect scene information of the real scene in which the wearable device is located;
a second processing module, configured to generate a two-dimensional or three-dimensional scene image of the real scene based on the scene information;
the display module 1120 is further configured to:
display the two-dimensional or three-dimensional scene image on the outer display screen of the wearable device.
The information prompting device in the embodiments of the present application may be an electronic device or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal or a device other than a terminal. Exemplarily, the electronic device may be a mobile phone, a tablet computer, a laptop computer, a palmtop computer, an in-vehicle electronic device, a mobile internet device (Mobile Internet Device, MID), an augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) device, a robot, a wearable device, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook, or a personal digital assistant (personal digital assistant, PDA); it may also be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (personal computer, PC), a television (television, TV), a teller machine, or a self-service machine, which is not specifically limited in the embodiments of the present application.
The information prompting device in the embodiments of the present application may be a device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The information prompting device provided by the embodiments of the present application can implement each process implemented by the method embodiments of FIG. 1 to FIG. 10; to avoid repetition, details are not repeated here.
Optionally, as shown in FIG. 12, embodiments of the present application further provide an electronic device 1200, including a processor 1201, a memory 1202, and a program or instructions stored in the memory 1202 and executable on the processor 1201; when executed by the processor 1201, the program or instructions implement each process of the above information prompting method embodiments and can achieve the same technical effects, which are not repeated here to avoid repetition.
It should be noted that the electronic device in the embodiments of the present application includes the mobile electronic devices and non-mobile electronic devices described above.
FIG. 13 is a schematic diagram of the hardware structure of an electronic device implementing the embodiments of the present application.
The electronic device 1300 includes, but is not limited to: a radio frequency unit 1301, a network module 1302, an audio output unit 1303, an input unit 1304, a sensor 1305, a display unit 1306, a user input unit 1307, an interface unit 1308, a memory 1309, and a processor 1310.
Those skilled in the art can understand that the electronic device 1300 may further include a power supply (such as a battery) for powering the components; the power supply may be logically connected to the processor 1310 through a power management system, thereby implementing functions such as managing charging, discharging, and power consumption through the power management system. The structure of the electronic device shown in FIG. 13 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently, which is not repeated here.
The user input unit 1307 is configured to receive a first input, where the first input is used to start a see-through function of a wearable device;
the processor 1310 is configured to, in response to the first input, display target prompt information on an outer display screen of the wearable device, where the target prompt information is used to indicate that the see-through function of the wearable device has been turned on.
According to the electronic device provided by the embodiments of the present application, by displaying target prompt information on the outer display screen when the wearable device starts its see-through function, the fact that the see-through function is on can be presented intuitively, letting non-wearers in the same real scene as the wearer learn the usage state of the device from the outer display screen in a timely manner, thereby safeguarding the privacy of non-wearers to the greatest extent.
Optionally, the target prompt information includes at least one of the following:
text prompt information;
a partial or complete facial image of a virtual object;
a facial image of the wearer of the wearable device, where the facial image includes a partial or complete face image of the wearer;
a scene image, where the scene image is a two-dimensional or three-dimensional scene image of the real scene in which the wearable device is located.
Optionally, when the target prompt information includes a partial face image of the wearer of the wearable device, the face feature points in the partial face image match the face feature points in a target face region of the wearer;
the outer face contour in the partial face image is identical to the outer face contour of the wearer's target face region;
where the target face region is the region of the wearer's face occluded by the outer display screen of the wearable device.
Optionally, the processor 1310 is further configured to update the facial image upon detecting that the wearer's expression and motion information has changed.
Optionally, the processor 1310 is further configured to collect the wearer's face feature points;
and to determine that the wearer's expression and motion information has changed upon detecting that the positions of at least some of the wearer's face feature points have changed.
Optionally, the processor 1310 is further configured to collect scene information of the real scene in which the wearable device is located;
and to generate a two-dimensional or three-dimensional scene image of the real scene based on the scene information;
where displaying the target prompt information on the outer display screen of the wearable device includes:
displaying the two-dimensional or three-dimensional scene image on the outer display screen of the wearable device.
It should be understood that, in the embodiments of the present application, the input unit 1304 may include a graphics processing unit (Graphics Processing Unit, GPU) 13041 and a microphone 13042; the graphics processing unit 13041 processes image data of still pictures or videos obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The display unit 1306 may include a display panel 13061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1307 includes at least one of a touch panel 13071 and other input devices 13072. The touch panel 13071, also called a touchscreen, may include two parts: a touch detection apparatus and a touch controller. The other input devices 13072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not repeated here.
The memory 1309 can be used to store software programs and various data. It may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store the operating system, application programs or instructions required by at least one function (such as a sound playing function and an image playing function), and the like. In addition, the memory 1309 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static random access memory (Static RAM, SRAM), a dynamic random access memory (Dynamic RAM, DRAM), a synchronous dynamic random access memory (Synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDRSDRAM), an enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), a synchlink dynamic random access memory (Synchlink DRAM, SLDRAM), or a direct rambus random access memory (Direct Rambus RAM, DRRAM). The memory 1309 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 1310 may include one or more processing units; optionally, the processor 1310 integrates an application processor and a modem processor, where the application processor mainly handles operations involving the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication signals, e.g., a baseband processor. It can be understood that the modem processor may alternatively not be integrated into the processor 1310.
Embodiments of the present application further provide a readable storage medium storing a program or instructions; when executed by a processor, the program or instructions implement each process of the above information prompting method embodiments and can achieve the same technical effects, which are not repeated here to avoid repetition.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Embodiments of the present application further provide a chip, including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement each process of the above information prompting method embodiments with the same technical effects, which are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be called a system-level chip, a system chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes the element. Furthermore, it should be pointed out that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; it may also include performing the functions in a substantially simultaneous manner or in the reverse order according to the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), including several instructions to enable a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the specific implementations described above, which are merely illustrative rather than restrictive. Inspired by the present application, those of ordinary skill in the art can devise many other forms without departing from the spirit of the present application and the scope protected by the claims, all of which fall within the protection of the present application.

Claims (16)

  1. An information prompting method, comprising:
    receiving a first input, wherein the first input is used to start a see-through function of a wearable device;
    in response to the first input, displaying target prompt information on an outer display screen of the wearable device, wherein the target prompt information is used to indicate that the see-through function of the wearable device has been turned on.
  2. The information prompting method according to claim 1, wherein the target prompt information comprises at least one of the following:
    text prompt information;
    a partial or complete facial image of a virtual object;
    a facial image of a wearer of the wearable device, wherein the facial image comprises a partial or complete face image of the wearer;
    a scene image, wherein the scene image is a two-dimensional or three-dimensional scene image of the real scene in which the wearable device is located.
  3. The information prompting method according to claim 2, wherein, when the target prompt information comprises a partial face image of the wearer of the wearable device, face feature points in the partial face image match face feature points in a target face region of the wearer;
    an outer face contour in the partial face image is identical to an outer face contour of the wearer's target face region;
    wherein the target face region is a region of the wearer's face occluded by the outer display screen of the wearable device.
  4. The information prompting method according to claim 2, wherein, when the target prompt information comprises a facial image of the wearer of the wearable device, the displaying target prompt information on the outer display screen of the wearable device comprises:
    updating the facial image upon detecting that the wearer's expression and motion information has changed.
  5. The information prompting method according to claim 4, wherein, before the updating the facial image upon detecting that the wearer's expression and motion information has changed, the method further comprises:
    collecting the wearer's face feature points;
    determining that the wearer's expression and motion information has changed upon detecting that positions of at least some of the wearer's face feature points have changed.
  6. The information prompting method according to claim 2, wherein, when the target prompt information comprises a scene image, before the displaying target prompt information on the outer display screen of the wearable device, the method further comprises:
    collecting scene information of the real scene in which the wearable device is located;
    generating a two-dimensional or three-dimensional scene image of the real scene based on the scene information;
    the displaying target prompt information on the outer display screen of the wearable device comprises:
    displaying the two-dimensional or three-dimensional scene image on the outer display screen of the wearable device.
  7. An information prompting device, comprising:
    a receiving module, configured to receive a first input, wherein the first input is used to start a see-through function of a wearable device;
    a display module, configured to display target prompt information on an outer display screen of the wearable device, wherein the target prompt information is used to indicate that the see-through function of the wearable device has been turned on.
  8. The information prompting device according to claim 7, wherein the target prompt information comprises at least one of the following:
    text prompt information;
    a partial or complete facial image of a virtual object;
    a facial image of a wearer of the wearable device, wherein the facial image comprises a partial or complete face image of the wearer;
    a scene image, wherein the scene image is a two-dimensional or three-dimensional scene image of the real scene in which the wearable device is located.
  9. The information prompting device according to claim 8, wherein, when the target prompt information comprises a partial face image of the wearer of the wearable device, face feature points in the partial face image match face feature points in a target face region of the wearer;
    an outer face contour in the partial face image is identical to an outer face contour of the wearer's target face region;
    wherein the target face region is a region of the wearer's face occluded by the outer display screen of the wearable device.
  10. The information prompting device according to claim 8, wherein, when the target prompt information comprises a facial image of the wearer of the wearable device, the display module is further configured to:
    update the facial image upon detecting that the wearer's expression and motion information has changed.
  11. The information prompting device according to claim 10, wherein the device further comprises:
    a first collection module, configured to collect the wearer's face feature points;
    a first processing module, configured to determine that the wearer's expression and motion information has changed upon detecting that positions of at least some of the wearer's face feature points have changed.
  12. The information prompting device according to claim 8, wherein, when the target prompt information comprises the scene image, the device further comprises:
    a second collection module, configured to collect scene information of the real scene in which the wearable device is located;
    a second processing module, configured to generate a two-dimensional or three-dimensional scene image of the real scene based on the scene information;
    the display module is further configured to:
    display the two-dimensional or three-dimensional scene image on the outer display screen of the wearable device.
  13. An electronic device, comprising a processor and a memory, wherein the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the information prompting method according to any one of claims 1 to 6.
  14. A readable storage medium, storing a program or instructions which, when executed by a processor, implement the steps of the information prompting method according to any one of claims 1 to 6.
  15. A chip, comprising a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the information prompting method according to any one of claims 1 to 6.
  16. A computer program product, stored in a non-transitory storage medium, wherein the program product is executed by at least one processor to implement the information prompting method according to any one of claims 1 to 6.
PCT/CN2023/134357 2022-11-30 2023-11-27 Information prompting method, information prompting device, electronic device and readable storage medium WO2024114584A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211528363.3A CN115857856A (zh) 2022-11-30 2022-11-30 Information prompting method, information prompting device, electronic device and readable storage medium
CN202211528363.3 2022-11-30

Publications (1)

Publication Number Publication Date
WO2024114584A1 true WO2024114584A1 (zh) 2024-06-06

Family

ID=85668768

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/134357 WO2024114584A1 (zh) 2022-11-30 2023-11-27 Information prompting method, information prompting device, electronic device and readable storage medium

Country Status (2)

Country Link
CN (1) CN115857856A (zh)
WO (1) WO2024114584A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115857856A (zh) * 2022-11-30 2023-03-28 维沃移动通信有限公司 Information prompting method, information prompting device, electronic device and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109791346A (zh) * 2016-09-27 2019-05-21 斯纳普公司 Eyewear device mode indication
CN111699460A (zh) * 2018-02-02 2020-09-22 交互数字Ce专利控股公司 Multi-view virtual reality user interface
CN115022611A (zh) * 2022-03-31 2022-09-06 青岛虚拟现实研究院有限公司 VR picture display method, electronic device and readable storage medium
TW202238222A (zh) * 2020-12-23 2022-10-01 美商元平台技術有限公司 Reverse pass-through glasses for augmented reality and virtual reality devices
CN115857856A (zh) * 2022-11-30 2023-03-28 维沃移动通信有限公司 Information prompting method, information prompting device, electronic device and readable storage medium

Also Published As

Publication number Publication date
CN115857856A (zh) 2023-03-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23896735

Country of ref document: EP

Kind code of ref document: A1