CN108922115B - Information processing method and electronic equipment - Google Patents

Information processing method and electronic equipment

Info

Publication number
CN108922115B
CN108922115B
Authority
CN
China
Prior art keywords
information
real
environment
equipment
user
Prior art date
Legal status
Active
Application number
CN201810672842.XA
Other languages
Chinese (zh)
Other versions
CN108922115A (en)
Inventor
金小平
王和平
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201810672842.XA
Publication of CN108922115A
Application granted
Publication of CN108922115B
Legal status: Active

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00: Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B 21/02: Alarms for ensuring the safety of persons
    • G08B 21/04: Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B 21/0438: Sensor means for detecting
    • G08B 21/0469: Presence detectors to detect unsafe condition, e.g. infrared sensor, microphone

Abstract

The embodiment of the application provides an information processing method and electronic equipment, wherein the method comprises the following steps: determining equipment state information; determining, according to the state information, whether to acquire environment information of the real environment where the equipment is located, wherein the environment information at least comprises spatial information of the environment where the equipment is currently located and depth information of a first real object located in the environment; and determining, according to the state information and the environment information, whether to output prompt information at least used for prompting a user to pay attention to the position of the first real object in the virtual environment presented by the equipment. The information processing method can automatically detect whether the first real object in the real environment poses a safety threat to the user and prompt the user in a timely manner, so that potential safety hazards are avoided and the user's experience with the electronic equipment is improved.

Description

Information processing method and electronic equipment
Technical Field
The embodiment of the application relates to the field of virtual reality or augmented reality equipment, in particular to an information processing method and electronic equipment applying the method.
Background
With the rapid development of AR/VR technologies (VR: Virtual Reality; AR: Augmented Reality), immersive AR/VR experiences are becoming widespread. As people spend ever more time staying and being entertained in AR/VR, the applicant has found that the safety problems arising from such use become increasingly prominent. During use, AR/VR constantly generates virtual objects as the user moves, so that virtual objects and real objects are presented together. As the technology develops, the virtual objects become ever more realistic and people find it increasingly difficult to distinguish them from real objects. While this improves the visual experience, it also brings corresponding dangers: immersed in interaction with the virtual world, people tend to neglect external physical objects, and in particular, when a user walks on a street, failing to recognize entities such as vehicles in time can easily lead to collisions and other accidents.
Summary of the application
The application provides an information processing method capable of automatically detecting whether a first real object in the real environment poses a safety threat to a user and of prompting the user in a timely manner, and electronic equipment applying the method.
In order to solve the above technical problem, an embodiment of the present application provides an information processing method, including:
determining equipment state information;
determining whether to acquire environment information of a real environment where the equipment is located according to the state information, wherein the environment information at least comprises space information of the environment where the equipment is currently located and depth information of a first real object located in the environment;
and determining, according to the state information and the environment information, whether to output prompt information at least used for prompting a user to pay attention to the position of the first real object in the virtual environment presented by the equipment.
Preferably, the determining the device state information specifically includes:
at least determining application information of the device currently in a running state and real-time position information of the device in a real environment.
Preferably, the determining whether to acquire the real environment information according to the state information specifically includes:
and determining, according to the state information, whether the application information of the equipment in the current running state is virtual reality and/or augmented reality application information, and if so, acquiring the environment information of the real environment where the equipment is located.
Preferably, the determining whether to output the prompt information according to the state information and the environment information specifically includes:
determining a real coordinate system of the real environment according to the environment information;
determining a corresponding relation between the real coordinate system and a virtual coordinate system of the virtual environment;
and determining whether to output prompt information according to the corresponding relation, the real-time position information of the equipment and the depth information of the first real object.
Preferably, the determining whether to output the prompt information according to the correspondence, the real-time location information of the device, and the depth information of the first real object specifically includes:
determining motion state information of the equipment in a real environment according to the real-time position information of the equipment and the corresponding relation;
determining actual distance information between the equipment and the first real object according to the motion state information and the depth information of the first real object;
and determining whether the actual distance information meets a preset requirement, and if not, outputting the prompt information.
Preferably, the method further comprises the following steps:
determining the structural information of the first real object according to the depth information of the first real object;
and determining whether the first real object is a preset target object according to the structural information, and if so, outputting the prompt information.
Preferably, the outputting the prompt information specifically includes:
and displaying the real position corresponding to the first real object in the virtual environment with a preset effect.
Preferably, the method further comprises the following steps:
and after the prompt information is output, determining whether the actual distance between the equipment and the first real object within a preset time is smaller than a preset minimum safe distance value, and if so, stopping the equipment from continuing to present a virtual picture to the user.
An embodiment of the present application further provides an electronic device, including:
the processing device is used for determining state information of the equipment, determining whether to acquire environment information of a real environment where the equipment is located according to the state information, wherein the environment information at least comprises space information of the environment where the equipment is currently located and depth information of a first real object located in the environment, and determining whether to output prompt information at least for prompting a user to pay attention to the position of the first real object in a virtual environment presented by the equipment according to the state information and the environment information.
Preferably, the determining the device state information specifically includes:
at least determining application information of the device currently in a running state and real-time position information of the device in a real environment.
Based on the disclosure of the above embodiments, it can be known that the embodiments of the present application have the following beneficial effects:
when the user is immersed in the virtual environment created by the electronic equipment, the electronic equipment can automatically detect whether the first real object in the real environment poses a safety threat to the user and prompt the user in a timely manner, so that potential safety hazards caused by contact or collision with the first real object are avoided; the entertainment needs of the user are met, the safety of the user is protected, and the user's experience with the electronic equipment is improved.
Drawings
Fig. 1 is a flowchart of an information processing method in the embodiment of the present application.
Fig. 2 is a flowchart of an information processing method in another embodiment of the present application.
Fig. 3 is a flowchart of an information processing method in another embodiment of the present application.
Fig. 4 is a flowchart of an information processing method in another embodiment of the present application.
Fig. 5 is a block diagram of an electronic device in the embodiment of the present application.
Detailed Description
Specific embodiments of the present application will be described in detail below with reference to the accompanying drawings, but the present application is not limited thereto.
It will be understood that various modifications may be made to the embodiments disclosed herein. Accordingly, the foregoing description should not be construed as limiting, but merely as exemplifications of embodiments. Other modifications will occur to those skilled in the art within the scope and spirit of the disclosure.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the disclosure and, together with a general description of the disclosure given above, and the detailed description of the embodiments given below, serve to explain the principles of the disclosure.
These and other characteristics of the present application will become apparent from the following description of preferred forms of embodiment, given as non-limiting examples, with reference to the attached drawings.
It should also be understood that, although the present application has been described with reference to some specific examples, a person of skill in the art shall certainly be able to achieve many other equivalent forms of application, having the characteristics as set forth in the claims and hence all coming within the field of protection defined thereby.
The above and other aspects, features and advantages of the present disclosure will become more apparent in view of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present disclosure are described hereinafter with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely examples of the disclosure, which may be embodied in various forms. Well-known and/or repeated functions and structures have not been described in detail so as not to obscure the present disclosure with unnecessary or redundant detail. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure in virtually any appropriately detailed structure.
The specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," which may each refer to one or more of the same or different embodiments in accordance with the disclosure.
Hereinafter, embodiments of the present application will be described in detail with reference to the accompanying drawings.
As noted above, with the rapid development of AR/VR technology, the longer people stay and are entertained in AR/VR, the more prominent the associated safety problems become. During use, AR/VR constantly generates virtual objects as the user moves, so that virtual objects and real objects are presented together. As the technology develops, the virtual objects become ever more realistic and people find it increasingly difficult to distinguish them from real objects. While this improves the visual experience, it also brings corresponding dangers: immersed in interaction with the virtual world, people tend to neglect external physical objects, and in particular, when a user walks on a street, failing to recognize entities such as vehicles in time can easily lead to collisions and other accidents.
To solve the above technical problem, as shown in fig. 1, an embodiment of the present application provides an information processing method, for example, applicable to an AR/VR device, the method including:
the method comprises the following steps: determining equipment state information;
step two: determining whether to acquire environment information of a real environment where the equipment is located according to the state information, wherein the environment information at least comprises space information of the environment where the equipment is currently located and depth information of a first real object located in the environment;
step three: and determining whether to output prompt information at least for prompting the user to pay attention to the position of the first real object in the virtual environment presented by the equipment according to the state information and the environment information.
By adopting this method, when the user is immersed in a virtual environment created by electronic equipment such as an AR/VR device, the electronic equipment can automatically detect whether a first real object in the real environment poses a safety threat to the user and prompt the user in a timely manner, so that potential safety hazards caused by contact or collision with the first real object are avoided; the entertainment needs of the user are met, the safety of the user is protected, and the user's experience with the electronic equipment is improved.
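The overall flow of the three steps can be summarized in the following minimal Python sketch. It is an illustration only, not an implementation defined by the application: the names DeviceState, EnvironmentInfo, acquire_environment, should_prompt, and output_prompt are assumptions introduced here.
```python
# Illustrative sketch of the three-step flow; all names are assumptions.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class DeviceState:
    running_app: str          # e.g. "vr_football_viewer"
    is_vr_or_ar_app: bool     # whether the running app is a VR/AR application
    position: tuple           # real-time position of the device (x, y, z)

@dataclass
class EnvironmentInfo:
    space_info: dict          # spatial description of the current real environment
    first_object_depth: float # depth (distance) of the first real object, in meters

def process(state: DeviceState,
            acquire_environment: Callable[[], Optional[EnvironmentInfo]],
            should_prompt: Callable[[DeviceState, EnvironmentInfo], bool],
            output_prompt: Callable[[EnvironmentInfo], None]) -> None:
    """Step one: state is given; step two: conditionally acquire environment
    information; step three: conditionally output prompt information."""
    if not state.is_vr_or_ar_app:
        return                        # no immersion, so no safety reminder is needed
    env = acquire_environment()
    if env is not None and should_prompt(state, env):
        output_prompt(env)
```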
Further, the first step: determining equipment state information;
the step is set to determine the current operation state, whether the electronic device, such as an AR/VR device, is in an on state or an off state; or what kind of application program the current device runs, whether the connection with the wireless network is stable, etc., so as to determine whether the user needs the device to execute step 2 currently according to the obtained information about the current state of the device or the state corresponding to the state within a time threshold, that is, whether the safety reminding function needs to be started, so as to protect the use safety of the user.
Specifically, in this embodiment, step one is executed as follows:
at least determining application information of the electronic equipment in a running state and real-time position information of the equipment.
The information about the application program currently in the running state is obtained in order to determine whether the user currently faces a potential safety hazard and needs a safety reminder; that is, it is determined whether the user is currently immersed in the virtual environment and therefore cannot correctly perceive the physical objects in the real environment. The electronic device may also determine, for example, a current security index of the user based on the real-time location information of the device, where the security index is lower when the user is outdoors and higher when the user is indoors. For example, if the application information and the location information acquired by the electronic device indicate that the user is using the electronic device outdoors to create a virtual environment and is immersed in it, the electronic device immediately performs the subsequent steps. If the application information is the same as above but the obtained location information indicates that the electronic device is indoors, whether to perform the subsequent steps may be decided according to the actual circumstances, for example according to the user's own selection. Of course, obtaining the real-time position information of the equipment is not limited to the above purposes; it also lays a foundation for the safety reminders given to the user in the subsequent steps.
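A minimal sketch of this decision follows, assuming a simple outdoor/indoor split and an explicit user opt-in for indoor checking; both assumptions are illustrative and not fixed by the application.
```python
# Hedged sketch of step one: deciding from the application info and the
# real-time location whether the safety-reminder flow should start.
def needs_safety_check(is_vr_or_ar_app: bool,
                       is_outdoors: bool,
                       user_opted_in_indoors: bool = False) -> bool:
    if not is_vr_or_ar_app:
        return False                  # device is not presenting a virtual environment
    if is_outdoors:
        return True                   # lower security index outdoors: always check
    # Indoors the security index is higher; follow the user's own selection.
    return user_opted_in_indoors
```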
Further, as shown in fig. 2, after determining that step two needs to be executed, the electronic device determines whether to acquire the real environment information according to the acquired state information, specifically as follows:
determining, according to the state information, whether the application information of the equipment in the current running state is virtual reality and/or augmented reality application information, and if so, acquiring the environment information of the real environment where the equipment is located.
That is, when the application information of the electronic device indicates that the currently running application is a virtual reality and/or augmented reality type of application, it may be determined that the user may currently face a certain potential safety hazard and a safety prompt may need to be given. The electronic device then obtains the environment information of the real environment where it is located, which is also the real environment where the user is located. For example, if the state information acquired by the electronic device indicates that the user is currently watching a virtual football game using a first application program, the environment information of the real environment where the electronic device is located is acquired immediately; or, if the state information indicates that the user is currently participating in a life simulation game using a second application program, the environment information of the real environment where the device is located is likewise acquired immediately. The environment information of the real environment may be obtained in real time or at regular intervals, and the acquisition mode may be determined by detecting whether the user has moved within a time threshold and/or according to the interaction mode between the current application and the user. For example, if the user only uses the electronic device to watch the virtual ball game and does not move within a period of time, the real environment information may be obtained at regular intervals; if it is detected that the user is participating in a virtual game with the electronic device and moves frequently, it is preferable, for the user's safety, to acquire the real environment information in real time.
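The choice between real-time and timed acquisition might be sketched as follows; the AcquisitionMode values and the two input flags are illustrative assumptions, not terms used by the application.
```python
# Illustrative sketch: choosing how often to sample the real environment,
# based on whether the user has moved within a time threshold and on the
# interaction mode of the current application.
from enum import Enum

class AcquisitionMode(Enum):
    REAL_TIME = "real_time"   # sample continuously, e.g. every frame
    PERIODIC = "periodic"     # sample at fixed intervals, e.g. every few seconds

def choose_acquisition_mode(moved_within_threshold: bool,
                            interaction_is_active: bool) -> AcquisitionMode:
    # A seated viewer of a virtual ball game who has not moved can be sampled
    # periodically; a user moving through an interactive game should be
    # sampled in real time for safety.
    if moved_within_threshold or interaction_is_active:
        return AcquisitionMode.REAL_TIME
    return AcquisitionMode.PERIODIC
```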
Further, continuing to refer to fig. 2, after acquiring the current state information of the electronic device and the environment information of the real environment where it is located, it is necessary to determine from the acquired information whether the user currently faces a potential safety hazard and needs a safety reminder, that is, to perform step three of this embodiment: determining whether to output prompt information according to the state information and the environment information. This specifically comprises the following steps:
determining a real coordinate system of a real environment according to the environment information;
determining a corresponding relation between a real coordinate system and a virtual coordinate system of a virtual environment;
and determining whether to output prompt information according to the corresponding relation, the real-time position information of the equipment and the depth information of the first real object.
When the real coordinate system of the real environment is determined, it can be determined from a stereoscopic image of the environment where the equipment is located, acquired in real time, at regular intervals, or only when a change in the environment, position, or orientation of the electronic equipment is detected. The virtual coordinate system of the virtual environment is then acquired from the information about the virtual environment stored in the application program; alternatively, the virtual picture currently presented to the user may be acquired and the virtual coordinate system determined from that picture. After the two coordinate systems are determined, a calculation may be performed on them to obtain the correspondence between them. For example, if there is a tree in the real environment whose position in the real coordinate system is A, the position of the tree in the virtual environment can be determined from the calculated correspondence and the information of position A. After the correspondence is determined, the electronic equipment can detect, from images taken by a depth-of-field camera, whether a first real object that may threaten the user's safety exists in the real environment where the user is located; if so, the depth information of the first real object is obtained, specifically by photographing the first real object with the depth-of-field camera. The electronic equipment then judges, from the calculated correspondence between the coordinate systems, the position information reflecting the user's real-time position, and the depth information of the first real object, whether the threat posed by the first real object to the user's safety is large enough that the user immersed in the virtual environment needs to be reminded, so that the user's entertainment experience is not disturbed unnecessarily.
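One common way to realize such a correspondence is a rigid transform between the two coordinate systems; the sketch below assumes a known rotation R and translation t and uses numpy, neither of which is specified by the application.
```python
# Sketch of the coordinate correspondence: once a rigid transform between the
# real and virtual coordinate systems has been estimated, a real position such
# as the tree's position A can be mapped into the virtual environment.
import numpy as np

def real_to_virtual(p_real: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map a 3-D point from the real coordinate system into the virtual one."""
    return R @ p_real + t

# Example: the tree at real position A is placed in the virtual scene.
R = np.eye(3)                          # identity rotation (axes assumed aligned)
t = np.array([0.5, 0.0, -1.2])         # assumed offset between the two origins
position_A = np.array([2.0, 0.0, 3.0])
print(real_to_virtual(position_A, R, t))
```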
Specifically, as shown in fig. 3, when the determining step is executed, the embodiment specifically includes:
determining the motion state information of the equipment in the real environment according to the real-time position information of the equipment and the corresponding relation;
determining actual distance information between the equipment and the first real object according to the motion state information and the depth information of the first real object;
and determining whether the actual distance information meets the preset requirement, and if not, outputting prompt information.
For example, when the electronic device detects that the user is immersed in a virtual environment consisting of, say, only a lawn, the electronic device obtains the real-time position information of the user in the virtual environment by obtaining the real-time position information of the electronic device in the virtual environment. The electronic device then determines the motion state information of the user in the real environment according to this real-time position information and the correspondence, so as to determine whether the user is actually moving, and at what speed and in what direction. If the result indicates that the user is moving, the electronic device may determine, from the obtained motion state information and the depth information of the first real object that may pose a safety threat, the actual distance information between the user and the first real object in the real environment. For example, if the user is currently using the electronic device indoors and the first real object is an indoor wardrobe, the electronic device may determine the actual distance between the user and the wardrobe in real time from the user's moving speed, moving direction, and current position in the real environment, acquired in real time, together with the actual position information of the wardrobe. As another example, if the user is currently using the electronic device on an outdoor street and the first real object is a vehicle travelling at low speed on the street, the electronic device may determine the actual distance between the user and the vehicle from the user's motion state information in the real environment, acquired in real time, and the depth information of the vehicle obtained from stereoscopic images of the vehicle captured in real time, which includes the moving speed, direction, and position information of the vehicle.
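A hedged sketch of this distance determination follows, assuming the user's and the object's positions and velocities have already been extracted from the position information and the depth information; the look-ahead horizon and all numbers are illustrative.
```python
# Sketch of the distance check: the user's real position is advanced along the
# measured velocity, and the distance to the first real object (a static
# wardrobe or a moving vehicle) is computed from its depth-derived position.
from typing import Optional
import numpy as np

def actual_distance(user_pos: np.ndarray, user_vel: np.ndarray,
                    object_pos: np.ndarray,
                    object_vel: Optional[np.ndarray] = None,
                    horizon_s: float = 0.5) -> float:
    """Distance (in meters) between user and object a short time ahead."""
    if object_vel is None:
        object_vel = np.zeros_like(object_pos)   # static object such as a wardrobe
    future_user = user_pos + user_vel * horizon_s
    future_obj = object_pos + object_vel * horizon_s
    return float(np.linalg.norm(future_user - future_obj))

# Example: a user walking at 1 m/s toward a vehicle creeping at 0.5 m/s.
print(actual_distance(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                      np.array([4.0, 0.0, 0.0]), np.array([-0.5, 0.0, 0.0])))
```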
Each calculated piece of actual distance information is then compared with a preset requirement, which may be a minimum safe distance value between the user and the first real object. If the calculated actual distance approaches or reaches the minimum safe distance value, it is determined that prompt information needs to be output to the user, so as to prompt the user to pay attention to the first real object and avoid injury. Of course, the prompt information may also be output in several stages. For example, multiple preset requirements forming a progressive relationship may be set, such as a first safe distance value, a second safe distance value, and a third safe distance value whose specific values decrease in sequence. When it is detected that the actual distance between the user and the first real object reaches the first safe distance value, a first prompt message is output; if the user does not respond and the distance decreases until it reaches the second safe distance value, the electronic device outputs a second prompt message; and if the user still does not respond, a third prompt message is output when the distance reaches the third safe distance value. Conversely, if the user changes moving direction and moves away from the first real object after the first or second prompt message is output, the next prompt message is no longer output.
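The progressive prompting rule might look like the following sketch; the three threshold values are illustrative assumptions, and the application only requires that they decrease in sequence.
```python
# Sketch of the progressive prompting rule: three decreasing safe-distance
# thresholds each trigger one prompt level; moving away resets the escalation.
SAFE_DISTANCES = [3.0, 2.0, 1.0]      # first, second, third safe distance values (m)

def prompt_level(distance_m: float) -> int:
    """Return 0 for no prompt, or 1..3 for the first..third prompt message."""
    level = 0
    for threshold in SAFE_DISTANCES:
        if distance_m <= threshold:
            level += 1
    return level

# Example: 2.5 m -> first prompt; 1.8 m -> second prompt; 0.9 m -> third prompt.
for d in (3.5, 2.5, 1.8, 0.9):
    print(d, "->", prompt_level(d))
```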
Further, there may be one first real object or several. When the electronic device determines the first real object, as shown in fig. 4, the following steps of this embodiment may be adopted:
determining the structural information of the first real object according to the depth information of the first real object;
and determining whether the first real object is a preset target object according to the structural information, and if so, outputting prompt information.
For example, the electronic device captures a stereoscopic image of the real environment, which is in fact the real environment within the user's field of view, and identifies each real object in the image, so as to determine in advance candidate real objects that could pose a safety threat to the user. The electronic device can then photograph the candidate real objects with particular attention to acquire the depth-of-field information of each candidate, so that the specific structural information of each candidate, such as its volume and shape, can be determined from its depth-of-field information; that is, the candidate real objects are accurately identified. Then, according to the determined structural information, the electronic device determines whether each candidate real object exists in a pre-stored candidate list, that is, whether it is a preset target object, for example a large object such as an automobile, a tree, a large billboard, a wall, or a free-standing cabinet, collision with which could injure the user. If the first real object exists in the candidate list, the actual distance between the user and the first real object is determined according to the steps above, and when that distance does not meet the preset requirement, the prompt information is output.
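A minimal sketch of the target-object check, assuming the recognition step yields an object class label and a rough volume; the class names and the size threshold are illustrative assumptions rather than values fixed by the application.
```python
# Sketch of the preset-target check: depth information yields rough structure
# (size, shape), which is matched against a pre-stored list of object classes
# whose collision could injure the user.
PRESET_TARGETS = {"car", "tree", "billboard", "wall", "cabinet"}
MIN_DANGEROUS_VOLUME_M3 = 0.1         # ignore very small objects

def is_preset_target(object_class: str, volume_m3: float) -> bool:
    return object_class in PRESET_TARGETS and volume_m3 >= MIN_DANGEROUS_VOLUME_M3

# Example: a recognized wardrobe-sized cabinet is treated as a preset target.
print(is_preset_target("cabinet", 1.5))   # True
print(is_preset_target("cup", 0.001))     # False
```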
Specifically, when the prompt information is output, the following steps in this embodiment may be adopted:
and displaying the real position corresponding to the first real object in the virtual environment with a preset effect.
For example, taking the first real object as a tree, when the electronic device detects that the actual distance from the user to the tree approaches the minimum safe distance value, the electronic device determines the specific position of the tree in the virtual environment according to the previously determined position information of the tree in the virtual environment, and then displays the prompt information at that position with a preset display effect. The prompt may be a text message such as "danger", a highlighted and flashing rendering of the actual outline of the tree, a user-defined form of display, or a combination of sound and picture, which is not specifically limited here.
Preferably, if the user is immersed too deeply in the virtual environment and cannot respond normally in time, or fails to react to the displayed prompt message immediately because of concentrating on some activity in the virtual environment, so that the distance between the user and the first real object keeps shrinking, the method of the embodiment of the present application further includes:
and after the prompt information is output, determining whether the actual distance between the equipment and the first real object within a preset time is smaller than a preset minimum safe distance value, and if so, stopping the equipment from continuing to present the virtual picture to the user.
That is, after the prompt information is output, the device continues to monitor the distance between the user and the first real object for a certain time; if the distance becomes smaller than the minimum safe distance value, the device immediately stops presenting the virtual picture, so that the scene in front of the user immediately returns to the real scene and a collision with the first real object is avoided.
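The fail-safe might be sketched as a simple monitoring loop; get_distance and stop_rendering are assumed callables standing in for the device's own distance measurement and rendering control, and the numeric defaults are illustrative.
```python
# Sketch of the fail-safe: after a prompt is output, the distance is monitored
# for a preset time, and rendering of the virtual picture is stopped if the
# minimum safe distance is crossed.
import time
from typing import Callable

def monitor_after_prompt(get_distance: Callable[[], float],
                         stop_rendering: Callable[[], None],
                         min_safe_distance_m: float = 0.8,
                         watch_window_s: float = 5.0,
                         poll_interval_s: float = 0.1) -> None:
    deadline = time.monotonic() + watch_window_s
    while time.monotonic() < deadline:
        if get_distance() < min_safe_distance_m:
            stop_rendering()          # immediately return the user to the real scene
            return
        time.sleep(poll_interval_s)
```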
As shown in fig. 5, an embodiment of the present application further provides an electronic device, including:
and the processing device is used for determining the state information of the equipment, determining whether to acquire the environment information of the real environment where the equipment is located according to the state information, wherein the environment information at least comprises the space information of the environment where the equipment is currently located and the depth information of the first real object located in the environment, and determining whether to output prompt information at least for prompting a user to pay attention to the position of the first real object in the virtual environment presented by the equipment according to the state information and the environment information.
The electronic equipment in this application may be an AR/VR or similar device. It can present a virtual picture to the user so that the user is immersed in the virtual environment formed by that picture, and at the same time it can automatically detect whether a first real object in the real environment poses a safety threat to the user and prompt the user in a timely manner, so as to avoid potential safety hazards caused by contact or collision with the first real object. In this way, the entertainment needs of the user are met, the safety of the user is protected, and the user's experience with the electronic equipment is improved.
Further, the processing device executes the step one: determining equipment state information;
the step is set to determine the current operation state, whether the electronic device, such as an AR/VR device, is in an on state or an off state; or what kind of application program the current device runs, whether the connection with the wireless network is stable, etc., so as to determine whether the user needs the device to execute step 2 currently according to the obtained information about the current state of the device or the state corresponding to the state within a time threshold, that is, whether the safety reminding function needs to be started, so as to protect the use safety of the user.
Specifically, in this embodiment, step one as performed by the processing device is:
at least determining application information of the electronic equipment in a running state and real-time position information of the equipment.
The information about the application program currently in the running state is obtained in order to determine whether the user currently faces a potential safety hazard and needs a safety reminder; that is, it is determined whether the user is currently immersed in the virtual environment and therefore cannot correctly perceive the physical objects in the real environment. The electronic device may also determine, for example, a current security index of the user based on the real-time location information of the device, where the security index is lower when the user is outdoors and higher when the user is indoors. For example, if the application information and the location information acquired by the electronic device indicate that the user is using the electronic device outdoors to create a virtual environment and is immersed in it, the electronic device immediately performs the subsequent steps. If the application information is the same as above but the obtained location information indicates that the electronic device is indoors, whether to perform the subsequent steps may be decided according to the actual circumstances, for example according to the user's own selection. Of course, obtaining the real-time position information of the equipment is not limited to the above purposes; it also lays a foundation for the safety reminders given to the user in the subsequent steps.
Further, as shown in fig. 2, when the processing device determines that step two needs to be executed, the electronic device determines whether to acquire the real environment information according to the acquired state information, specifically as follows:
determining, according to the state information, whether the application information of the equipment in the current running state is virtual reality and/or augmented reality application information, and if so, acquiring the environment information of the real environment where the equipment is located.
That is, when the application information of the electronic device indicates that the currently running application is a virtual reality and/or augmented reality type of application, it may be determined that the user may currently face a certain potential safety hazard and a safety prompt may need to be given. The electronic device then obtains the environment information of the real environment where it is located, which is also the real environment where the user is located. For example, if the state information acquired by the electronic device indicates that the user is currently watching a virtual football game using a first application program, the environment information of the real environment where the electronic device is located is acquired immediately; or, if the state information indicates that the user is currently participating in a life simulation game using a second application program, the environment information of the real environment where the device is located is likewise acquired immediately. The environment information of the real environment may be obtained in real time or at regular intervals, and the acquisition mode may be determined by detecting whether the user has moved within a time threshold and/or according to the interaction mode between the current application and the user. For example, if the user only uses the electronic device to watch the virtual ball game and does not move within a period of time, the real environment information may be obtained at regular intervals; if it is detected that the user is participating in a virtual game with the electronic device and moves frequently, it is preferable, for the user's safety, to acquire the real environment information in real time.
Further, continuing to refer to fig. 2, after the processing device obtains the current state information of the electronic device and the environment information of the real environment where it is located, it is necessary to determine from the acquired information whether the user currently faces a potential safety hazard and needs a safety reminder, that is, to perform step three of this embodiment: determining whether to output prompt information according to the state information and the environment information. This specifically comprises the following steps:
determining a real coordinate system of a real environment according to the environment information;
determining a corresponding relation between a real coordinate system and a virtual coordinate system of a virtual environment;
and determining whether to output prompt information according to the corresponding relation, the real-time position information of the equipment and the depth information of the first real object.
When the real coordinate system of the real environment is determined, it can be determined from a stereoscopic image of the environment where the equipment is located, acquired in real time, at regular intervals, or only when a change in the environment, position, or orientation of the electronic equipment is detected. The virtual coordinate system of the virtual environment is then acquired from the information about the virtual environment stored in the application program; alternatively, the virtual picture currently presented to the user may be acquired and the virtual coordinate system determined from that picture. After the two coordinate systems are determined, a calculation may be performed on them to obtain the correspondence between them. For example, if there is a tree in the real environment whose position in the real coordinate system is A, the position of the tree in the virtual environment can be determined from the calculated correspondence and the information of position A. After the correspondence is determined, the electronic equipment can detect, from images taken by a depth-of-field camera, whether a first real object that may threaten the user's safety exists in the real environment where the user is located; if so, the depth information of the first real object is obtained, specifically by photographing the first real object with the depth-of-field camera. The electronic equipment then judges, from the calculated correspondence between the coordinate systems, the position information reflecting the user's real-time position, and the depth information of the first real object, whether the threat posed by the first real object to the user's safety is large enough that the user immersed in the virtual environment needs to be reminded, so that the user's entertainment experience is not disturbed unnecessarily.
Specifically, as shown in fig. 3, when the processing device in this embodiment executes the above determining step, it specifically includes:
determining the motion state information of the equipment in the real environment according to the real-time position information of the equipment and the corresponding relation;
determining actual distance information between the equipment and the first real object according to the motion state information and the depth information of the first real object;
and determining whether the actual distance information meets the preset requirement, and if not, outputting prompt information.
For example, when the electronic device detects that the user is immersed in a virtual environment consisting of, say, only a lawn, the electronic device obtains the real-time position information of the user in the virtual environment by obtaining the real-time position information of the electronic device in the virtual environment. The electronic device then determines the motion state information of the user in the real environment according to this real-time position information and the correspondence, so as to determine whether the user is actually moving, and at what speed and in what direction. If the result indicates that the user is moving, the electronic device may determine, from the obtained motion state information and the depth information of the first real object that may pose a safety threat, the actual distance information between the user and the first real object in the real environment. For example, if the user is currently using the electronic device indoors and the first real object is an indoor wardrobe, the electronic device may determine the actual distance between the user and the wardrobe in real time from the user's moving speed, moving direction, and current position in the real environment, acquired in real time, together with the actual position information of the wardrobe. As another example, if the user is currently using the electronic device on an outdoor street and the first real object is a vehicle travelling at low speed on the street, the electronic device may determine the actual distance between the user and the vehicle from the user's motion state information in the real environment, acquired in real time, and the depth information of the vehicle obtained from stereoscopic images of the vehicle captured in real time, which includes the moving speed, direction, and position information of the vehicle.
Each calculated piece of actual distance information is then compared with a preset requirement, which may be a minimum safe distance value between the user and the first real object. If the calculated actual distance approaches or reaches the minimum safe distance value, it is determined that prompt information needs to be output to the user, so as to prompt the user to pay attention to the first real object and avoid injury. Of course, the prompt information may also be output in several stages. For example, multiple preset requirements forming a progressive relationship may be set, such as a first safe distance value, a second safe distance value, and a third safe distance value whose specific values decrease in sequence. When it is detected that the actual distance between the user and the first real object reaches the first safe distance value, a first prompt message is output; if the user does not respond and the distance decreases until it reaches the second safe distance value, the electronic device outputs a second prompt message; and if the user still does not respond, a third prompt message is output when the distance reaches the third safe distance value. Conversely, if the user changes moving direction and moves away from the first real object after the first or second prompt message is output, the next prompt message is no longer output.
Further, there may be one first real object or several. When the processing device determines the first real object, as shown in fig. 4, the following steps of this embodiment may be adopted:
determining the structural information of the first real object according to the depth information of the first real object;
and determining whether the first real object is a preset target object according to the structural information, and if so, outputting prompt information.
For example, the electronic device captures a stereoscopic image of the real environment, which is in fact the real environment within the user's field of view, and identifies each real object in the image, so as to determine in advance candidate real objects that could pose a safety threat to the user. The electronic device can then photograph the candidate real objects with particular attention to acquire the depth-of-field information of each candidate, so that the specific structural information of each candidate, such as its volume and shape, can be determined from its depth-of-field information; that is, the candidate real objects are accurately identified. Then, according to the determined structural information, the electronic device determines whether each candidate real object exists in a pre-stored candidate list, that is, whether it is a preset target object, for example a large object such as an automobile, a tree, a large billboard, a wall, or a free-standing cabinet, collision with which could injure the user. If the first real object exists in the candidate list, the actual distance between the user and the first real object is determined according to the steps above, and when that distance does not meet the preset requirement, the prompt information is output.
Specifically, when the prompt information is output, the following steps in this embodiment may be adopted:
and displaying the real position corresponding to the first real object in the virtual environment with a preset effect.
For example, taking the first real object as a tree, when the electronic device detects that the actual distance from the user to the tree approaches the minimum safe distance value, the electronic device determines the specific position of the tree in the virtual environment according to the previously determined position information of the tree in the virtual environment, and then displays the prompt information at that position with a preset display effect. The prompt may be a text message such as "danger", a highlighted and flashing rendering of the actual outline of the tree, a user-defined form of display, or a combination of sound and picture, which is not specifically limited here.
Preferably, if the user is immersed too deeply in the virtual environment and cannot respond normally in time, or fails to react to the displayed prompt information immediately because of concentrating on some activity in the virtual environment, so that the distance between the user and the first real object keeps shrinking, the processing device of the embodiment of the application further determines, within a preset time after outputting the prompt information, whether the actual distance between the device and the first real object is smaller than a preset minimum safe distance value, and if so, stops the device from continuing to present the virtual picture to the user.
That is, after the prompt information is output, the device continues to monitor the distance between the user and the first real object for a certain time; if the distance becomes smaller than the minimum safe distance value, the device immediately stops presenting the virtual picture, so that the scene in front of the user immediately returns to the real scene and a collision with the first real object is avoided.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, for the electronic device to which the information processing method described above is applied, reference may be made to the corresponding description in the foregoing method embodiments, and details are not repeated herein.
The above embodiments are only exemplary embodiments of the present application, and are not intended to limit the present application, and the protection scope of the present application is defined by the claims. Various modifications and equivalents may be made by those skilled in the art within the spirit and scope of the present application and such modifications and equivalents should also be considered to be within the scope of the present application.

Claims (5)

1. An information processing method comprising:
determining state information of equipment, wherein the state information at least comprises application information of the equipment in a current running state, real-time position information of the equipment in a real environment, and an interaction mode between a user and an application corresponding to the application information;
determining whether the application information of the equipment in the current running state is virtual reality and/or augmented reality application information according to the state information, if so, determining an acquisition mode according to the movement information of the equipment in a time period and the interaction mode, and acquiring environment information of a real environment where the equipment is located based on the acquisition mode, wherein the environment information at least comprises space information of the environment where the equipment is currently located and depth information of a first real object located in the environment; determining a real coordinate system of the real environment according to the environment information;
determining a corresponding relation between the real coordinate system and a virtual coordinate system of the virtual environment;
determining the structural information of the first real object according to the depth information of the first real object;
and determining whether the first real object is a preset target object according to the structure information, and determining whether to output prompt information at least used for prompting a user to pay attention to the position of the first real object in a virtual environment presented by the equipment according to the corresponding relation, the real-time position information of the equipment and the depth information of the first real object.
2. The method according to claim 1, wherein the determining whether to output the prompt information according to the correspondence, the real-time location information of the device, and the depth information of the first real object specifically includes:
determining motion state information of the equipment in a real environment according to the real-time position information of the equipment and the corresponding relation;
determining actual distance information between the equipment and the first real object according to the motion state information and the depth information of the first real object;
and determining whether the actual distance information meets a preset requirement, and if not, outputting the prompt information.
3. The method according to claim 1, wherein the outputting the prompt information specifically includes:
and displaying the real position corresponding to the first real object in the virtual environment with a preset effect.
4. The method of claim 1, further comprising:
and after the prompt information is output, determining whether the actual distance between the equipment and the first real object in the preset time is smaller than a preset minimum safe distance value, and if so, stopping the equipment from continuously presenting a virtual picture to the user.
5. An electronic device, comprising:
processing means for determining status information of a device, the status information including at least application information of the device currently in an operating state and real-time location information of the device in a real environment, and an interaction mode between a user and an application corresponding to the application information, determining whether the application information of the equipment in the running state is the application information of virtual reality and/or augmented reality according to the state information, if so, determining an acquisition mode according to the movement information of the device in a time period and the interaction mode, acquiring environment information of a real environment where the equipment is located based on the acquisition mode, wherein the environment information at least comprises space information of the current environment where the equipment is located and depth information of a first real object located in the environment, and determining a real coordinate system of the real environment according to the environment information; determining a corresponding relation between the real coordinate system and a virtual coordinate system of the virtual environment; determining the structural information of the first real object according to the depth information of the first real object; and determining whether the first real object is a preset target object according to the structure information, and determining whether to output prompt information at least used for prompting a user to pay attention to the position of the first real object in a virtual environment presented by the equipment according to the corresponding relation, the real-time position information of the equipment and the depth information of the first real object.
CN201810672842.XA 2018-06-26 2018-06-26 Information processing method and electronic equipment Active CN108922115B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810672842.XA CN108922115B (en) 2018-06-26 2018-06-26 Information processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810672842.XA CN108922115B (en) 2018-06-26 2018-06-26 Information processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN108922115A CN108922115A (en) 2018-11-30
CN108922115B true CN108922115B (en) 2020-12-18

Family

ID=64421454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810672842.XA Active CN108922115B (en) 2018-06-26 2018-06-26 Information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN108922115B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112241199B (en) 2019-07-19 2023-03-24 华为技术有限公司 Interaction method and device in virtual reality scene

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872559A (en) * 2010-06-08 2010-10-27 广东工业大学 Vehicle driving simulator-oriented virtual driving active safety early warning system and early warning method
CN104216520A (en) * 2014-09-09 2014-12-17 联想(北京)有限公司 Information processing method and electronic equipment
CN104916068A (en) * 2014-03-14 2015-09-16 联想(北京)有限公司 Information processing method and electronic device
CN106530620A (en) * 2016-12-26 2017-03-22 宇龙计算机通信科技(深圳)有限公司 Security monitoring method, device and system and virtual reality equipment
CN106652345A (en) * 2016-12-26 2017-05-10 宇龙计算机通信科技(深圳)有限公司 Safety monitoring method, safety monitoring device and virtual reality equipment
CN106873785A (en) * 2017-03-31 2017-06-20 网易(杭州)网络有限公司 For the safety custody method and device of virtual reality device
CN108109207A (en) * 2016-11-24 2018-06-01 中安消物联传感(深圳)有限公司 A kind of visualization solid modelling method and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106484085B (en) * 2015-08-31 2019-07-23 北京三星通信技术研究有限公司 The method and its head-mounted display of real-world object are shown in head-mounted display
US10474411B2 (en) * 2015-10-29 2019-11-12 Samsung Electronics Co., Ltd. System and method for alerting VR headset user to real-world objects

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872559A (en) * 2010-06-08 2010-10-27 广东工业大学 Vehicle driving simulator-oriented virtual driving active safety early warning system and early warning method
CN104916068A (en) * 2014-03-14 2015-09-16 联想(北京)有限公司 Information processing method and electronic device
CN104216520A (en) * 2014-09-09 2014-12-17 联想(北京)有限公司 Information processing method and electronic equipment
CN108109207A (en) * 2016-11-24 2018-06-01 中安消物联传感(深圳)有限公司 A kind of visualization solid modelling method and system
CN106530620A (en) * 2016-12-26 2017-03-22 宇龙计算机通信科技(深圳)有限公司 Security monitoring method, device and system and virtual reality equipment
CN106652345A (en) * 2016-12-26 2017-05-10 宇龙计算机通信科技(深圳)有限公司 Safety monitoring method, safety monitoring device and virtual reality equipment
CN106873785A (en) * 2017-03-31 2017-06-20 网易(杭州)网络有限公司 For the safety custody method and device of virtual reality device

Also Published As

Publication number Publication date
CN108922115A (en) 2018-11-30

Similar Documents

Publication Publication Date Title
KR101741864B1 (en) Recognizing user intent in motion capture system
CN106139587B (en) Method and system for avoiding real environment obstacles based on VR game
CN106924970B (en) Virtual reality system, information display method and device based on virtual reality
US8333661B2 (en) Gaming system with saftey features
US9728011B2 (en) System and method for implementing augmented reality via three-dimensional painting
US10109065B2 (en) Using occlusions to detect and track three-dimensional objects
CN104822042B (en) A kind of pedestrains safety detection method and device based on camera
US20160098862A1 (en) Driving a projector to generate a shared spatial augmented reality experience
TW202113428A (en) Systems and methods for generating dynamic obstacle collision warnings for head-mounted displays
CN113144602B (en) Position indication method, position indication device, electronic equipment and storage medium
US11425350B2 (en) Image display system
US10008041B2 (en) Image generating device, image generating method, and image generating program
US11794112B2 (en) Information synchronization method and apparatus, and storage medium
CN106652345A (en) Safety monitoring method, safety monitoring device and virtual reality equipment
CN106814846B (en) Eye movement analysis method based on intersection point of sight line and collision body in VR
CN110270078A (en) Football match special efficacy display systems, method and computer installation
US11073902B1 (en) Using skeletal position to predict virtual boundary activation
CN108922115B (en) Information processing method and electronic equipment
Hamill et al. Perceptual evaluation of impostor representations for virtual humans and buildings
US20240149162A1 (en) In-game information prompting method and apparatus, electronic device and storage medium
US11831853B2 (en) Information processing apparatus, information processing method, and storage medium
Bang et al. Interactive experience room using infrared sensors and user's poses
EP4170594A1 (en) System and method of simultaneous localisation and mapping
CN111899350A (en) Augmented reality AR image presentation method and device, electronic device and storage medium
CN114011069A (en) Control method of virtual object, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant