WO2018149318A1 - Input method, apparatus, device, system and computer storage medium - Google Patents

Input method, apparatus, device, system and computer storage medium

Info

Publication number
WO2018149318A1
WO2018149318A1 (PCT Application No. PCT/CN2018/075236)
Authority
WO
WIPO (PCT)
Prior art keywords
input object
virtual surface
virtual
input
trajectory
Prior art date
Application number
PCT/CN2018/075236
Other languages
English (en)
French (fr)
Inventor
姚迪狄
黄丛宇
Original Assignee
阿里巴巴集团控股有限公司 (Alibaba Group Holding Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司
Publication of WO2018149318A1
Priority to US16/542,162 (published as US20190369735A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016Input arrangements with force or tactile feedback as computer generated output to the user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/018Input/output arrangements for oriental characters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0237Character input methods using prediction or retrieval techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/22Character recognition characterised by the type of writing
    • G06V30/228Character recognition characterised by the type of writing of three-dimensional handwriting, e.g. writing in the air
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language

Definitions

  • the present invention relates to the field of computer application technologies, and in particular, to an input method, apparatus, device, system, and computer storage medium.
  • Virtual reality technology is a computer simulation technology with which virtual worlds can be created and experienced. It uses a computer to generate realistic, real-time, dynamic three-dimensional images and to fuse the virtual world with the real world.
  • In essence, virtual reality technology is a new revolution in human-computer interaction, and input is the “last mile” of that interaction. The input method used in virtual reality is therefore particularly important.
  • Virtual reality technology strives to merge the virtual world with the real world, so that the user feels as real in the virtual world as in the real world. For input in virtual reality, the best approach would be to let the user input in the virtual world just as in the real world, but so far there has been no good way to achieve this.
  • In view of this, the present invention provides an input method, apparatus, device, system, and computer storage medium, offering an input approach suited to virtual reality technology.
  • the invention provides an input method, the method comprising:
  • the content of the input is determined based on the recorded trajectory.
  • the method further includes:
  • the virtual surface is presented in a preset style.
  • the acquiring location information of the input object in the three-dimensional space includes:
  • detecting, according to the location information of the input object and the location information of the virtual surface, whether the input object contacts the virtual surface includes:
  • the method further includes:
  • the tactile feedback information is displayed.
  • the displaying tactile feedback information includes at least one of the following:
  • the contact point of the input object on the virtual surface is presented.
  • determining the trajectory generated while the input object contacts the virtual surface includes:
  • the trajectory formed by the projection points while the input object is in contact with the virtual surface is determined and recorded.
  • determining the input content according to the recorded trajectory includes:
  • committing to the screen a line consistent with the recorded trajectory; or,
  • displaying candidate characters matching the recorded trajectory, and committing to the screen the candidate character selected by the user.
  • the method further includes:
  • the recorded track is cleared.
  • the method further includes:
  • the trajectory generated while the input object contacts the virtual surface is displayed on the virtual surface, and after the on-screen commit is completed, the trajectory displayed on the virtual surface is cleared.
  • the invention also provides an input device, the device comprising:
  • a virtual surface processing unit for determining and recording location information of the virtual surface in the three-dimensional space
  • a location acquiring unit configured to acquire location information of an input object in a three-dimensional space
  • a contact detecting unit configured to detect, according to position information of the input object and position information of the virtual surface, whether the input object contacts a virtual surface
  • a trajectory processing unit configured to determine and record the trajectory generated while the input object contacts the virtual surface
  • the input determining unit is configured to determine the input content according to the recorded trajectory.
  • the device further comprises:
  • a presentation unit for displaying the virtual surface according to a preset style.
  • the location acquiring unit is specifically configured to acquire location information of the input object detected by the spatial locator.
  • the contact detecting unit is specifically configured to judge whether the distance between the position of the input object and the position of the virtual surface is within a preset range, and if so, to determine that the input object contacts the virtual surface.
  • the device further comprises:
  • a presentation unit configured to display tactile feedback information if the input object is detected to contact the virtual surface.
  • the presentation unit adopts at least one of the following manners when displaying the tactile feedback information:
  • the contact point of the input object on the virtual surface is presented.
  • the trajectory processing unit is specifically configured to: acquire the projection of the input object's position information onto the virtual surface while the input object is in contact with the virtual surface; and, when the input object separates from the virtual surface, determine and record the trajectory formed by the projection points during the contact.
  • the input determining unit is specifically configured to: commit to the screen, based on the recorded trajectory, a line consistent with the recorded trajectory; or,
  • display candidate characters matching the recorded trajectory, and commit to the screen the candidate character selected by the user.
  • the trajectory processing unit is further configured to clear the recorded trajectory after the on-screen commit is completed, or to clear the recorded trajectory after a gesture canceling the input is captured.
  • the device further comprises:
  • a presentation unit configured to display on the virtual surface the trajectory generated while the input object contacts the virtual surface, and to clear the trajectory displayed on the virtual surface after the on-screen commit is completed.
  • the invention also provides an apparatus, including:
  • a memory including one or more programs;
  • one or more processors coupled to the memory, which execute the one or more programs to perform the operations performed in the method described above.
  • the present invention also provides a computer storage medium encoded with a computer program that, when executed by one or more computers, causes the one or more computers to perform the operations performed in the method described above.
  • the present invention also provides a virtual reality system including: an input object, a spatial locator, and a virtual reality device;
  • the spatial locator is configured to detect the location of the input object in the three-dimensional space and provide it to the virtual reality device;
  • the virtual reality device is configured to determine and record the location information of the virtual surface in the three-dimensional space; detect, according to the location information of the input object and the location information of the virtual surface, whether the input object contacts the virtual surface; determine and record the trajectory generated while the input object contacts the virtual surface; and determine the input content according to the recorded trajectory.
  • the virtual reality device is further configured to display the virtual surface according to a preset style.
  • when detecting, according to the location information of the input object and the location information of the virtual surface, whether the input object contacts the virtual surface, the virtual reality device specifically performs:
  • the virtual reality device is further configured to display tactile feedback information if the input object is detected to contact the virtual surface.
  • the manner in which the virtual reality device displays the tactile feedback information includes at least one of the following:
  • the contact point of the input object on the virtual surface is presented.
  • the manner in which the virtual reality device displays the tactile feedback information includes: sending a trigger message to the input object;
  • the input object is further configured to provide vibration feedback after receiving the trigger message.
  • when determining the trajectory generated while the input object contacts the virtual surface, the virtual reality device specifically performs:
  • the trajectory formed by the projection points while the input object is in contact with the virtual surface is determined and recorded.
  • when determining the input content according to the recorded trajectory, the virtual reality device specifically performs:
  • committing to the screen a line consistent with the recorded trajectory; or,
  • displaying candidate characters matching the recorded trajectory, and committing to the screen the candidate character selected by the user.
  • the virtual reality device is further configured to clear the recorded trajectory after the on-screen commit is completed, or to clear the recorded trajectory after a gesture canceling the input is captured.
  • the virtual reality device is further configured to display on the virtual surface the trajectory generated while the input object contacts the virtual surface, and to clear the trajectory displayed on the virtual surface after the on-screen commit is completed.
  • It can be seen from the above technical solutions that the present invention determines and records the position information of the virtual surface in three-dimensional space, detects whether the input object contacts the virtual surface according to the position information of the input object and that of the virtual surface, and determines the input content according to the recorded trajectory generated while the input object contacts the virtual surface.
  • Information input within three-dimensional space is thus achieved, which suits virtual reality technology and makes the user's input experience in virtual reality feel like input in real space.
  • FIG. 1 is a schematic structural diagram of a system according to an embodiment of the present invention;
  • FIG. 2 is a schematic diagram of a scenario according to an embodiment of the present invention;
  • FIG. 3 is a flowchart of a method according to an embodiment of the present invention;
  • FIG. 4a is a diagram showing an example of judging whether an input object is in contact with the virtual surface according to an embodiment of the present invention;
  • FIG. 4b is a schematic diagram of contact feedback according to an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of the input process for one character according to an embodiment of the present invention;
  • FIG. 6a is a diagram showing an example of character input according to an embodiment of the present invention;
  • FIG. 6b is a diagram showing an example of character input according to an embodiment of the present invention;
  • FIG. 7 is a structural diagram of an apparatus according to an embodiment of the present invention;
  • FIG. 8 is a structural diagram of a device according to an embodiment of the present invention.
  • Depending on the context, the word “if” as used herein may be interpreted as “when”, “upon”, “in response to determining”, or “in response to detecting”.
  • Similarly, depending on the context, the phrase “if it is determined” or “if (a stated condition or event) is detected” may be interpreted as “when it is determined”, “in response to determining”, “when (the stated condition or event) is detected”, or “in response to detecting (the stated condition or event)”.
  • the system mainly includes: a virtual reality device, a space locator, and an input object.
  • the input object may be any device in the form of a brush, a glove, or the like, which can be held by the user for information input, or even a user's finger.
  • the spatial locator is a sensor that detects the position of a moving object in three-dimensional space.
  • Widely used spatial positioning methods include low-frequency magnetic-field spatial positioning, ultrasonic spatial positioning, and laser spatial positioning.
  • Taking a low-frequency magnetic-field sensor as an example, the magnetic field emitter in the sensor generates a low-frequency magnetic field in the three-dimensional space; the position and orientation of the receiver relative to the emitter can be calculated, and the data is transmitted to the host computer (in the present invention, the computer or mobile device to which the virtual reality device is connected; in the embodiments of the present invention, the virtual reality device and the host computer connected to it are collectively referred to as the virtual reality device).
  • the receiver can be placed on the input object. That is, the spatial locator detects the position of the input object in the three-dimensional space and provides it to the virtual reality device.
  • Taking laser spatial positioning as an example, several laser-emitting devices are installed in the three-dimensional space and scan it with laser beams in both the horizontal and vertical directions, while multiple laser-sensing receivers are placed on the object to be positioned; the three-dimensional coordinates of the object are obtained by calculating the difference between the angles at which the two beams reach it. As the object moves, its three-dimensional coordinates change accordingly, yielding updated position information. This principle can also be used to locate the input object, and it allows any input object to be positioned without additionally installing devices such as receivers on it.
  • a virtual reality device is a general term for a device capable of providing a virtual reality effect to a user or a receiving device.
  • virtual reality devices mainly include:
  • three-dimensional environment acquisition devices, which collect three-dimensional data of objects in the physical world (i.e., the real world) and recreate it in the virtual reality environment; such devices include, for example, 3D printing devices;
  • display devices, which display virtual reality images; such devices include, for example, virtual reality glasses, virtual reality helmets, augmented reality devices, and mixed reality devices;
  • sound devices, which simulate the acoustic environment of the physical world and provide sound output in the virtual environment to a user or receiving device; such devices include, for example, three-dimensional surround sound devices;
  • interaction devices, which collect the interaction and/or movement behavior of the user or receiving device in the virtual environment and, taking it as data input, generate feedback and changes to the virtual reality environment's parameters, images, acoustics, timing, etc.; such devices include, for example, position trackers, data gloves, 3D mice (or pointers), motion capture devices, eye trackers, force feedback devices, and other interactive devices.
  • the executor of the method embodiment of the present invention is the virtual reality device, and in the device embodiment of the present invention, the device is disposed on the virtual reality device.
  • An embodiment of the present invention may be based on the scenario shown in FIG. 2: the user wears a virtual reality device such as a head-mounted display, and when the user triggers the input function, a virtual surface can be “generated” in the three-dimensional space; the user can then hold the input object and write on the virtual surface to complete information input.
  • the virtual surface is actually just a reference location for user input; it does not really exist, and it may be a plane or a curved surface.
  • the virtual surface can be displayed in a certain style, for example as a blackboard or as a sheet of white paper. In this way, the user's input on the virtual surface feels like writing on a blackboard or on white paper in the real world.
  • the method capable of implementing the above scenario will be described in detail below with reference to the embodiments.
  • FIG. 3 is a flowchart of a method according to an embodiment of the present invention. As shown in FIG. 3, the method may include the following steps:
  • In 301, the position information of the virtual surface in the three-dimensional space is determined and recorded.
  • This step can be executed when the user triggers the input function, for example when the user logs in and needs to enter a user name and password, or when chat content is entered through an instant messaging application; at that point this step is executed to determine and record the position information of the virtual surface in the three-dimensional space.
  • the virtual surface is actually a reference position for user input; it may be a plane or a curved surface, and it is virtual rather than real.
  • the location of the virtual surface may be set using the location of the virtual reality device as a reference, or using the computer or mobile device to which the virtual reality device is connected as a reference.
  • In addition, since the trajectory of the input object held by the user on the virtual surface needs to be detected, and the position information of the input object is detected by the spatial locator, the position of the virtual surface needs to be within the detection range of the spatial locator.
  • two additional means can be used in the present invention to let the user perceive the existence of the virtual surface, and thus know where to input.
  • One way is to display tactile feedback information when the user touches the virtual surface with the input object, which will be detailed later.
  • Another way is to display the virtual surface in a preset style, for example as a blackboard or as a sheet of white paper; in this way, during input the user on the one hand has a sense of distance and knows where the virtual surface is located, and on the other hand can write as if on a medium such as a blackboard or white paper, which makes for a better user experience.
  • the spatial locator can locate the position information of the input object as it moves. This step therefore actually obtains, from the spatial locator, the position information of the input object in the three-dimensional space detected in real time; the position information can be a three-dimensional coordinate value.
  • Since the position information of the virtual surface has been recorded and the position information of the input object has been acquired, comparing the two makes it possible to judge from the distance between them whether the input object contacts the virtual surface. Specifically, it can be judged whether the distance between the position of the input object and the position of the virtual surface is within a preset range; if so, it can be determined that the input object contacts the virtual surface. For example, the input object may be considered to be in contact with the virtual surface when the distance between them is within [-1 cm, 1 cm].
  • the virtual surface can be regarded as being composed of many points on that surface; the spatial locator detects the position of the input object in real time and transmits the position information to the device performing the method.
  • the solid points in FIG. 4a are the points constituting the virtual surface (only some of them are shown in the figure), and the hollow point is the position of the input object.
  • the device determines the position A of the input object and the position B of the point on the virtual surface closest to A, and then judges whether the distance between A and B is within a preset range, for example [-1 cm, 1 cm]; if so, the input object is considered to be in contact with the virtual surface.
  • After contacting the virtual surface, the user can produce a handwriting stroke by keeping contact with the virtual surface while moving. To give the user a better sense of distance and make stroke input easier, tactile feedback information can be displayed when the input object contacts the virtual surface.
  • the presentation forms of the tactile feedback information may include, but are not limited to, the following:
  • 1) Changing the color of the virtual surface. For example, when the input object is not touching the virtual surface, the virtual surface is white; when the input object touches it, the virtual surface turns gray to indicate the contact.
  • 2) Playing a tone indicating that the input object has touched the virtual surface. For example, once the input object touches the virtual surface, preset music is played; once the input object leaves the virtual surface, the music is paused.
  • 3) Presenting the contact point of the input object on the virtual surface in a preset style. For example, once the input object touches the virtual surface, a water-ripple contact point is formed; the closer the object is to the virtual surface, the larger the ripple, as if simulating the pressure exerted on the medium during real writing, as shown in FIG. 4b. The pattern of the contact point is not limited in the present invention; it may also be a simple black dot that is displayed at the contact position when the input object touches the virtual surface and disappears when the object leaves it.
  • the tactile feedback in 1) and 3) above is visual feedback, and that in 2) is auditory feedback.
  • besides these, the mechanical (force) feedback shown in 4) below can also be adopted.
  • 4) Providing vibration feedback through the input object. In this case there are certain requirements on the input object: ordinary objects such as chalk or a finger no longer apply, and the input object needs to be able to receive messages and to vibrate.
  • the virtual reality device checks at short time intervals whether the input object is touching the virtual surface, and when it determines that the input object is in contact with the virtual surface, it sends a trigger message to the input object.
  • the input object provides vibration feedback after receiving the trigger message. When the input object leaves the virtual surface, it no longer receives trigger messages and provides no vibration feedback.
  • In this way, while writing on the virtual surface the user feels vibration feedback whenever the virtual surface is touched, so the user can clearly perceive the contact state between the input object and the virtual surface.
  • the trigger message sent by the virtual reality device to the input object may be sent in a wireless manner, such as wifi, Bluetooth, NFC (Near Field Communication), or the like, or may be sent in a wired manner.
  • In 304, the trajectory generated while the input object contacts the virtual surface is determined and recorded.
  • Since the motion of the input object in three-dimensional space is three-dimensional, it needs to be converted into two-dimensional motion on the virtual surface: the projection of the input object's position information onto the virtual surface may be acquired while the input object is in contact with the virtual surface, and when the input object separates from the virtual surface, the trajectory formed by the projection points during the contact is determined and recorded.
  • the trajectory recorded this time can be regarded as one handwriting stroke.
  • In 305, the content of the input is determined based on the recorded trajectory.
  • If the user inputs in a “drawing” manner, i.e. what is drawn is what is obtained, a line consistent with the recorded trajectory can be committed to the screen based on the recorded trajectory. After the on-screen commit is completed, the recorded trajectory is cleared, the current stroke input is finished, and detection and recording of the stroke produced the next time the input object contacts the virtual surface starts anew.
  • If the user wants to input characters and the input manner is likewise what-is-drawn-is-what-is-obtained, for example the user draws the trajectory of the letter “a” on the virtual surface, the letter “a” can be obtained by matching and committed directly to the screen. The same applies to digits that can be completed in one stroke.
  • If the user wants to input characters using a coded or stroke-based input manner, for example entering pinyin on the virtual surface in the hope of obtaining the corresponding Chinese characters, or entering the individual strokes of a Chinese character in the hope of obtaining the character they form, then candidate characters matching the recorded trajectory are displayed based on the recorded trajectory. If the user does not select any candidate character, the current stroke is finished, and detection and recording of the stroke produced the next time the input object contacts the virtual surface begins anew. When the second stroke is completed, the recorded trajectory is the trajectory formed jointly by the first and second strokes; this recorded trajectory is matched again and the matching candidate characters are displayed, and so on, until the user selects a candidate character to commit to the screen.
  • a character input process can be as shown in Figure 5.
  • the trajectory the user has already input can be displayed on the virtual surface until the on-screen commit is completed, after which the trajectory displayed on the virtual surface is cleared.
  • Alternatively, the trajectory displayed on the virtual surface may not be deleted automatically but instead be deleted manually by the user, i.e. cleared by a specific gesture. For example, a “clear trajectory” button may be provided on the virtual surface; once the user's click operation at the button position is detected, the trajectory displayed on the virtual surface is cleared.
  • If the user wants to undo the trajectory already input, a gesture canceling the input can be performed. Once the user's undo gesture is captured, the recorded trajectory is cleared, and the user can re-enter the current character. For example, an “undo” button can be placed on the virtual surface, as shown in FIG. 6b; if a click operation of the input object on it is captured, the recorded trajectory is cleared, and the corresponding trajectory displayed on the virtual surface can be cleared at the same time. Other gestures can also be used, for example quickly moving the input object to the left without touching the virtual surface, or quickly moving the input object upward.
  • the execution body of the foregoing method embodiments may be an input apparatus, which may be located in an application of the local terminal (the virtual reality device), or may be a functional unit such as a plug-in or software development kit (SDK) within an application of the local terminal.
  • FIG. 7 is a structural diagram of an apparatus according to an embodiment of the present invention. As shown in FIG. 7, the apparatus may include a virtual surface processing unit 01, a location acquiring unit 02, a contact detecting unit 03, a trajectory processing unit 04, and an input determining unit 05; a presentation unit 06 may also be included. The main functions of each component are as follows:
  • the virtual surface processing unit 01 is responsible for determining and recording position information of the virtual surface in the three-dimensional space.
  • In the embodiment of the present invention, a virtual plane may be determined, within the region of three-dimensional space reachable by the user of the virtual reality device, as the position of the virtual surface, and the user can input information by writing on that virtual surface.
  • the virtual surface actually serves as a reference position for user input; it is virtual and does not really exist.
  • In addition, since the trajectory of the input object held by the user on the virtual surface needs to be detected, and the position information of the input object is detected by the spatial locator, the position of the virtual surface needs to be within the detection range of the spatial locator.
  • the presentation unit 06 can display the virtual surface in a preset style, for example as a blackboard or as a sheet of white paper, so that during input the user on the one hand has a sense of distance and knows where the virtual surface is located, and on the other hand can write as if on a medium such as a blackboard or white paper, which makes for a better user experience.
  • the location acquiring unit 02 is responsible for acquiring the position information of the input object in three-dimensional space; specifically, it obtains the position information of the input object detected by the spatial locator, and the position information may be a three-dimensional coordinate value.
  • the contact detecting unit 03 is responsible for detecting, according to the position information of the input object and the position information of the virtual surface, whether the input object contacts the virtual surface. Since the position information of the virtual surface has been recorded and the position information of the input object has been acquired, comparing the two makes it possible to judge from the distance between them whether the input object contacts the virtual surface. Specifically, it can be judged whether the distance between the position of the input object and the position of the virtual surface is within a preset range; if so, it can be determined that the input object contacts the virtual surface, for example when the distance between them is within [-1 cm, 1 cm].
  • the trajectory processing unit 04 is responsible for determining and recording the trajectory generated during the process of the input object contacting the virtual surface.
  • the presentation unit 06 can display tactile feedback information when the input object contacts the virtual surface.
  • the presentation forms of the tactile feedback information may include, but are not limited to, the following:
  • 1) Changing the color of the virtual surface. For example, when the input object is not touching the virtual surface, the virtual surface is white; when the input object touches it, the virtual surface turns gray to indicate the contact.
  • 2) Playing a tone indicating that the input object has touched the virtual surface. For example, once the input object touches the virtual surface, preset music is played; once the input object leaves the virtual surface, the music is paused.
  • 3) Presenting the contact point of the input object on the virtual surface in a preset style. For example, once the input object touches the virtual surface, a water-ripple contact point is formed; the closer the object is to the virtual surface, the larger the ripple, as if simulating the pressure exerted on the medium during real writing, as shown in FIG. 4b. The pattern of the contact point is not limited in the present invention; it may also be a simple black dot that is displayed at the contact position when the input object touches the virtual surface and disappears when the object leaves it.
  • 4) Providing vibration feedback through the input object. In this case there are certain requirements on the input object: ordinary objects such as chalk or a finger no longer apply, and the input object needs to be able to receive messages and to vibrate.
  • the virtual reality device checks at short time intervals whether the input object is touching the virtual surface, and when it determines that the input object is in contact with the virtual surface, it sends a trigger message to the input object.
  • the input object provides vibration feedback after receiving the trigger message. When the input object leaves the virtual surface, it no longer receives trigger messages and provides no vibration feedback.
  • In this way, while writing on the virtual surface the user feels vibration feedback whenever the virtual surface is touched, so the user can clearly perceive the contact state between the input object and the virtual surface.
  • the trigger message sent by the virtual reality device to the input object may be sent wirelessly, for example via Wi-Fi, Bluetooth, or NFC (Near Field Communication), or in a wired manner.
  • Since the motion of the input object in three-dimensional space is three-dimensional, it needs to be converted into two-dimensional motion on the virtual surface. The trajectory processing unit 04 can acquire the projection of the input object's position information onto the virtual surface while the input object is in contact with the virtual surface; when the input object separates from the virtual surface, it determines and records the trajectory formed by the projection points during the contact.
  • the input determining unit 05 is responsible for determining the input content based on the recorded trajectory. Specifically, the input determining unit 05 may, based on the recorded trajectory, commit to the screen a line consistent with the recorded trajectory; or commit to the screen a character matching the recorded trajectory; or display candidate characters matching the recorded trajectory and commit to the screen the candidate character selected by the user, the candidate characters being presented by the presentation unit 06.
  • Further, after the on-screen commit is completed, the trajectory processing unit 04 clears the recorded trajectory and starts the input processing of the next character; or, after a gesture canceling the input is captured, it clears the recorded trajectory and the input processing of the current character is performed again.
  • In addition, the presentation unit 06 can display on the virtual surface the trajectory generated while the input object contacts the virtual surface, and clear the trajectory displayed on the virtual surface after the on-screen commit is completed.
  • the above method and apparatus provided by the embodiments of the present invention can be embodied in a computer program that is installed and runs in a device.
  • the device may include one or more processors, and further include a memory and one or more programs, as shown in FIG. 8.
  • the one or more programs are stored in a memory and executed by the one or more processors to implement the method flow and/or device operations illustrated in the above-described embodiments of the present invention.
  • the method flow executed by one or more of the above processors may include:
  • the content of the input is determined based on the recorded trajectory.
  • the above method, apparatus, and device provided by the present invention can have the following advantages:
  • 1) Information input within three-dimensional space is achieved, which suits virtual reality technology, so that the user's input experience in virtual reality is like input in real space.
  • 2) Unlike traditional input methods, which require a keyboard, a handwriting tablet, or the like, there is no need to carry those bulky input devices around, nor to additionally watch the input device while inputting: the user may input with any input object, or even without a dedicated input device, completing input with an object such as a finger, a pen at hand, or a small stick. And because the virtual surface is in three-dimensional space, the user only needs to write on the virtual surface, without additionally observing an input device.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • the above-described integrated unit implemented in the form of a software functional unit can be stored in a computer readable storage medium.
  • the above software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods described in the various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)

Abstract

An input method, apparatus, device, system, and computer storage medium, the input method comprising: determining and recording position information of a virtual surface in three-dimensional space (301); acquiring position information of an input object in the three-dimensional space (302); detecting, according to the position information of the input object and the position information of the virtual surface, whether the input object contacts the virtual surface (303); determining and recording a trajectory generated while the input object contacts the virtual surface (304); and determining input content according to the recorded trajectory (305). Information input within three-dimensional space is achieved, which suits virtual reality technology.

Description

Input method, apparatus, device, system and computer storage medium
This application claims priority to Chinese Patent Application No. 201710085422.7, filed on February 17, 2017 and entitled “Input method, apparatus, device, system and computer storage medium”, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of computer application technologies, and in particular to an input method, apparatus, device, system, and computer storage medium.
Background
Virtual reality technology is a computer simulation technology with which virtual worlds can be created and experienced; it uses a computer to generate realistic, real-time, dynamic three-dimensional images and to fuse the virtual world with the real world. In essence, virtual reality technology is a new revolution in human-computer interaction, and input is the “last mile” of that interaction, so the input method used in virtual reality is particularly important. Virtual reality technology strives to merge the virtual world with the real world, so that the user feels as real in the virtual world as in the real world. For input in virtual reality, the best approach would be to let the user input in the virtual world just as in the real world, but so far there has been no good way to achieve this.
Summary of the Invention
In view of this, the present invention provides an input method, apparatus, device, system, and computer storage medium, offering an input approach suited to virtual reality technology.
The specific technical solutions are as follows:
The present invention provides an input method, the method comprising:
determining and recording position information of a virtual surface in three-dimensional space;
acquiring position information of an input object in the three-dimensional space;
detecting, according to the position information of the input object and the position information of the virtual surface, whether the input object contacts the virtual surface;
determining and recording a trajectory generated while the input object contacts the virtual surface; and
determining input content according to the recorded trajectory.
According to a preferred embodiment of the present invention, the method further comprises:
presenting the virtual surface in a preset style.
According to a preferred embodiment of the present invention, the acquiring position information of an input object in the three-dimensional space comprises:
acquiring the position information of the input object detected by a spatial locator.
According to a preferred embodiment of the present invention, the detecting, according to the position information of the input object and the position information of the virtual surface, whether the input object contacts the virtual surface comprises:
judging whether the distance between the position of the input object and the position of the virtual surface is within a preset range, and if so, determining that the input object contacts the virtual surface.
According to a preferred embodiment of the present invention, the method further comprises:
if it is detected that the input object contacts the virtual surface, presenting tactile feedback information.
According to a preferred embodiment of the present invention, the presenting tactile feedback information comprises at least one of the following:
changing the color of the virtual surface;
playing a tone indicating that the input object has contacted the virtual surface;
presenting, in a preset style, the contact point of the input object on the virtual surface.
According to a preferred embodiment of the present invention, the determining a trajectory generated while the input object contacts the virtual surface comprises:
while the input object is in contact with the virtual surface, acquiring the projection of the position information of the input object onto the virtual surface; and
when the input object separates from the virtual surface, determining and recording the trajectory formed by the projection points during the contact.
According to a preferred embodiment of the present invention, the determining input content according to the recorded trajectory comprises:
committing to the screen, based on the recorded trajectory, a line consistent with the recorded trajectory; or,
committing to the screen, based on the recorded trajectory, a character matching the recorded trajectory; or,
displaying, based on the recorded trajectory, candidate characters matching the recorded trajectory, and committing to the screen the candidate character selected by the user.
According to a preferred embodiment of the present invention, the method further comprises:
after the on-screen commit is completed, clearing the recorded trajectory; or,
after a gesture canceling the input is captured, clearing the recorded trajectory.
According to a preferred embodiment of the present invention, the method further comprises:
displaying on the virtual surface the trajectory generated while the input object contacts the virtual surface, and clearing the trajectory displayed on the virtual surface after the on-screen commit is completed.
The present invention also provides an input apparatus, the apparatus comprising:
a virtual surface processing unit, configured to determine and record position information of a virtual surface in three-dimensional space;
a position acquiring unit, configured to acquire position information of an input object in the three-dimensional space;
a contact detecting unit, configured to detect, according to the position information of the input object and the position information of the virtual surface, whether the input object contacts the virtual surface;
a trajectory processing unit, configured to determine and record a trajectory generated while the input object contacts the virtual surface; and
an input determining unit, configured to determine input content according to the recorded trajectory.
According to a preferred embodiment of the present invention, the apparatus further comprises:
a presentation unit, configured to present the virtual surface in a preset style.
According to a preferred embodiment of the present invention, the position acquiring unit is specifically configured to acquire the position information of the input object detected by a spatial locator.
According to a preferred embodiment of the present invention, the contact detecting unit is specifically configured to judge whether the distance between the position of the input object and the position of the virtual surface is within a preset range, and if so, to determine that the input object contacts the virtual surface.
According to a preferred embodiment of the present invention, the apparatus further comprises:
a presentation unit, configured to present tactile feedback information if it is detected that the input object contacts the virtual surface.
According to a preferred embodiment of the present invention, the presentation unit adopts at least one of the following manners when presenting the tactile feedback information:
changing the color of the virtual surface;
playing a tone indicating that the input object has contacted the virtual surface;
presenting, in a preset style, the contact point of the input object on the virtual surface.
According to a preferred embodiment of the present invention, the trajectory processing unit is specifically configured to: while the input object is in contact with the virtual surface, acquire the projection of the position information of the input object onto the virtual surface; and, when the input object separates from the virtual surface, determine and record the trajectory formed by the projection points during the contact.
According to a preferred embodiment of the present invention, the input determining unit is specifically configured to: commit to the screen, based on the recorded trajectory, a line consistent with the recorded trajectory; or,
commit to the screen, based on the recorded trajectory, a character matching the recorded trajectory; or,
display, based on the recorded trajectory, candidate characters matching the recorded trajectory, and commit to the screen the candidate character selected by the user.
According to a preferred embodiment of the present invention, the trajectory processing unit is further configured to clear the recorded trajectory after the on-screen commit is completed, or to clear the recorded trajectory after a gesture canceling the input is captured.
According to a preferred embodiment of the present invention, the apparatus further comprises:
a presentation unit, configured to display on the virtual surface the trajectory generated while the input object contacts the virtual surface, and to clear the trajectory displayed on the virtual surface after the on-screen commit is completed.
The present invention also provides a device, comprising:
a memory, including one or more programs; and
one or more processors, coupled to the memory, which execute the one or more programs to carry out the operations performed in the above method.
The present invention also provides a computer storage medium encoded with a computer program which, when executed by one or more computers, causes the one or more computers to carry out the operations performed in the above method.
The present invention also provides a virtual reality system, comprising: an input object, a spatial locator, and a virtual reality device;
the spatial locator is configured to detect the position of the input object in three-dimensional space and provide it to the virtual reality device;
the virtual reality device is configured to determine and record position information of a virtual surface in the three-dimensional space; detect, according to the position information of the input object and the position information of the virtual surface, whether the input object contacts the virtual surface; determine and record a trajectory generated while the input object contacts the virtual surface; and determine input content according to the recorded trajectory.
According to a preferred embodiment of the present invention, the virtual reality device is further configured to present the virtual surface in a preset style.
According to a preferred embodiment of the present invention, when detecting, according to the position information of the input object and the position information of the virtual surface, whether the input object contacts the virtual surface, the virtual reality device specifically performs:
judging whether the distance between the position of the input object and the position of the virtual surface is within a preset range, and if so, determining that the input object contacts the virtual surface.
According to a preferred embodiment of the present invention, the virtual reality device is further configured to present tactile feedback information if it is detected that the input object contacts the virtual surface.
According to a preferred embodiment of the present invention, the manner in which the virtual reality device presents the tactile feedback information comprises at least one of the following:
changing the color of the virtual surface;
playing a tone indicating that the input object has contacted the virtual surface;
presenting, in a preset style, the contact point of the input object on the virtual surface.
According to a preferred embodiment of the present invention, the manner in which the virtual reality device presents the tactile feedback information comprises: sending a trigger message to the input object;
and the input object is further configured to provide vibration feedback after receiving the trigger message.
According to a preferred embodiment of the present invention, when determining the trajectory generated while the input object contacts the virtual surface, the virtual reality device specifically performs:
while the input object is in contact with the virtual surface, acquiring the projection of the position information of the input object onto the virtual surface; and
when the input object separates from the virtual surface, determining and recording the trajectory formed by the projection points during the contact.
According to a preferred embodiment of the present invention, when determining the input content according to the recorded trajectory, the virtual reality device specifically performs:
committing to the screen, based on the recorded trajectory, a line consistent with the recorded trajectory; or,
committing to the screen, based on the recorded trajectory, a character matching the recorded trajectory; or,
displaying, based on the recorded trajectory, candidate characters matching the recorded trajectory, and committing to the screen the candidate character selected by the user.
According to a preferred embodiment of the present invention, the virtual reality device is further configured to clear the recorded trajectory after the on-screen commit is completed, or to clear the recorded trajectory after a gesture canceling the input is captured.
According to a preferred embodiment of the present invention, the virtual reality device is further configured to display on the virtual surface the trajectory generated while the input object contacts the virtual surface, and to clear the trajectory displayed on the virtual surface after the on-screen commit is completed.
It can be seen from the above technical solutions that the present invention determines and records the position information of a virtual surface in three-dimensional space, detects whether the input object contacts the virtual surface according to the position information of the input object and that of the virtual surface, and determines the input content according to the recorded trajectory generated while the input object contacts the virtual surface. Information input within three-dimensional space is thus achieved, which suits virtual reality technology and makes the user's input experience in virtual reality feel like input in real space.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of a system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a scenario according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method according to an embodiment of the present invention;
FIG. 4a is a diagram showing an example of judging whether an input object is in contact with the virtual surface according to an embodiment of the present invention;
FIG. 4b is a schematic diagram of contact feedback according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the input process for one character according to an embodiment of the present invention;
FIG. 6a is a diagram showing an example of character input according to an embodiment of the present invention;
FIG. 6b is a diagram showing an example of character input according to an embodiment of the present invention;
FIG. 7 is a structural diagram of an apparatus according to an embodiment of the present invention;
FIG. 8 is a structural diagram of a device according to an embodiment of the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
The terms used in the embodiments of the present invention are for the purpose of describing particular embodiments only and are not intended to limit the present invention. The singular forms “a”, “said”, and “the” used in the embodiments of the present invention and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise.
It should be understood that the term “and/or” used herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean that A exists alone, that both A and B exist, or that B exists alone. In addition, the character “/” herein generally indicates an “or” relationship between the objects before and after it.
Depending on the context, the word “if” as used herein may be interpreted as “when”, “upon”, “in response to determining”, or “in response to detecting”. Similarly, depending on the context, the phrase “if it is determined” or “if (a stated condition or event) is detected” may be interpreted as “when it is determined”, “in response to determining”, “when (the stated condition or event) is detected”, or “in response to detecting (the stated condition or event)”.
To facilitate understanding of the present invention, the system on which it is based is first described briefly. As shown in FIG. 1, the system mainly includes a virtual reality device, a spatial locator, and an input object. The input object may be a device of any form, such as a brush or a glove, that the user can hold for information input, and may even be the user's finger.
The spatial locator is a sensor that detects the position of a moving object in three-dimensional space. Widely used spatial positioning methods include low-frequency magnetic-field spatial positioning, ultrasonic spatial positioning, and laser spatial positioning. Taking a low-frequency magnetic-field sensor as an example, the magnetic field emitter in the sensor generates a low-frequency magnetic field in the three-dimensional space; the position and orientation of the receiver relative to the emitter can be calculated, and the data is transmitted to the host computer (in the present invention, the computer or mobile device to which the virtual reality device is connected; in the embodiments of the present invention, the virtual reality device and the host computer connected to it are collectively referred to as the virtual reality device). In the embodiments of the present invention, the receiver can be placed on the input object; that is, the spatial locator detects the position of the input object in three-dimensional space and provides it to the virtual reality device.
Taking laser spatial positioning as an example, several laser-emitting devices are installed in the three-dimensional space and scan it with laser beams in both the horizontal and vertical directions, while multiple laser-sensing receivers are placed on the object to be positioned; the three-dimensional coordinates of the object are obtained by calculating the difference between the angles at which the two beams reach it. As the object moves, its three-dimensional coordinates change accordingly, yielding updated position information. This principle can also be used to locate the input object, and it allows any input object to be positioned without additionally installing devices such as receivers on it.
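For illustration only, recovering a 3D coordinate from the rays measured by two base stations can be sketched as below. This is a hedged sketch, not the patent's implementation: the station poses, the conversion of sweep angles to ray directions, and all names are assumptions.

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Least-squares intersection of two positioning rays.

    p1, p2: (3,) positions of two base stations.
    d1, d2: (3,) unit direction vectors toward the tracked object, derived
            from the horizontal/vertical sweep angles measured at each station.
    Returns the midpoint of the closest points on the two rays, i.e. an
    estimate of the object's three-dimensional coordinates.
    """
    A = np.stack([d1, -d2], axis=1)                  # solve p1 + t0*d1 ≈ p2 + t1*d2
    t, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    return (p1 + t[0] * d1 + p2 + t[1] * d2) / 2.0
```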
A virtual reality device is a general term for devices that can provide a virtual reality effect to a user or a receiving device. Generally speaking, virtual reality devices mainly include:
three-dimensional environment acquisition devices, which collect three-dimensional data of objects in the physical world (i.e., the real world) and recreate it in the virtual reality environment; such devices include, for example, 3D printing devices;
display devices, which display virtual reality images; such devices include, for example, virtual reality glasses, virtual reality helmets, augmented reality devices, and mixed reality devices;
sound devices, which simulate the acoustic environment of the physical world and provide sound output in the virtual environment to a user or receiving device; such devices include, for example, three-dimensional surround sound devices;
interaction devices, which collect the interaction and/or movement behavior of the user or receiving device in the virtual environment and, taking it as data input, generate feedback and changes to the virtual reality environment's parameters, images, acoustics, timing, etc.; such devices include, for example, position trackers, data gloves, 3D mice (or pointers), motion capture devices, eye trackers, force feedback devices, and other interactive devices.
The executing body of the following method embodiments of the present invention is the virtual reality device, and in the apparatus embodiments of the present invention, the apparatus is disposed on the virtual reality device.
The embodiments of the present invention may be based on the scenario shown in FIG. 2: the user wears a virtual reality device such as a head-mounted display, and when the user triggers the input function, a virtual surface can be “generated” in the three-dimensional space; the user can hold the input object and write on the virtual surface, thereby completing information input. The virtual surface is actually just a reference position for user input; it does not really exist and may be a plane or a curved surface. To make the user's input experience feel like input in the real world, the virtual surface can be presented in a certain style, for example as a blackboard or as a sheet of white paper. In this way, the user's input on the virtual surface feels like writing on a blackboard or on white paper in the real world. The method that can realize the above scenario is described in detail below with reference to embodiments.
FIG. 3 is a flowchart of a method according to an embodiment of the present invention. As shown in FIG. 3, the method may include the following steps:
In 301, position information of a virtual surface in three-dimensional space is determined and recorded.
This step can be executed when the user triggers the input function, for example when the user logs in and needs to enter a user name and password, or when chat content is entered through an instant messaging application; at that point this step is executed to determine and record the position information of the virtual surface in three-dimensional space.
In this step, a virtual plane needs to be determined, within the region of three-dimensional space reachable by the user of the virtual reality device, as the position of the virtual surface, and the user can input information by writing on that virtual surface. The virtual surface actually serves as a reference position for user input; it may be a plane or a curved surface, and it is virtual rather than real. The position of the virtual surface may be set using the position of the virtual reality device as a reference, or using the computer or mobile device to which the virtual reality device is connected as a reference. In addition, since the trajectory of the input object held by the user on the virtual surface needs to be detected, and the position information of the input object is detected by the spatial locator, the position of the virtual surface needs to be within the detection range of the spatial locator.
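As a purely illustrative sketch of such a reference position (the 0.5 m offset, the pose source, and all names are assumptions, not values from the patent), a planar virtual surface anchored to the device might be defined as:

```python
import numpy as np

def make_virtual_plane(device_pos, device_forward, offset_m=0.5):
    """Place a planar virtual surface a fixed distance in front of the device.

    device_pos: (3,) reference position of the virtual reality device.
    device_forward: (3,) direction the device is facing.
    Returns (anchor, normal): a point on the plane and a unit normal
    pointing back toward the user.
    """
    forward = device_forward / np.linalg.norm(device_forward)
    anchor = np.asarray(device_pos) + offset_m * forward
    return anchor, -forward
```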
To give the user a stronger “sense of distance” to the virtual surface, two additional means can be used in the present invention to let the user perceive the existence of the virtual surface and thus know where to input. One is to present tactile feedback information when the user touches the virtual surface with the input object, which will be detailed later. The other is to present the virtual surface in a preset style, for example as a blackboard or as a sheet of white paper; in this way, during input the user on the one hand has a sense of distance and knows where the virtual surface is located, and on the other hand can write as if on a medium such as a blackboard or white paper, which makes for a better user experience.
In 302, position information of the input object in three-dimensional space is acquired.
When the user starts to input with the input object, for example writing with a brush on a blackboard-style virtual surface, the spatial locator can locate the position information of the input object as it moves. This step therefore actually obtains, from the spatial locator, the position information of the input object in three-dimensional space detected by the spatial locator in real time; the position information can be a three-dimensional coordinate value.
In 303, whether the input object contacts the virtual surface is detected according to the position information of the input object and the position information of the virtual surface.
Since the position information of the virtual surface has been recorded and the position information of the input object has been acquired, comparing the two makes it possible to judge from the distance between them whether the input object contacts the virtual surface. Specifically, it can be judged whether the distance between the position of the input object and the position of the virtual surface is within a preset range; if so, it can be determined that the input object contacts the virtual surface. For example, the input object may be considered to be in contact with the virtual surface when the distance between them is within [-1 cm, 1 cm].
When determining the distance between the position of the input object and the position of the virtual surface, as shown in FIG. 4a, the virtual surface can be regarded as being composed of many points on that surface; the spatial locator detects the position information of the input object in real time and transmits it to the apparatus performing this method. The solid points in FIG. 4a are the points constituting the virtual surface (only some of them are shown in the figure), and the hollow point is the position of the input object. The apparatus determines the position A of the input object and the position B of the point on the virtual surface closest to A, and then judges whether the distance between A and B is within a preset range, for example [-1 cm, 1 cm]; if so, the input object is considered to be in contact with the virtual surface.
Of course, besides the manner shown in FIG. 4a, other ways of determining the distance between the position of the input object and the position of the virtual surface can be used, for example projecting the position of the input object onto the virtual surface, which is not described again here.
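As a concrete illustration of the FIG. 4a scheme, the minimal sketch below treats the virtual surface as a set of sampled points and tests the nearest-point distance against the preset band; the sampling itself, the 1 cm band, and the function names are assumptions for illustration:

```python
import numpy as np

CONTACT_BAND_M = 0.01  # the preset range from the example: within 1 cm

def is_touching(input_pos, surface_points):
    """Nearest-point contact test sketched in FIG. 4a.

    input_pos: (3,) position A of the input object from the spatial locator.
    surface_points: (N, 3) sampled points regarded as composing the surface.
    Returns True when the nearest surface point B lies within the band.
    """
    dists = np.linalg.norm(surface_points - np.asarray(input_pos), axis=1)
    return float(dists.min()) <= CONTACT_BAND_M
```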
After contacting the virtual surface, the user can produce a handwriting stroke by keeping contact with the virtual surface while moving. As mentioned above, to give the user a better sense of distance and make stroke input easier, tactile feedback information can be presented when the input object contacts the virtual surface. The presentation forms of the tactile feedback information may include, but are not limited to, the following:
1) Changing the color of the virtual surface. For example, when the input object is not touching the virtual surface, the virtual surface is white; when the input object touches it, the virtual surface turns gray to indicate the contact.
2) Playing a tone indicating that the input object has contacted the virtual surface. For example, once the input object touches the virtual surface, preset music is played; once the input object leaves the virtual surface, the music is paused.
3) Presenting the contact point of the input object on the virtual surface in a preset style. For example, once the input object touches the virtual surface, a water-ripple contact point is formed; the closer the object is to the virtual surface, the larger the ripple, as if simulating the pressure exerted on the medium during real writing, as shown in FIG. 4b. The pattern of the contact point is not limited in the present invention; it may also be a simple black dot that is displayed at the contact position when the input object touches the virtual surface and disappears when the object leaves it.
The tactile feedback in 1) and 3) above is visual feedback, and that in 2) is auditory feedback. Besides these, the mechanical (force) feedback shown in 4) below can also be adopted.
4) Providing vibration feedback through the input object. In this case there are certain requirements on the input object: ordinary objects such as chalk or a finger no longer apply, and the input object needs to be able to receive messages and to vibrate.
The virtual reality device checks at short time intervals whether the input object is touching the virtual surface, and when it determines that the input object is in contact with the virtual surface, it sends a trigger message to the input object. The input object provides vibration feedback after receiving the trigger message. When the input object leaves the virtual surface, it no longer receives trigger messages and provides no vibration feedback. In this way, while writing on the virtual surface the user feels vibration feedback whenever the virtual surface is touched, and can thus clearly perceive the contact state between the input object and the virtual surface.
The trigger message sent by the virtual reality device to the input object may be sent wirelessly, for example via Wi-Fi, Bluetooth, or NFC (Near Field Communication), or in a wired manner.
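A minimal polling sketch of this feedback scheme, reusing the illustrative `is_touching` test above, might look as follows (the `get_position` and `send_trigger` callables and the 10 ms interval are assumptions):

```python
import time

def vibration_feedback_loop(get_position, surface_points, send_trigger,
                            interval_s=0.01):
    """Re-check contact at short time intervals, as described above.

    get_position: callable returning the input object's current (3,) position.
    send_trigger: callable sending one trigger message to the input object
                  (e.g. over Wi-Fi, Bluetooth, or NFC); the object vibrates
                  on each message it receives.
    """
    while True:
        if is_touching(get_position(), surface_points):
            send_trigger()       # contact held: keep the vibration going
        time.sleep(interval_s)   # no message is sent while off the surface
```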
In 304, the trajectory generated while the input object contacts the virtual surface is determined and recorded.
Since the motion of the input object in three-dimensional space is three-dimensional, this three-dimensional motion (a series of position points) needs to be converted into two-dimensional motion on the virtual surface. While the input object is in contact with the virtual surface, the projection of the input object's position information onto the virtual surface can be acquired; when the input object separates from the virtual surface, the trajectory formed by the projection points during the contact is determined and recorded. The trajectory recorded this time can be regarded as one handwriting stroke.
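Under the planar-surface assumption used in the earlier sketches, the projection and per-stroke recording could be sketched as below (all names are illustrative):

```python
import numpy as np

def project_to_plane(point, anchor, normal):
    """Orthogonal projection of a 3D point onto the virtual plane
    (anchor: a point on the plane; normal: the plane's unit normal)."""
    point = np.asarray(point)
    return point - np.dot(point - anchor, normal) * normal

class StrokeRecorder:
    """Accumulates projection points while the object touches the surface
    and closes one recorded stroke when the object separates from it."""

    def __init__(self, anchor, normal):
        self.anchor, self.normal = anchor, normal
        self.current = []   # projections of the in-progress stroke
        self.strokes = []   # completed strokes (the recorded trajectory)

    def update(self, position, touching):
        if touching:
            self.current.append(project_to_plane(position, self.anchor, self.normal))
        elif self.current:  # the object just left the surface
            self.strokes.append(self.current)
            self.current = []
```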
在305中,依据记录的轨迹,确定输入的内容。
如果用户采用类似“画画”的方式进行输入,即所画即所得,那么可以依据已记录的轨迹,上屏与已记录轨迹一致的线条。上屏完成后,清空已记录的轨迹,当前这一个笔迹输入完毕,重新开始检测并记录下一次输入物体接触虚拟面所产生的笔迹。
如果用户想要输入的是字符,且采用的输入方式也是所画即所得,例如用户在虚拟面上输入字母“a”的轨迹,那么通过匹配可以得到字母a,就直接上屏字母“a”。对于有些一笔可以完成的数字也同样适用,例如用户在虚拟面上输入数字“2”的轨迹,通过匹配可以得到数字2,就可以直接上屏数字“2”。上屏完成后,清空已记录的轨迹,当前这一个笔迹输入完毕,重新开始检测并记录下一次输入物体接触虚拟面所产生的笔迹。
If the user wants to input a character using an encoding- or stroke-based input manner, for example the user inputs pinyin on the virtual surface and expects the Chinese character corresponding to the pinyin, or inputs the strokes of a Chinese character and expects the character corresponding to those strokes, then candidate characters matching the recorded trajectory are displayed according to the recorded trajectory. If the user does not select any candidate character, the current stroke input is finished, and detection and recording of the stroke produced the next time the input object contacts the virtual surface starts anew. After the second stroke is input, the recorded trajectory consists of the first stroke and the second stroke together; this recorded trajectory is matched again and the matching candidate characters are displayed. If the user still does not select any candidate character, detection and recording of the next stroke continues, until the user selects a candidate character to commit to the screen. After the commit is completed, the recorded trajectory is cleared and input of the next character begins. The input process of one character may be as shown in Fig. 5.
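The stroke-accumulation flow just described can be summarised as a small state holder; this is a hedged sketch only, and match_candidates stands in for whatever handwriting-recognition engine is actually used.

    class CharacterInput:
        # Accumulates strokes for one character until a candidate is chosen.
        def __init__(self, match_candidates):
            self.match = match_candidates  # strokes -> candidate characters
            self.strokes = []

        def add_stroke(self, stroke):
            # Record one more stroke and refresh the candidate list.
            self.strokes.append(stroke)
            return self.match(self.strokes)

        def commit(self, chosen):
            # Commit the selected candidate and clear the recorded trajectory.
            self.strokes.clear()
            return chosen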
In addition, the trajectory the user has already input may be displayed on the virtual surface until the commit is completed, after which the trajectory displayed on the virtual surface is cleared. Alternatively, the trajectory displayed on the virtual surface may not be deleted automatically but deleted manually by the user, i.e., cleared through a specific gesture, for example by tapping a "clear trajectory" button on the virtual surface; once the user's tap operation at the button position is detected, the trajectory displayed on the virtual surface is cleared.
For ease of understanding, take an example. Suppose the user first inputs a stroke "ㄑ" with the input object; this trajectory is recorded, and candidate characters matching it, for example "女", "人", "(", are displayed, as shown in Fig. 6a. The candidates do not include the character the user wants, so the user continues with a second stroke (shown as an inline stroke image in the source, Figure PCTCN2018075236-appb-000001); this stroke is recorded, so the recorded trajectory now consists of "ㄑ" and the second stroke, and the matching candidate characters, for example "女", "义", "X", are displayed. If the character the user wants is still absent, the user continues with a stroke "–", so the recorded trajectory consists of "ㄑ", the second stroke, and "–", and the matching candidate characters, for example "女", "如", "好", are displayed, as shown in Fig. 6b. Suppose the candidates now include the character "好" that the user wants to input; the user can then select "好" from the candidates to commit to the screen. After the commit is completed, the recorded trajectory and the trajectory displayed on the virtual surface are cleared, and the user can begin inputting the next character.
If, while inputting a character, the user wants to revoke the trajectory already input, a gesture of revoking input can be performed. Once the user's revoke gesture is captured, the recorded trajectory is cleared, and the user can re-input the current character. For example, a "revoke button" may be set on the virtual surface, as shown in Fig. 6b; if a tap operation of the input object there is captured, the recorded trajectory is cleared, and the corresponding trajectory displayed on the virtual surface may be cleared at the same time. Other gestures may also be used, for example quickly moving the input object leftward, or quickly moving it upward, while it is not in contact with the virtual surface.
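For the gesture variants, a minimal sketch of detecting the "quick leftward move while not touching" case; the axis convention and the 1 m/s speed threshold are assumptions for the example.

    import numpy as np

    def is_undo_gesture(prev_pos, cur_pos, dt, touching, speed=1.0):
        # Fast movement toward -x while off the surface counts as undo.
        if touching or dt <= 0:
            return False
        velocity = (np.asarray(cur_pos, float) - np.asarray(prev_pos, float)) / dt
        return velocity[0] < -speed    # metres per second, leftward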
It should be noted that the execution subject of the above method embodiments may be an input apparatus, which may be an application located in a local terminal (the virtual reality device side), or a functional unit such as a plug-in or a Software Development Kit (SDK) in an application located in the local terminal.
The above describes the method provided by the present invention; the apparatus provided by the present invention is detailed below with reference to embodiments. Fig. 7 is a structural diagram of an apparatus provided by an embodiment of the present invention. As shown in Fig. 7, the apparatus may include: a virtual surface processing unit 01, a position acquiring unit 02, a contact detecting unit 03, a trajectory processing unit 04, and an input determining unit 05, and may further include a presenting unit 06. The main functions of the constituent units are as follows:
The virtual surface processing unit 01 is responsible for determining and recording position information of the virtual surface in three-dimensional space. In the embodiments of the present invention, a virtual plane may be determined, within the reach of the user of the virtual reality device in three-dimensional space, as the position of the virtual surface, and the user can input information by writing on that virtual surface. The virtual surface in fact serves as a reference position for user input; it is virtual and does not really exist. In addition, since the trajectory of the input object held by the user on the virtual surface needs to be detected and the position information of the input object is detected by the spatial positioning system, the position of the virtual surface needs to be within the detection range of the spatial positioning system.
The presenting unit 06 may present the virtual surface in a preset style, for example as a blackboard or as a sheet of white paper; in this way, during input the user on the one hand has a sense of distance and knows where the virtual surface is, and on the other hand can write as if on a medium such as a blackboard or white paper, which improves the user experience.
The position acquiring unit 02 is responsible for acquiring position information of the input object in three-dimensional space; specifically, it acquires the position information of the input object detected by the spatial positioning system, and the position information may be three-dimensional coordinate values.
The contact detecting unit 03 is responsible for detecting, according to the position information of the input object and the position information of the virtual surface, whether the input object contacts the virtual surface. Since the position information of the virtual surface has been recorded and the position information of the input object has been acquired, whether the input object contacts the virtual surface can be judged from the distance between the two, obtained by comparing them. Specifically, it may be judged whether the distance between the position of the input object and the position of the virtual surface is within a preset range; if so, it can be determined that the input object contacts the virtual surface. For example, the input object may be regarded as contacting the virtual surface when the distance between the two is within the range [-1 cm, 1 cm].
The trajectory processing unit 04 is responsible for determining and recording the trajectory produced while the input object contacts the virtual surface.
To give the user a better sense of distance and facilitate stroke input, the presenting unit 06 may present touch feedback information when the input object contacts the virtual surface. The presentation forms of the touch feedback information may include, but are not limited to, the following:
1) Changing the color of the virtual surface. For example, the virtual surface is white while the input object is not in contact with it, and turns gray when the input object contacts it, to indicate the contact.
2) Playing a prompt tone indicating that the input object contacts the virtual surface. For example, preset music is played as soon as the input object contacts the virtual surface, and pauses as soon as the input object leaves it.
3) Presenting, in a preset style, the contact point of the input object on the virtual surface. For example, as soon as the input object contacts the virtual surface, a water-ripple-style contact point is formed, and the closer the input object presses toward the virtual surface, the larger the ripple, simulating the pressure a user exerts on the medium during real writing, as shown in Fig. 4b. The style of the contact point is not limited in the present invention; it may also be a simple black dot, displayed at the contact position when the input object contacts the virtual surface and disappearing when it leaves.
4) Providing vibration feedback through the input object. In this case there are certain requirements on the input object: ordinary objects such as chalk or a finger are no longer applicable, and the input object needs to be capable of receiving messages and of vibrating.
The virtual reality device judges at very short intervals whether the input object contacts the virtual surface and, upon determining that it does, sends a trigger message to the input object. Upon receiving the trigger message, the input object provides vibration feedback. When the input object leaves the virtual surface, it receives no trigger message and therefore provides no vibration feedback. The user thus experiences, while writing on the virtual surface, vibration feedback whenever the surface is touched, and can clearly perceive the contact state between the input object and the virtual surface.
The trigger message sent by the virtual reality device to the input object may be sent wirelessly, for example via Wi-Fi, Bluetooth, or NFC (Near Field Communication), or in a wired manner.
Since the motion of the input object in three-dimensional space is three-dimensional, this three-dimensional motion (a series of position points) needs to be converted into two-dimensional motion on the virtual surface. The trajectory processing unit 04 may, while the input object is in contact with the virtual surface, acquire the projection of the position information of the input object onto the virtual surface, and, when the input object separates from the virtual surface, determine and record the trajectory formed by the projected points during the contact.
The input determining unit 05 is responsible for determining the input content according to the recorded trajectory. Specifically, the input determining unit 05 may, according to the recorded trajectory, commit to the screen a line consistent with the recorded trajectory; or commit to the screen a character matching the recorded trajectory; or display candidate characters matching the recorded trajectory and commit to the screen the candidate character selected by the user, the candidate characters being presented by the presenting unit 06.
Furthermore, after the commit operation is completed, the trajectory processing unit 04 clears the recorded trajectory and starts input processing of the next character; or, after a gesture of revoking input is captured, it clears the recorded trajectory and restarts input processing of the current character.
In addition, the presenting unit 06 may present, on the virtual surface, the trajectory produced while the input object contacts the virtual surface, and clear the trajectory presented on the virtual surface after the commit operation is completed.
The above method and apparatus provided by the embodiments of the present invention may be embodied as a computer program arranged and run in a device. The device may include one or more processors, as well as a memory and one or more programs, as shown in Fig. 8, where the one or more programs are stored in the memory and executed by the one or more processors to implement the method flows and/or apparatus operations shown in the above embodiments of the present invention. For example, the method flow executed by the one or more processors may include the following steps (an illustrative sketch follows the list):
determining and recording position information of a virtual surface in three-dimensional space;
acquiring position information of an input object in three-dimensional space;
detecting, according to the position information of the input object and the position information of the virtual surface, whether the input object contacts the virtual surface;
determining and recording a trajectory produced while the input object contacts the virtual surface;
determining input content according to the recorded trajectory.
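Tying the steps together, a hedged end-to-end sketch reusing the helpers sketched in the sections above; locator.get_position and recognizer are assumed interfaces, not part of this application.

    def input_session(locator, surface, recognizer):
        # End-to-end loop over steps 301-305, yielding candidate lists.
        origin, normal, u, v, _ = surface            # 301: surface recorded
        char_input = CharacterInput(recognizer)
        stroke, was_touching = [], False
        while True:
            pos = locator.get_position()             # 302: object position
            touching = is_touching(pos, origin, normal)   # 303: contact test
            if touching:
                stroke.append(project_to_surface(pos, origin, u, v))  # 304
            elif was_touching and stroke:
                yield char_input.add_stroke(stroke)  # 305: candidates
                stroke = []
            was_touching = touching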
As can be seen from the above description, the method, apparatus, and device provided by the present invention may have the following advantages:
1) Information input in three-dimensional space can be realized, which suits virtual reality technology.
2) The present invention differs from traditional input manners, which require a keyboard, a handwriting pad, and the like: on the one hand, such bulky input devices must be carried around; on the other hand, the input device must be watched while inputting. With the input manner provided by the present application, the user can input while holding any input object, or even without a dedicated input device: objects such as the user's finger, a pen at hand, or a stick can all complete the input. Moreover, since the virtual surface is in three-dimensional space, the user only needs to write on the virtual surface and does not need to additionally watch an input device.
In the several embodiments provided by the present invention, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other division manners in actual implementation.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist physically alone, or two or more units may be integrated in one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute some of the steps of the methods described in the embodiments of the present invention. The storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included in the scope of protection of the present invention.

Claims (34)

  1. An input method, characterized in that the method comprises:
    determining and recording position information of a virtual surface in three-dimensional space;
    acquiring position information of an input object in three-dimensional space;
    detecting, according to the position information of the input object and the position information of the virtual surface, whether the input object contacts the virtual surface;
    determining and recording a trajectory produced while the input object contacts the virtual surface;
    determining input content according to the recorded trajectory.
  2. The method according to claim 1, characterized in that the method further comprises:
    presenting the virtual surface in a preset style.
  3. The method according to claim 1, characterized in that acquiring the position information of the input object in three-dimensional space comprises:
    acquiring the position information of the input object detected by a spatial positioning system.
  4. The method according to claim 1, characterized in that detecting, according to the position information of the input object and the position information of the virtual surface, whether the input object contacts the virtual surface comprises:
    judging whether the distance between the position of the input object and the position of the virtual surface is within a preset range, and if so, determining that the input object contacts the virtual surface.
  5. The method according to claim 1 or 4, characterized in that the method further comprises:
    presenting touch feedback information if it is detected that the input object contacts the virtual surface.
  6. The method according to claim 5, characterized in that presenting the touch feedback information comprises at least one of the following:
    changing the color of the virtual surface;
    playing a prompt tone indicating that the input object contacts the virtual surface;
    presenting, in a preset style, the contact point of the input object on the virtual surface.
  7. The method according to claim 5, characterized in that presenting the touch feedback information comprises:
    providing vibration feedback through the input object.
  8. The method according to claim 1, characterized in that determining the trajectory produced while the input object contacts the virtual surface comprises:
    while the input object is in contact with the virtual surface, acquiring the projection of the position information of the input object onto the virtual surface;
    when the input object separates from the virtual surface, determining and recording the trajectory formed by the projected points during the contact.
  9. The method according to claim 1 or 8, characterized in that determining the input content according to the recorded trajectory comprises:
    committing to the screen, according to the recorded trajectory, a line consistent with the recorded trajectory; or,
    committing to the screen, according to the recorded trajectory, a character matching the recorded trajectory; or,
    displaying, according to the recorded trajectory, candidate characters matching the recorded trajectory, and committing to the screen the candidate character selected by the user.
  10. The method according to claim 9, characterized in that the method further comprises:
    clearing the recorded trajectory after the commit operation is completed; or,
    clearing the recorded trajectory after a gesture of revoking input is captured.
  11. The method according to claim 9, characterized in that the method further comprises:
    presenting, on the virtual surface, the trajectory produced while the input object contacts the virtual surface, and clearing the trajectory presented on the virtual surface after the commit operation is completed.
  12. An input apparatus, characterized in that the apparatus comprises:
    a virtual surface processing unit, configured to determine and record position information of a virtual surface in three-dimensional space;
    a position acquiring unit, configured to acquire position information of an input object in three-dimensional space;
    a contact detecting unit, configured to detect, according to the position information of the input object and the position information of the virtual surface, whether the input object contacts the virtual surface;
    a trajectory processing unit, configured to determine and record the trajectory produced while the input object contacts the virtual surface;
    an input determining unit, configured to determine input content according to the recorded trajectory.
  13. The apparatus according to claim 12, characterized in that the apparatus further comprises:
    a presenting unit, configured to present the virtual surface in a preset style.
  14. The apparatus according to claim 12, characterized in that the position acquiring unit is specifically configured to acquire the position information of the input object detected by a spatial positioning system.
  15. The apparatus according to claim 12, characterized in that the contact detecting unit is specifically configured to judge whether the distance between the position of the input object and the position of the virtual surface is within a preset range, and if so, determine that the input object contacts the virtual surface.
  16. The apparatus according to claim 12 or 15, characterized in that the apparatus further comprises:
    a presenting unit, configured to present touch feedback information if it is detected that the input object contacts the virtual surface.
  17. The apparatus according to claim 16, characterized in that the presenting unit, when presenting the touch feedback information, adopts at least one of the following manners:
    changing the color of the virtual surface;
    playing a prompt tone indicating that the input object contacts the virtual surface;
    presenting, in a preset style, the contact point of the input object on the virtual surface.
  18. The apparatus according to claim 17, characterized in that the presenting unit, when presenting the touch feedback information, provides vibration feedback through the input object.
  19. The apparatus according to claim 12, characterized in that the trajectory processing unit is specifically configured to: while the input object is in contact with the virtual surface, acquire the projection of the position information of the input object onto the virtual surface; and when the input object separates from the virtual surface, determine and record the trajectory formed by the projected points during the contact.
  20. The apparatus according to claim 12 or 19, characterized in that the input determining unit is specifically configured to: commit to the screen, according to the recorded trajectory, a line consistent with the recorded trajectory; or,
    commit to the screen, according to the recorded trajectory, a character matching the recorded trajectory; or,
    display, according to the recorded trajectory, candidate characters matching the recorded trajectory, and commit to the screen the candidate character selected by the user.
  21. The apparatus according to claim 20, characterized in that the trajectory processing unit is further configured to clear the recorded trajectory after the commit operation is completed, or clear the recorded trajectory after a gesture of revoking input is captured.
  22. The apparatus according to claim 20, characterized in that the apparatus further comprises:
    a presenting unit, configured to present, on the virtual surface, the trajectory produced while the input object contacts the virtual surface, and clear the trajectory presented on the virtual surface after the commit operation is completed.
  23. A device, comprising:
    a memory, including one or more programs;
    one or more processors, coupled to the memory and executing the one or more programs, so as to implement the operations executed in the method according to any one of claims 1 to 4 and 8.
  24. A computer storage medium encoded with a computer program, wherein the program, when executed by one or more computers, causes the one or more computers to execute the operations executed in the method according to any one of claims 1 to 4 and 8.
  25. A virtual reality system, characterized in that the virtual reality system comprises: an input object, a spatial positioning system, and a virtual reality device;
    the spatial positioning system is configured to detect the position of the input object in three-dimensional space and provide it to the virtual reality device;
    the virtual reality device is configured to determine and record position information of a virtual surface in three-dimensional space; detect, according to the position information of the input object and the position information of the virtual surface, whether the input object contacts the virtual surface; determine and record the trajectory produced while the input object contacts the virtual surface; and determine input content according to the recorded trajectory.
  26. The virtual reality system according to claim 25, characterized in that the virtual reality device is further configured to present the virtual surface in a preset style.
  27. The virtual reality system according to claim 25, characterized in that the virtual reality device, when detecting, according to the position information of the input object and the position information of the virtual surface, whether the input object contacts the virtual surface, specifically executes:
    judging whether the distance between the position of the input object and the position of the virtual surface is within a preset range, and if so, determining that the input object contacts the virtual surface.
  28. The virtual reality system according to claim 25 or 27, characterized in that the virtual reality device is further configured to present touch feedback information if it is detected that the input object contacts the virtual surface.
  29. The virtual reality system according to claim 28, characterized in that the manner in which the virtual reality device presents the touch feedback information comprises at least one of the following:
    changing the color of the virtual surface;
    playing a prompt tone indicating that the input object contacts the virtual surface;
    presenting, in a preset style, the contact point of the input object on the virtual surface.
  30. The virtual reality system according to claim 28, characterized in that the manner in which the virtual reality device presents the touch feedback information comprises: sending a trigger message to the input object;
    the input object is further configured to provide vibration feedback after receiving the trigger message.
  31. The virtual reality system according to claim 25, characterized in that the virtual reality device, when determining the trajectory produced while the input object contacts the virtual surface, specifically executes:
    while the input object is in contact with the virtual surface, acquiring the projection of the position information of the input object onto the virtual surface;
    when the input object separates from the virtual surface, determining and recording the trajectory formed by the projected points during the contact.
  32. The virtual reality system according to claim 25 or 31, characterized in that the virtual reality device, when determining the input content according to the recorded trajectory, specifically executes:
    committing to the screen, according to the recorded trajectory, a line consistent with the recorded trajectory; or,
    committing to the screen, according to the recorded trajectory, a character matching the recorded trajectory; or,
    displaying, according to the recorded trajectory, candidate characters matching the recorded trajectory, and committing to the screen the candidate character selected by the user.
  33. The virtual reality system according to claim 32, characterized in that the virtual reality device is further configured to clear the recorded trajectory after the commit operation is completed, or clear the recorded trajectory after a gesture of revoking input is captured.
  34. The virtual reality system according to claim 32, characterized in that the virtual reality device is further configured to present, on the virtual surface, the trajectory produced while the input object contacts the virtual surface, and clear the trajectory presented on the virtual surface after the commit operation is completed.