CN112445325A - Virtual touch method and device, computer equipment and storage medium


Info

Publication number
CN112445325A
Authority
CN
China
Prior art keywords
virtual, information, touch, wavelength, control position
Legal status
Pending
Application number
CN201910822371.0A
Other languages
Chinese (zh)
Inventor
王志
毛信贤
Current Assignee
Nanchang OFilm Biometric Identification Technology Co Ltd
Original Assignee
Nanchang OFilm Biometric Identification Technology Co Ltd
Application filed by Nanchang OFilm Biometric Identification Technology Co Ltd
Priority to CN201910822371.0A
Publication of CN112445325A

Classifications

    • G06F 3/011: Input arrangements or combined input and output arrangements for interaction between user and computer; arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/043: Digitisers, e.g. for touch screens or touch pads, characterised by transducing means using propagating acoustic waves
    • G06F 2203/012: Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • G06F 2203/04108: Touchless 2D digitiser, i.e. a digitiser detecting the X/Y position of the input means (finger or stylus) when it is proximate to, but not touching, the interaction surface, without distance measurement in the Z direction


Abstract

The application relates to a virtual touch method, a virtual touch device, a computer device, and a storage medium. The method comprises the following steps: collecting motion information and ultrasonic wavelength data of a human body located in a detection medium space; parsing a touch position from the motion information, and determining whether the touch position is a virtual control position; when the touch position is detected to be a virtual control position, calculating the wavelength change characteristic corresponding to the touch position according to the ultrasonic wavelength data; and when the wavelength change characteristic matches a preset characteristic threshold, converting the virtual display information in the virtual display scene according to the virtual touch operation corresponding to the virtual control position. The method can improve the flexibility of touch operation in a virtual scene.

Description

Virtual touch method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular to a virtual touch method and apparatus, a computer device, and a storage medium.
Background
At present, in virtual scenes realized with technologies such as VR and AR, for example virtual games or the virtual playback of multimedia resources such as movies, users generally need to rely on the signal collectors of wearable or handheld devices to capture their control operations, for example through the acquisition of gravity-acceleration signals and electrical signals.
Such a device generally needs to provide many auxiliary buttons to carry a person's control commands. However, because the size of the device is limited and its layout must also match human movement habits, the number of keys on the device is limited, which ultimately results in low flexibility of touch operation in a virtual scene and a poor user experience.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a virtual touch method, apparatus, computer device, and storage medium capable of improving the flexibility of touch operations in a virtual scene.
A virtual touch method, the method comprising:
collecting motion information and ultrasonic wavelength data of a human body located in a detection medium space, wherein an information collection device is arranged in the detection medium space and is used for collecting the motion information and the ultrasonic wavelength data;
parsing a touch position from the motion information, and determining whether the touch position is a virtual control position;
when the touch position is detected to be a virtual control position, calculating a wavelength change characteristic corresponding to the touch position according to the ultrasonic wavelength data;
and when the wavelength change characteristic is matched with a preset characteristic threshold value, converting virtual display information in a virtual display scene according to virtual touch operation corresponding to the virtual control position.
In one embodiment, converting the virtual display information in the virtual display scene according to the virtual touch operation corresponding to the virtual control position includes:
acquiring conversion scene information corresponding to the virtual control position;
generating a virtual model according to the motion information;
and superposing the conversion scene information and the virtual model to generate virtual conversion information, and replacing the current virtual display information with the virtual conversion information.
In one embodiment, converting the virtual display information in the virtual display scene according to the virtual touch operation corresponding to the virtual control position includes:
acquiring virtual element information corresponding to the virtual control position;
and generating a virtual conversion model according to the motion information and the virtual element information, and replacing the virtual model in the current virtual display information with the virtual conversion model.
In one embodiment, the virtual control position is generated in the following manner:
acquiring a space coordinate of the human body in a detection medium space;
determining a first virtual coordinate of a virtual model corresponding to the human body in a virtual display scene according to the space coordinate;
acquiring a second virtual coordinate of a preset control in the virtual display scene, and calculating relative position information according to the first virtual coordinate and the second virtual coordinate;
and calculating the projection coordinate of the preset control in the detection medium space according to the space coordinate and the relative position information, and setting a virtual control position according to the projection coordinate.
In one embodiment, the parsing of the touch position from the motion information includes:
generating a three-dimensional model of the human body according to the motion information;
and identifying a triggerable part from the three-dimensional model, and parsing the space coordinates corresponding to the triggerable part into the touch position.
In one embodiment, the detecting whether the touch position is a virtual control position includes:
detecting whether the touch position is matched with the space coordinate of each virtual control position;
when a virtual control position with matching coordinates is detected, acquiring the preset part corresponding to the matched virtual control position, and determining whether the triggerable part matches the preset part;
and when the triggerable part matches the preset part, determining that the touch position is a virtual control position.
In one embodiment, calculating the wavelength change characteristic corresponding to the touch position according to the ultrasonic wavelength data includes:
extracting wavelength variation from the ultrasonic wavelength data, wherein the wavelength variation is the wavelength variation in each acquisition direction corresponding to the matched triggerable part within a preset time;
calculating the wavelength change characteristics in each acquisition direction according to the wavelength variation;
and obtaining the wavelength change characteristics of the triggerable part according to the wavelength change characteristics in each acquisition direction, and using the wavelength change characteristics as the wavelength change characteristics corresponding to the touch position.
A virtual touch device, the device comprising:
the information acquisition module is used for acquiring motion information and ultrasonic wavelength data of a human body located in a detection medium space, wherein an information collection device is arranged in the detection medium space and is used for collecting the motion information and the ultrasonic wavelength data;
the position analysis module is used for parsing a touch position from the motion information and detecting whether the touch position is a virtual control position;
the characteristic obtaining module is used for calculating the wavelength change characteristic corresponding to the touch position according to the ultrasonic wavelength data when the touch position is detected to be a virtual control position;
and the touch operation module is used for converting the virtual display information in the virtual display scene according to the virtual touch operation corresponding to the virtual control position when the wavelength change characteristic is matched with a preset characteristic threshold value.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the above method when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method.
According to the virtual touch method and device, the computer device, and the storage medium, motion information of the human body is collected contactlessly in the detection medium space; whether the touch position of the human body has reached a virtual control position is determined from the motion information; the wavelength change characteristic at the touch position is detected by emitting ultrasonic waves into the detection medium space; and whether the virtual control position has been triggered is determined from the wavelength change characteristic, so that the corresponding trigger operation is performed according to the detection result. Because the control keys of the virtual scene are converted into control positions in the detection medium space, the number of touch positions is not limited, and the user can perform touch operations through limb movements, which effectively improves the flexibility of touch operation in the virtual scene.
Drawings
FIG. 1 is an application scenario diagram of a virtual touch method in an embodiment;
FIG. 2 is a flowchart illustrating a virtual touch method according to an embodiment;
FIG. 3 is a schematic flow chart illustrating the virtual control position generation step in one embodiment;
FIG. 4 is a block diagram of a virtual touch device according to an embodiment;
FIG. 5 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The virtual touch method provided by the application can be applied to the application environment shown in FIG. 1. The host 102 communicates with the information collection device 104 and the virtual information display device 106 via a network. The detection medium space is defined by the detection range within which the information collection device 104 emits the imaging medium and ultrasonic waves. When a user plays a virtual game or takes part in another virtual scene, the human body must be inside the detection medium space. The host 102 sends the data of the virtual game or the virtual multimedia resource to the virtual information display device 106 for imaging display, and the user receives the picture through the virtual information display device 106 and performs game actions or control operations.
There may be a plurality of information collection devices 104. The devices obtain motion information of the human body by emitting an imaging optical medium such as infrared light and receiving the optical signals for human-body imaging, and they also collect, in real time, the ultrasonic wavelength data returned from the human body by transmitting ultrasonic waves and receiving the returned data. The information collection devices 104 transmit the motion information and ultrasonic wavelength data collected in real time in the detection medium space to the host 102 for data processing. The host 102 parses the touch position from the motion information and detects whether it is a virtual control position; when the touch position is detected to be a virtual control position, it calculates the wavelength change characteristic corresponding to the touch position according to the ultrasonic wavelength data; and when the wavelength change characteristic matches the preset characteristic threshold, it converts the virtual display information in the virtual display scene according to the virtual touch operation corresponding to the virtual control position. The host 102 then sends the converted virtual display information to the virtual information display device 106 for image display, and the user continues to play or operate according to the displayed pictures.
The host 102 may be a server or a terminal: a server may be an independent server or a cluster of servers, and a terminal may be a game machine, a personal computer, a notebook computer, a tablet computer, or the like. The information collection device 104 may include an imaging acquisition device, such as an infrared imaging device, that can emit and collect optical signals for human-body imaging, and an ultrasonic acquisition device that can emit ultrasonic waves and collect the returned ultrasonic wavelength data. The virtual information display device 106 may be a contact wearable device such as VR glasses or a helmet, or a projection device; the projection medium of the projection device is not limited and may be a curtain, a whiteboard, air, or the like.
In one embodiment, as shown in FIG. 2, a virtual touch method is provided. The method is described here as applied to the host 102 in FIG. 1 and includes the following steps:
step 210, collecting motion information and ultrasonic wave wavelength data of the human body in the detection medium space.
The detection medium space is the emission region in which the information collection devices emit the imaging medium and ultrasonic waves, and also the region within which the human body moves during virtual operation. The detection medium space is three-dimensional; a reasonable number of information collection devices can be placed at appropriate positions on its boundary so that imaging information and returned ultrasonic data can be collected at every position in the space. The information collection devices can include several imaging acquisition devices and ultrasonic acquisition devices, and the number and placement of the devices can be calculated and designed according to the required size of the actual detection medium space.
The information collection devices continuously emit the imaging medium and ultrasonic waves for detection, capture the imaging information and the returned ultrasonic information of the human body in real time, and return the captured information to the host in real time. The motion information is derived from collected optical signals, such as infrared signals, from which imaging information of the human body is generated according to the 3D imaging principle; it includes the position of each part of the human body in the detection medium space. The ultrasonic wavelength data are the returned ultrasonic wavelengths collected in real time in each spatial direction. The imaging processing of the optical signals can be performed in the information collection device, or the signals can be sent to the host for processing.
Further, a three-dimensional coordinate system may be established for the detection medium space, with a vertex of the space, the center of the space, or another position as the origin; the distance between adjacent coordinate points corresponds to the information sampling interval and may be chosen according to the required processing precision.
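To make the coordinate convention concrete, the following Python sketch snaps a measured position to the nearest sampling point of such a grid. The space dimensions, sampling interval, and helper name are illustrative assumptions, not values prescribed by the application.

```python
import numpy as np

# Hypothetical sketch: a 3 m x 3 m x 2.5 m detection medium space sampled
# every 5 mm, with the origin at one vertex of the space.
SPACE_SIZE = np.array([3.0, 3.0, 2.5])   # metres (assumed dimensions)
SAMPLING_INTERVAL = 0.005                # metres between adjacent grid points

def to_grid(position_m):
    """Snap a measured (x, y, z) position in metres to grid indices."""
    p = np.clip(np.asarray(position_m, dtype=float), 0.0, SPACE_SIZE)
    return tuple(int(i) for i in np.round(p / SAMPLING_INTERVAL))

# Example: a hand imaged at (1.20 m, 0.85 m, 1.43 m).
print(to_grid((1.20, 0.85, 1.43)))       # (240, 170, 286)
```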
Step 220, parsing the touch position from the motion information, and determining whether the touch position is a virtual control position.
The host parses a touch position from the motion information. The touch position is the spatial position, within the detection medium space, of a body part that can trigger a virtual scene control operation. Touch positions are defined according to the specific virtual scene: in some scenes every part of the human body can serve as a touch position, while in others only limbs such as the hands, feet, and head can.
The host generates a three-dimensional human body model from the spatial position information contained in the motion information, identifies a touch part from the model, and takes the spatial position corresponding to that part as the touch position.
The virtual control position is the spatial position, in the detection medium space, of the projection of a touch element shown in the image displayed by the virtual information display device. The touch element may be a touch key used for control operations, such as a selection key, a pause key, or a return key in a menu, or it may be an operable element in the scene image, such as a knife, a gun, or a stone that can be picked up.
The host detects whether the touch position is a virtual control position. One detection rule is to check whether the touch position intersects the virtual control position: if the two share spatial pixel points, the touch position is determined to be a virtual control position. Another rule is to measure the overlap volume formed by the intersecting spatial pixel points and to determine that the touch position is a virtual control position only if the overlap volume exceeds a preset volume threshold. Other detection rules may be adopted in other embodiments, as illustrated by the sketch below.
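As a minimal sketch of the overlap-volume rule, assume both positions are represented as sets of occupied grid points on the sampling grid described above; the point sets, threshold, and function name are illustrative rather than prescribed by the application.

```python
# Hypothetical sketch of the overlap-volume detection rule.
SAMPLING_INTERVAL = 0.005                  # metres; matches the space grid
VOXEL_VOLUME = SAMPLING_INTERVAL ** 3      # volume of one grid cell

def is_virtual_control_hit(touch_points, control_points,
                           volume_threshold_m3=1e-7):
    """Return True if the touch position overlaps a virtual control
    position by more than the preset volume threshold."""
    overlap = touch_points & control_points    # shared spatial grid points
    return len(overlap) * VOXEL_VOLUME > volume_threshold_m3

# Example: the touch and control point sets share two grid cells.
touch = {(240, 170, 286), (240, 171, 286), (241, 170, 286)}
control = {(240, 170, 286), (240, 171, 286), (250, 170, 286)}
print(is_virtual_control_hit(touch, control))   # True
```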
Step 230, when the touch position is detected to be a virtual control position, calculating the wavelength change characteristic corresponding to the touch position according to the ultrasonic wavelength data.
When the host detects that the touch position at the current moment is a virtual control position, that is, when a preset part of the human body touches a virtual touch element in the detection medium space, the host obtains the ultrasonic wavelength data collected within a preset time in the spatial direction corresponding to the touch position and calculates the wavelength change characteristic at the current moment from those data; the characteristic may be the wavelength variation per unit time. While the touch part moves, it accelerates and then decelerates until it stops. This motion produces a Doppler effect between the touch part and the ultrasonic source: different movement speeds produce different ultrasonic wavelength variations, so the motion state of the touch part, such as accelerating, decelerating, or stopped, can be inferred from the wavelength variation.
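The per-unit-time wavelength variation described above might be computed as in the following sketch; the sampling layout and example numbers are assumptions for illustration only.

```python
import numpy as np

def wavelength_change_feature(wavelengths, samples_per_unit_time):
    """Net wavelength change within each unit of time, computed from a
    series of returned ultrasonic wavelengths in one spatial direction."""
    w = np.asarray(wavelengths, dtype=float)
    n = (len(w) // samples_per_unit_time) * samples_per_unit_time
    blocks = w[:n].reshape(-1, samples_per_unit_time)
    return blocks[:, -1] - blocks[:, 0]   # last minus first sample per unit

# Example: the wavelength shrinks while the hand accelerates toward the
# sensor, then grows as it decelerates to a stop (Doppler effect).
series = np.concatenate([np.linspace(8.50, 8.38, 50),    # acceleration
                         np.linspace(8.38, 8.52, 50)])   # deceleration
print(wavelength_change_feature(series, samples_per_unit_time=25))
```

The negative values in the first half of the output correspond to acceleration and the positive values in the second half to deceleration, which is the pattern the threshold test below looks for.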
Step 240, when the wavelength change characteristic matches the preset characteristic threshold, converting the virtual display information in the virtual display scene according to the virtual touch operation corresponding to the virtual control position.
In this embodiment, the preset characteristic threshold is a wavelength-variation value used to determine whether the touch part has decelerated to a stop. In general, the ultrasonic wavelength changes abruptly at the start of acceleration and at the final deceleration to a stop: during acceleration the wavelength decreases, giving a negative variation, and during deceleration it increases, giving a positive one. The preset characteristic threshold can therefore be set to a relatively large positive value, and when the wavelength change characteristic is greater than or equal to that threshold it is determined to match. In this embodiment the computed characteristic is the wavelength variation at the current moment; in other embodiments, a continuous characteristic over a period before the current moment may be computed instead, the preset characteristic threshold may consist of the variation values of several change points, and matching is decided by comparing the variation trend of the characteristic with the trend of the values in the threshold.
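Both matching modes can be expressed as in the following sketch; the threshold, tolerance, and example values are illustrative assumptions.

```python
import numpy as np

PRESET_THRESHOLD = 0.05   # assumed positive wavelength-variation threshold

def matches_single(feature):
    """Single-value mode: decelerated to a stop if the variation is a
    sufficiently large positive value (the wavelength grew abruptly)."""
    return feature >= PRESET_THRESHOLD

def matches_trend(features, reference, tolerance=0.02):
    """Trend mode: compare a run of variation values against the preset
    change points, requiring the same sign and similar magnitude."""
    f, r = np.asarray(features, float), np.asarray(reference, float)
    if f.shape != r.shape:
        return False
    return bool(np.all((np.sign(f) == np.sign(r)) &
                       (np.abs(f - r) <= tolerance)))

print(matches_single(0.069))                          # True
print(matches_trend([-0.06, -0.06, 0.07, 0.07],
                    [-0.05, -0.05, 0.06, 0.06]))      # True
```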
When the host detects that the wavelength change characteristic matches the preset characteristic threshold, it obtains the virtual touch operation corresponding to the virtual control position and converts the virtual display information currently shown in the virtual information display device according to that operation, so that a control conversion of the display information can be triggered by the virtual action of the human body in the detection medium space. The conversion may involve switching the virtual scene, such as the background of a game picture or the image frames of a multimedia resource, or it may involve converting the virtual model generated from the collected 3D imaging data of the human body. If the touch element corresponding to the virtual control position is a touch key, the touch instruction of that key is executed; this usually switches the displayed virtual scene, for example entering the next game level, in which case the host obtains the switched virtual scene data pointed to by the touch instruction and generates the converted virtual display information from it.
In the virtual touch method, motion information of the human body is collected contactlessly in the detection medium space, whether the touch position of the human body has reached a virtual control position is determined from the motion information, the wavelength change characteristic at the touch position is detected by emitting ultrasonic waves into the detection medium space, and whether the virtual control position has been triggered is determined from the wavelength change characteristic, so that the corresponding trigger operation is performed according to the detection result. Because the control keys of the virtual scene are converted into control positions in the detection medium space, the number of touch positions is not limited, and the user can perform touch operations through limb movements, which effectively improves the flexibility of touch operation in the virtual scene.
In one embodiment, the step of converting the virtual display information in the virtual display scene according to the virtual touch operation corresponding to the virtual control position may include: acquiring conversion scene information corresponding to the virtual control position; generating a virtual model according to the motion information; and superimposing the conversion scene information and the virtual model to generate virtual conversion information, and replacing the current virtual display information with the virtual conversion information.
In this embodiment, the touch element corresponding to the virtual control position is a touch key, and the virtual display scene generally needs to be switched after the key is triggered. For example, the touch keys may be a level selection key, a game-character selection key, a return key, a pause key, or a fast-forward key; after a level is selected, the picture must jump to the virtual scene of that level, and after the return key is triggered, the picture must switch to the main-screen virtual scene, and so on. The host obtains the touch-key information corresponding to the virtual control position, such as the identifier and code of the key, and looks up the conversion scene information matching it; the conversion scene information contains the pixel values of the scene image to be displayed.
The host performs 3D imaging on the motion information to generate a three-dimensional virtual model of the human body, calculates the model position of the virtual model in the converted virtual scene from the spatial position of the human body recorded in the motion information, superimposes the virtual model onto the conversion scene information at that position to generate the final virtual conversion information, and sends the virtual conversion information to the virtual information display device for display. The user thus receives a new virtual scene picture: a virtual model of the human figure is built from the motion information and fused with the virtual scene, realizing interaction between the human body and the virtual information.
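Schematically, the key-triggered scene switch can be organized as below; the scene table and the model-building and compositing callables are stand-ins for whatever imaging and rendering pipeline is actually used.

```python
# Hypothetical sketch of switching scenes on a touch-key trigger.
SCENE_TABLE = {                       # touch-key id -> conversion scene info
    "level_2": "scene data for level 2",
    "main_menu": "scene data for the main screen",
}

def convert_display(key_id, motion_info, build_model, compose):
    """Look up the conversion scene for a triggered key, build the human
    virtual model from the motion information, and superimpose the two."""
    scene = SCENE_TABLE[key_id]        # conversion scene information
    model = build_model(motion_info)   # 3D virtual model of the human body
    return compose(scene, model)       # virtual conversion information

# Example with trivial stand-ins for the imaging and compositing steps.
result = convert_display("level_2",
                         motion_info={"hand": (1.2, 0.8, 1.4)},
                         build_model=lambda m: f"model at {m['hand']}",
                         compose=lambda scene, model: (scene, model))
print(result)
```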
In one embodiment, after the touch position is parsed from the motion information, if the touch position is found not to be a virtual control position, the host still generates a three-dimensional virtual model of the human body from the collected motion information and fuses it with the currently displayed virtual scene, so that the motion track of the human body is shown in the virtual scene in real time through continuous capture of the motion information. It should be noted that virtual display information that has not been converted by a trigger is not static: it may be animation or multimedia resource information, composed of image frames at multiple moments that are played dynamically over time.
In one embodiment, the step of converting the virtual display information in the virtual display scene according to the virtual touch operation corresponding to the virtual control position may include: acquiring virtual element information corresponding to the virtual control position; and generating a virtual conversion model according to the motion information and the virtual element information, and replacing the virtual model in the current virtual display information with the virtual conversion model.
The touch element corresponding to the virtual control position is an operable element in the display scene, generally an element that can be set in motion by a human action. In this embodiment, when triggered, the operable element is combined with the virtual model of the human body and becomes part of that model. For example, in a virtual war game the operable elements may be equipment scattered on the ground, such as a knife or a gun, or clothing, such as a hat or a cape; when the human body touches the equipment, a pick-up operation is triggered and the equipment is combined with the body, for instance held in the hand or tied at the waist.
The host obtains the virtual element information corresponding to the virtual control position, which may be the pixel information of the element's three-dimensional image. The host generates a three-dimensional virtual model of the human motion from the motion information, obtains the combination type corresponding to the virtual control position, superimposes the virtual element information on the three-dimensional model according to that combination type to generate a virtual conversion model, and then replaces and updates the virtual model in the virtual scene. For example, if the virtual element is a knife and the combination type is hand-held, a virtual conversion model of a figure holding the knife is generated.
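One way to express the combination-type dispatch is sketched below; the combination types and attachment points are invented for illustration.

```python
# Hypothetical sketch: attach a picked-up element to the body model
# according to its combination type.
ATTACHMENT_POINTS = {              # combination type -> body part (assumed)
    "hand_held": "right_hand",
    "waist_tied": "waist",
    "worn_on_head": "head",
}

def combine(body_model, element, combination_type):
    """Return a virtual conversion model: the body model with the element
    attached at the point implied by the combination type."""
    part = ATTACHMENT_POINTS[combination_type]
    converted = dict(body_model)               # copy; keep the input intact
    converted.setdefault("attachments", []).append((part, element))
    return converted

model = {"parts": ["head", "right_hand", "waist"]}
print(combine(model, element="knife", combination_type="hand_held"))
```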
In one embodiment, touching a virtual element may instead trigger an operation on another scene element rather than combining the element with the virtual model of the human body; for example, if the touched virtual element is a switch, turning scene elements such as a light or a door on or off is triggered.
In one embodiment, as shown in FIG. 3, the virtual control position is generated through the following steps:
in step 310, the space coordinates of the human body in the detection medium space are obtained.
Here the virtual touch element is a preset control whose relative position in the displayed virtual scene is determined in advance, while the specific position at which it is projected into the detection medium space is set according to the relative position of the human body in that space. The host obtains the space coordinates of the human body in the detection medium space in real time, for example the space coordinates of each position on the body; a three-dimensional coordinate system is established for the detection medium space in advance, and a space coordinate can be expressed in the form (x, y, z). Specifically, the space coordinates of a fixed position on the human body can be detected by optical-signal imaging.
Step 320, determining a first virtual coordinate of the virtual model corresponding to the human body in the virtual display scene according to the space coordinates.
The host maps the detection medium space onto the virtual display scene and maps the acquired space coordinates to first virtual coordinates in the virtual display scene, that is, the virtual coordinates of the virtual model obtained by 3D imaging from the spatial position information of the human body.
Step 330, acquiring a second virtual coordinate of a preset control in the virtual display scene, and calculating relative position information according to the first virtual coordinate and the second virtual coordinate.
The host obtains the second virtual coordinates of the preset controls in the virtual display scene; one virtual display scene can contain several preset controls. The host calculates the relative position information from the first and second virtual coordinates, for example as the relative distance between each position on the preset control and a fixed position on the virtual model such as the head, a foot, or the centre point; the relative position information may also be calculated in other ways.
Step 340, calculating the projection coordinates of the preset control in the detection medium space according to the space coordinates and the relative position information, and setting the virtual control position according to the projection coordinates.
From the space coordinates of the human body and the relative position information, the host calculates the space coordinates of the corresponding projection position, that is, the projection coordinates. The projection coordinates form a coordinate set containing the coordinates of every point of the preset control's projection in the detection medium space, and the host sets the spatial positions corresponding to these coordinates as the virtual control position.
In this embodiment, the projection position of the scene's preset control in the detection medium space can be adjusted continuously according to the real-time position of the human body in the space, so that touch triggering is more accurate.
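Steps 310 to 340 can be combined into a minimal sketch, assuming a simple uniform scale between the detection medium space and the virtual scene; the mapping, scale factor, and example coordinates are assumptions rather than anything fixed by the application.

```python
import numpy as np

SCALE = 2.0   # assumed uniform scale: virtual-scene units per metre

def to_virtual(space_xyz):
    """Map detection-space coordinates into the virtual display scene."""
    return np.asarray(space_xyz, dtype=float) * SCALE

def from_virtual(virtual_xyz):
    """Inverse mapping, from the virtual scene back into the space."""
    return np.asarray(virtual_xyz, dtype=float) / SCALE

def virtual_control_position(body_space_xyz, control_virtual_xyz):
    """Steps 310-340: project a preset control into the detection medium
    space relative to the tracked position of the human body."""
    first = to_virtual(body_space_xyz)                   # step 320
    relative = np.asarray(control_virtual_xyz) - first   # step 330
    return np.asarray(body_space_xyz) + from_virtual(relative)  # step 340

body = (1.2, 0.8, 1.4)        # step 310: body position in the space (m)
control = (3.0, 2.0, 3.2)     # second virtual coordinate of the control
print(virtual_control_position(body, control))   # projection coordinates
```

Mapping the projected position back through to_virtual recovers the control's second virtual coordinate, which is what keeps the projection consistent as the body moves.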
In one embodiment, the step of parsing the touch position from the motion information may include: generating a three-dimensional model of the human body according to the motion information; and identifying a triggerable part from the three-dimensional model, and parsing the space coordinates corresponding to the triggerable part into the touch position.
In this embodiment, the triggerable part is a body part capable of triggering the virtual preset control in the detection medium space; it is a specific part of the body rather than the whole of it, typically a part that moves easily such as a hand, a foot, or the head. The host can designate one or more parts as triggerable, for example only the hand, or several parts such as the hand and the head. The host can store in advance three-dimensional part models of common trigger postures, such as models of a clenched fist or a kicking foot. From the motion information, the host performs 3D imaging of the human body to generate a three-dimensional model, identifies the triggerable part in that model against the pre-stored part models, obtains from the motion information the space coordinates corresponding to the triggerable part, which form the coordinate set of each position of the part, and parses those coordinates into the touch position.
In this embodiment, only one or a few parts of the human body are set as touch parts and only those parts are checked, which makes touch detection more accurate and eliminates interference, as in the sketch below.
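A schematic of the template-matching step follows; the similarity score and posture templates are placeholders that any real 3D shape-matching method could fill in.

```python
# Hypothetical sketch: identify a triggerable part in the body model by
# comparing candidate regions against pre-stored posture templates.
TEMPLATES = {"fist": "fist template", "kick": "kick template"}

def find_triggerable_part(body_model, similarity, threshold=0.8):
    """Return (part_name, space_coordinates) of the best-matching
    triggerable part, or None if nothing matches well enough."""
    best = None
    for part_name, region in body_model.items():
        for template in TEMPLATES.values():
            score = similarity(region["points"], template)
            if score >= threshold and (best is None or score > best[0]):
                best = (score, part_name, region["points"])
    if best is None:
        return None
    _, part_name, points = best
    return part_name, points   # the coordinates become the touch position

def toy_similarity(points, template):
    # Stand-in for a real 3D shape-matching score.
    return 0.9 if points[0][2] > 100 else 0.1

model = {"right_hand": {"points": [(240, 170, 286)]},
         "left_foot": {"points": [(100, 60, 10)]}}
print(find_triggerable_part(model, toy_similarity))
```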
In one embodiment, the step of detecting whether the touch position is a virtual control position may include: detecting whether the touch position matches the space coordinates of each virtual control position; when a virtual control position with matching coordinates is detected, acquiring the preset part corresponding to that virtual control position and determining whether the triggerable part matches the preset part; and when the triggerable part matches the preset part, determining that the touch position is a virtual control position.
There may be several preset controls and thus several virtual control positions; likewise there may be several triggerable parts and thus several touch positions. Specifically, the host matches each identified touch position against the space coordinates of each virtual control position, using the matching determination rules described in the embodiment above, which are not repeated here.
When the host determines that a virtual control position matches the touch position, it obtains the preset part corresponding to that virtual control position; the preset part is the only body part allowed to trigger the preset control at that position. For example, if the virtual control position can only be triggered by the head, the preset part is the head. The host then checks whether the coordinate-matched triggerable part matches the preset part; if it does, the touch position is determined to be a virtual control position. If no coordinate-matched virtual control position exists, or one exists but the preset part does not match, the touch position is determined not to be a virtual control position.
In this embodiment, setting a preset trigger part for each virtual control position gives finer control over virtual touch and improves control precision; the sketch below combines both checks.
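The two-stage check, coordinates first and then the preset part, can be written as follows; the data layout and names are illustrative.

```python
# Hypothetical sketch of the two-stage virtual-control-position check.
def check_touch(touch_part, touch_points, controls, coords_match):
    """controls: list of (control_points, preset_part) pairs. Returns True
    when the touch position matches a control position by coordinates and
    the triggerable part matches the control's preset trigger part."""
    for control_points, preset_part in controls:
        if coords_match(touch_points, control_points):   # stage 1: coords
            return touch_part == preset_part             # stage 2: part
    return False

controls = [({(240, 170, 286)}, "right_hand"),
            ({(10, 10, 10)}, "head")]
print(check_touch("right_hand",
                  {(240, 170, 286), (241, 170, 286)},
                  controls,
                  coords_match=lambda a, b: len(a & b) > 0))   # True
```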
In one embodiment, the step of calculating the wavelength change characteristic corresponding to the touch position according to the ultrasonic wavelength data may include: extracting the wavelength variation from the ultrasonic wavelength data, where the wavelength variation is the variation, within a preset time, in each acquisition direction corresponding to the matched triggerable part; calculating the wavelength change characteristics in each acquisition direction according to the wavelength variation; and obtaining the wavelength change characteristic of the triggerable part from the characteristics of the acquisition directions and using it as the wavelength change characteristic corresponding to the touch position.
In this embodiment, when the wavelength change characteristic is calculated, the ultrasonic wavelength data extracted are those corresponding to the triggerable part within the preset time, which narrows the range of data on which the characteristic is computed; the position of the triggerable part at each moment can be determined from the collected imaging data, which makes it easy to extract the wavelength data for that position. The preset time must be long enough to reflect the change of wavelength. The host calculates the wavelength variation in each ultrasonic acquisition direction within the preset time, for example by setting a unit time length and computing the variation within each unit; it then derives the wavelength change characteristic from the several variation values, such as their rate of change. Finally, the host sums the change characteristics of the acquisition directions as vectors, for example over the three spatial coordinate-axis directions, to obtain the overall wavelength change characteristic of the triggerable part.
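Restating the per-unit-time helper so the sketch is self-contained, the per-direction computation and the final vector sum over the three coordinate-axis directions might look as follows; all names and values are illustrative assumptions.

```python
import numpy as np

def unit_time_deltas(series, samples_per_unit):
    """Net wavelength change within each unit of time (one direction)."""
    w = np.asarray(series, dtype=float)
    n = (len(w) // samples_per_unit) * samples_per_unit
    blocks = w[:n].reshape(-1, samples_per_unit)
    return blocks[:, -1] - blocks[:, 0]

def overall_feature(wavelengths_by_axis, samples_per_unit):
    """Per-direction change characteristics (here: the mean step between
    successive unit-time variations) combined over the x, y, and z
    acquisition directions by taking the magnitude of the vector."""
    components = []
    for axis in ("x", "y", "z"):
        deltas = unit_time_deltas(wavelengths_by_axis[axis], samples_per_unit)
        components.append(np.mean(np.diff(deltas)))   # rate of change
    return float(np.linalg.norm(components))

data = {axis: np.concatenate([np.linspace(8.50, 8.38, 50),
                              np.linspace(8.38, 8.52, 50)])
        for axis in ("x", "y", "z")}
print(overall_feature(data, samples_per_unit=25))
```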
It should be understood that although the steps in the flowcharts of FIGS. 2 and 3 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly ordered and may be performed in other orders. Moreover, at least some of the steps in FIGS. 2 and 3 may comprise several sub-steps or stages that are not necessarily completed at the same moment but may be executed at different moments, and these sub-steps or stages are not necessarily executed sequentially but may be executed in turn or in alternation with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 4, a virtual touch device is provided, including an information acquisition module 410, a position analysis module 420, a characteristic obtaining module 430, and a touch operation module 440, where:
the information acquisition module 410 is configured to acquire motion information and ultrasonic wavelength data of a human body located in a detection medium space, where an information acquisition device is disposed in the detection medium space, and the information acquisition device is configured to acquire the motion information and the ultrasonic wavelength data.
The position analysis module 420 is configured to parse the touch position from the motion information and detect whether the touch position is a virtual control position.
The characteristic obtaining module 430 is configured to calculate a wavelength variation characteristic corresponding to the touch position according to the ultrasonic wavelength data when the touch position is detected to be the virtual control position.
The touch operation module 440 is configured to, when it is detected that the wavelength variation characteristic matches the preset characteristic threshold, convert the virtual display information in the virtual display scene according to the virtual touch operation corresponding to the virtual control position.
In one embodiment, the touch operation module 440 may include:
and a scene acquisition unit for acquiring transition scene information corresponding to the virtual control position.
And a model generation unit for generating a virtual model according to the motion information.
And the superposition conversion unit is used for superposing the conversion scene information and the virtual model to generate virtual conversion information and replacing the current virtual display information with the virtual conversion information.
In one embodiment, a virtual touch device may include:
and the coordinate acquisition module is used for acquiring the space coordinates of the human body in the detection medium space.
And the coordinate projection module is used for determining a first virtual coordinate of the virtual model corresponding to the human body in the virtual display scene according to the space coordinate.
And the relative calculation module is used for acquiring a second virtual coordinate of the preset control in the virtual display scene and calculating relative position information according to the first virtual coordinate and the second virtual coordinate.
And the position setting module is used for calculating the projection coordinates of the preset control in the detection medium space according to the space coordinates and the relative position information, and setting the virtual control position according to the projection coordinates.
In one embodiment, the position analysis module 420 may include:
a model generation unit for generating a three-dimensional model of the human body from the motion information;
and the part identification unit is used for identifying the triggerable part from the three-dimensional model and analyzing the space coordinate corresponding to the triggerable part into a touch position.
In one embodiment, the position analysis module 420 may further include:
and the coordinate matching unit is used for detecting whether the touch position is matched with the space coordinate of each virtual control position.
And the part matching unit is used for acquiring a preset part corresponding to the matched virtual control position when the virtual control position matched with the coordinates is detected, and judging whether the triggerable part is matched with the preset part or not.
And the matching determination unit is used for determining that the touch position is a virtual control position when the triggerable part matches the preset part.
In one embodiment, the characteristic obtaining module 430 may include:
and the wavelength extraction unit is used for extracting wavelength variation from the ultrasonic wavelength data, wherein the wavelength variation is the wavelength variation in each acquisition direction corresponding to the matched triggerable part within the preset time.
And the direction characteristic calculation unit is used for calculating the wavelength change characteristics in each acquisition direction according to the wavelength variation.
And the change characteristic calculation unit is used for obtaining the wavelength change characteristic of the triggerable part according to the wavelength change characteristic in each acquisition direction and using the wavelength change characteristic as the wavelength change characteristic corresponding to the touch position.
For specific limitations of the virtual touch device, reference may be made to the limitations of the virtual touch method above, which are not repeated here. Each module in the virtual touch device can be implemented in whole or in part by software, hardware, or a combination of the two. The modules can be embedded in hardware form in, or independent of, the processor of the computer device, or stored in software form in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a controller whose internal structure may be as shown in FIG. 5. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store virtual touch data. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a virtual touch method.
Those skilled in the art will appreciate that the structure shown in FIG. 5 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, a computer device is provided, comprising a memory storing a computer program and a processor that implements the following steps when executing the computer program: collecting motion information and ultrasonic wavelength data of a human body located in a detection medium space; parsing a touch position from the motion information, and determining whether the touch position is a virtual control position; when the touch position is detected to be a virtual control position, calculating the wavelength change characteristic corresponding to the touch position according to the ultrasonic wavelength data; and when the wavelength change characteristic matches the preset characteristic threshold, converting the virtual display information in the virtual display scene according to the virtual touch operation corresponding to the virtual control position.
In one embodiment, when the processor executes the computer program to perform the step of converting the virtual display information in the virtual display scene according to the virtual touch operation corresponding to the virtual control position, the processor is further configured to: acquire conversion scene information corresponding to the virtual control position; generate a virtual model according to the motion information; and superimpose the conversion scene information and the virtual model to generate virtual conversion information, and replace the current virtual display information with the virtual conversion information.
In one embodiment, when the processor executes the computer program to perform the step of converting the virtual display information in the virtual display scene according to the virtual touch operation corresponding to the virtual control position, the processor is further configured to: acquire virtual element information corresponding to the virtual control position; and generate a virtual conversion model according to the motion information and the virtual element information, and replace the virtual model in the current virtual display information with the virtual conversion model.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring a space coordinate of a human body in a detection medium space; determining a first virtual coordinate of a virtual model corresponding to the human body in a virtual display scene according to the space coordinate; acquiring a second virtual coordinate of a preset control in the virtual display scene, and calculating relative position information according to the first virtual coordinate and the second virtual coordinate; and calculating the projection coordinates of the preset control in the detection medium space according to the space coordinates and the relative position information, and setting a virtual control position according to the projection coordinates.
In one embodiment, when the processor executes the computer program to perform the step of parsing the touch position from the motion information, the processor is further configured to: generate a three-dimensional model of the human body according to the motion information; and identify a triggerable part from the three-dimensional model, and parse the space coordinates corresponding to the triggerable part into the touch position.
In one embodiment, the processor, when executing the computer program, further performs the step of detecting whether the touch position is a virtual control position, further: detecting whether the touch position is matched with the space coordinate of each virtual control position; when the virtual control position matched with the coordinates is detected, acquiring a preset part corresponding to the matched virtual control position, and judging whether the triggerable part is matched with the preset part or not; and when the triggerable part is matched with the preset part, judging that the touch position is a virtual control position.
In one embodiment, when the processor executes the computer program to perform the step of calculating the wavelength variation characteristic corresponding to the touch position according to the ultrasonic wavelength data, the processor is further configured to: extracting wavelength variation from the ultrasonic wavelength data, wherein the wavelength variation is the wavelength variation in each acquisition direction corresponding to the matched triggerable part within a preset time; calculating the wavelength variation characteristics in each acquisition direction according to the wavelength variation; and obtaining the wavelength change characteristics of the triggerable part according to the wavelength change characteristics in each acquisition direction, and using the wavelength change characteristics as the wavelength change characteristics corresponding to the touch position.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon which, when executed by a processor, implements the following steps: collecting motion information and ultrasonic wavelength data of a human body located in a detection medium space; parsing a touch position from the motion information, and determining whether the touch position is a virtual control position; when the touch position is detected to be a virtual control position, calculating the wavelength change characteristic corresponding to the touch position according to the ultrasonic wavelength data; and when the wavelength change characteristic matches the preset characteristic threshold, converting the virtual display information in the virtual display scene according to the virtual touch operation corresponding to the virtual control position.
In one embodiment, when the computer program is executed by the processor to perform the step of converting the virtual display information in the virtual display scene according to the virtual touch operation corresponding to the virtual control position, the computer program is further configured to: acquire conversion scene information corresponding to the virtual control position; generate a virtual model according to the motion information; and superimpose the conversion scene information and the virtual model to generate virtual conversion information, and replace the current virtual display information with the virtual conversion information.
In one embodiment, when the computer program is executed by the processor to perform the step of converting the virtual display information in the virtual display scene according to the virtual touch operation corresponding to the virtual control position, the following steps are further performed: acquiring virtual element information corresponding to the virtual control position; and generating a virtual conversion model according to the action information and the virtual element information, and replacing the virtual model in the current virtual display information with the virtual conversion model.
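Likewise, the element-conversion branch can be sketched as below; attaching the virtual element information to the model under an "equipment" field is purely an illustrative choice:

```python
def convert_model(scene, ctrl, action_info):
    """Replace the virtual model in the current display information with a
    conversion model built from the action and virtual element information."""
    conversion_model = {"keypoints": action_info["keypoints"],
                        "equipment": ctrl["element_info"]}   # model plus virtual element
    scene["display_info"]["model"] = conversion_model        # swap the displayed model
    return scene
```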
In one embodiment, the computer program, when executed by the processor, further performs the steps of: acquiring a space coordinate of a human body in a detection medium space; determining a first virtual coordinate of a virtual model corresponding to the human body in a virtual display scene according to the space coordinate; acquiring a second virtual coordinate of a preset control in the virtual display scene, and calculating relative position information according to the first virtual coordinate and the second virtual coordinate; and calculating the projection coordinates of the preset control in the detection medium space according to the space coordinates and the relative position information, and setting a virtual control position according to the projection coordinates.
In one embodiment, when the computer program is executed by the processor to perform the step of analyzing the touch position from the action information, the following steps are further performed: generating a three-dimensional model of the human body according to the action information; and identifying a triggerable part from the three-dimensional model, and resolving the space coordinate corresponding to the triggerable part as the touch position.
In one embodiment, when the computer program is executed by the processor to perform the step of detecting whether the touch position is a virtual control position, the following steps are further performed: detecting whether the touch position matches the space coordinate of each virtual control position; when a virtual control position with matching coordinates is detected, acquiring the preset part corresponding to the matched virtual control position, and judging whether the triggerable part matches the preset part; and when the triggerable part matches the preset part, judging that the touch position is a virtual control position.
In one embodiment, when the computer program is executed by the processor to perform the step of calculating the wavelength change characteristic corresponding to the touch position according to the ultrasonic wavelength data, the following steps are further performed: extracting wavelength variations from the ultrasonic wavelength data, wherein the wavelength variations are the changes in wavelength, within a preset time, in each acquisition direction corresponding to the matched triggerable part; calculating the wavelength change characteristic in each acquisition direction according to the wavelength variations; and obtaining the wavelength change characteristic of the triggerable part from the characteristics in the acquisition directions, as the wavelength change characteristic corresponding to the touch position.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing related hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, any combination of them that contains no contradiction should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and all of these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A virtual touch method, the method comprising:
acquiring action information and ultrasonic wavelength data of a human body in a detection medium space, wherein an information acquisition device is arranged in the detection medium space and is used for acquiring the action information and the ultrasonic wavelength data;
analyzing a touch position from the action information, and judging whether the touch position is a virtual control position;
when the touch position is detected to be the virtual control position, calculating a wavelength change characteristic corresponding to the touch position according to the ultrasonic wavelength data;
and when the wavelength change characteristic matches a preset characteristic threshold, converting virtual display information in a virtual display scene according to a virtual touch operation corresponding to the virtual control position.
2. The method according to claim 1, wherein converting the virtual display information in the virtual display scene according to the virtual touch operation corresponding to the virtual control position comprises:
acquiring conversion scene information corresponding to the virtual control position;
generating a virtual model according to the action information;
and superposing the conversion scene information and the virtual model to generate virtual conversion information, and replacing the current virtual display information with the virtual conversion information.
3. The method according to claim 1, wherein converting the virtual display information in the virtual display scene according to the virtual touch operation corresponding to the virtual control position comprises:
acquiring virtual element information corresponding to the virtual control position;
and generating a virtual conversion model according to the action information and the virtual element information, and replacing the virtual model in the current virtual display information with the virtual conversion model.
4. The method of claim 1, wherein generating the virtual control position comprises:
acquiring the space coordinate of the human body in the detection medium space;
determining a first virtual coordinate of a virtual model corresponding to the human body in a virtual display scene according to the space coordinate;
acquiring a second virtual coordinate of a preset control in the virtual display scene, and calculating relative position information according to the first virtual coordinate and the second virtual coordinate;
and calculating the projection coordinate of the preset control in the detection medium space according to the space coordinate and the relative position information, and setting a virtual control position according to the projection coordinate.
5. The method of claim 1, wherein analyzing the touch position from the action information comprises:
generating a three-dimensional model of the human body according to the action information;
and identifying a triggerable part from the three-dimensional model, and resolving the space coordinate corresponding to the triggerable part as the touch position.
6. The method of claim 5, wherein detecting whether the touch position is a virtual control position comprises:
detecting whether the touch position matches the space coordinate of each virtual control position;
when a virtual control position with matching coordinates is detected, acquiring the preset part corresponding to the matched virtual control position, and judging whether the triggerable part matches the preset part;
and when the triggerable part matches the preset part, judging that the touch position is a virtual control position.
7. The method of claim 6, wherein calculating the wavelength change characteristic corresponding to the touch position according to the ultrasonic wavelength data comprises:
extracting wavelength variations from the ultrasonic wavelength data, wherein the wavelength variations are the changes in wavelength, within a preset time, in each acquisition direction corresponding to the matched triggerable part;
calculating the wavelength change characteristic in each acquisition direction according to the wavelength variations;
and obtaining the wavelength change characteristic of the triggerable part from the characteristics in the acquisition directions, as the wavelength change characteristic corresponding to the touch position.
8. A virtual touch device, the device comprising:
an information acquisition module, configured to acquire action information and ultrasonic wavelength data of a human body in a detection medium space, wherein an information acquisition device is arranged in the detection medium space and is used for acquiring the action information and the ultrasonic wavelength data;
a position analyzing module, configured to analyze a touch position from the action information and judge whether the touch position is a virtual control position;
a characteristic obtaining module, configured to calculate a wavelength change characteristic corresponding to the touch position according to the ultrasonic wavelength data when the touch position is detected to be a virtual control position;
and a touch operation module, configured to convert virtual display information in a virtual display scene according to a virtual touch operation corresponding to the virtual control position when the wavelength change characteristic matches a preset characteristic threshold.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN201910822371.0A 2019-09-02 2019-09-02 Virtual touch method and device, computer equipment and storage medium Pending CN112445325A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910822371.0A CN112445325A (en) 2019-09-02 2019-09-02 Virtual touch method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112445325A 2021-03-05

Family

ID=74734808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910822371.0A Pending CN112445325A (en) 2019-09-02 2019-09-02 Virtual touch method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112445325A (en)

Similar Documents

Publication Publication Date Title
US8856691B2 (en) Gesture tool
KR101658937B1 (en) Gesture shortcuts
US8428306B2 (en) Information processor and information processing method for performing process adapted to user motion
US9734393B2 (en) Gesture-based control system
US9075434B2 (en) Translating user motion into multiple object responses
US8866898B2 (en) Living room movie creation
KR101679442B1 (en) Standard gestures
JP2012525643A5 (en)
US8998718B2 (en) Image generation system, image generation method, and information storage medium
US20140362188A1 (en) Image processing device, image processing system, and image processing method
CN108200334B (en) Image shooting method and device, storage medium and electronic equipment
KR101082829B1 (en) The user interface apparatus and method for 3D space-touch using multiple imaging sensors
US20140139429A1 (en) System and method for computer vision based hand gesture identification
CN104364733A (en) Position-of-interest detection device, position-of-interest detection method, and position-of-interest detection program
KR20110139694A (en) Method and system for gesture recognition
US20120053015A1 (en) Coordinated Motion and Audio Experience Using Looped Motions
KR20120051659A (en) Auto-generating a visual representation
TWI528224B (en) 3d gesture manipulation method and apparatus
KR20120068253A (en) Method and apparatus for providing response of user interface
JP2017530447A (en) System and method for inputting a gesture in a 3D scene
KR20150094680A (en) Target and press natural user input
TW201428545A (en) Input device, apparatus, input method, and recording medium
KR101779564B1 (en) Method and Apparatus for Motion Recognition
JP2005056059A (en) Input device and method using head mounting type display equipped with image pickup part
WO2018006481A1 (en) Motion-sensing operation method and device for mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
CB02 Change of applicant information

Address after: 330096 No.699 Tianxiang North Avenue, Nanchang hi tech Industrial Development Zone, Nanchang City, Jiangxi Province

Applicant after: Jiangxi OMS Microelectronics Co.,Ltd.

Address before: 330096 No.699 Tianxiang North Avenue, Nanchang hi tech Industrial Development Zone, Nanchang City, Jiangxi Province

Applicant before: OFilm Microelectronics Technology Co.,Ltd.

Address after: 330096 No.699 Tianxiang North Avenue, Nanchang hi tech Industrial Development Zone, Nanchang City, Jiangxi Province

Applicant after: OFilm Microelectronics Technology Co.,Ltd.

Address before: 330029 No. 1189 Jingdong Avenue, Nanchang high tech Zone, Jiangxi

Applicant before: NANCHANG OFILM BIO-IDENTIFICATION TECHNOLOGY Co.,Ltd.

WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210305