WO2018161906A1 - Motion recognition method, device and system, and storage medium - Google Patents

Motion recognition method, device and system, and storage medium

Info

Publication number
WO2018161906A1
WO2018161906A1 (PCT/CN2018/078215)
Authority
WO
WIPO (PCT)
Prior art keywords
action
motion
mobile terminal
data
sensing data
Prior art date
Application number
PCT/CN2018/078215
Other languages
English (en)
Chinese (zh)
Inventor
万猛
荆彦青
魏学峰
曹文升
耿天平
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Publication of WO2018161906A1 publication Critical patent/WO2018161906A1/fr

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Definitions

  • the present application relates to the field of Internet technologies, and in particular, to a motion recognition method, apparatus, system, and storage medium.
  • A motion sensing game (somatosensory game) is, as the name suggests, a video game operated by body movement. Breaking away from the previous operation mode of simply pressing buttons on a handheld controller, the somatosensory game is a new type of electronic game that the user operates through changes in limb movement.
  • the current somatosensory game mode usually requires a dedicated somatosensory game console, which recognizes the user's movements by capturing the changes in the user's limb movements on screen through a depth camera.
  • the embodiment of the invention provides a motion recognition method, and the method includes:
  • the motion sensing data including acceleration sensing data or gyro sensing data
  • an embodiment of the present invention further provides a motion recognition apparatus, including a memory and a processor, wherein the memory stores computer readable instructions, and the processor executes the computer readable instructions stored in the memory to:
  • the motion sensing data including acceleration sensing data or gyro sensing data
  • an embodiment of the present invention further provides a mobile terminal, where the mobile terminal includes a memory and a processor, wherein the memory stores computer readable instructions, and the processor executes the computer readable instructions stored in the memory to:
  • motion sensing data of the mobile terminal includes acceleration sensing data or gyro sensing data
  • transmit the motion sensing data to the motion recognition device, so that the motion recognition device acquires the first action feature data according to the motion sensing data of the mobile terminal and compares the first action feature data with the preset at least one second action feature data, to determine, from the known actions corresponding to the at least one second action feature data, a known action as the action currently performed by the mobile terminal.
  • an embodiment of the present invention further provides a motion recognition system, including a motion recognition apparatus and at least one mobile terminal, where:
  • the mobile terminal is configured to collect motion sensing data of the mobile terminal by using a built-in sensor, and send the motion sensing data to the motion recognition device, where the motion sensing data includes acceleration sensing data or gyroscope sensing data;
  • the motion recognition device is configured to receive the motion sensing data sent by the mobile terminal, acquire first action feature data according to the motion sensing data of the mobile terminal, and compare the first action feature data with the preset at least one second action feature data, to determine, from the known actions corresponding to the at least one second action feature data, a known action as the action currently performed by the mobile terminal.
  • the embodiment of the present application further provides a non-transitory computer readable storage medium storing computer readable instructions, which may cause at least one processor to perform the method described above.
  • FIG. 1A is a schematic structural diagram of a motion recognition system in an embodiment of the present invention.
  • FIG. 1B is a schematic flow chart of a motion recognition method in the architecture of the motion recognition system shown in FIG. 1A;
  • FIG. 2 is a schematic flow chart of a motion recognition method in the architecture of the motion recognition system shown in FIG. 1A;
  • FIG. 3 is a schematic diagram of actions performed by a mobile terminal in an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of another action performed by a mobile terminal in an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of another action performed by a mobile terminal in an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a motion recognition system in another embodiment of the present invention.
  • FIG. 7 is a schematic flow chart of a motion recognition method in the architecture of the motion recognition system shown in FIG. 6;
  • FIG. 8 is a schematic structural diagram of a motion recognition apparatus according to an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of a sensing data acquiring module in an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of a motion recognition module according to an embodiment of the present invention.
  • FIG. 11 is a schematic structural diagram of a hardware entity of a motion recognition apparatus according to an embodiment of the present invention.
  • FIG. 12 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
  • FIG. 13 is a schematic structural diagram of a hardware entity of a mobile terminal according to an embodiment of the present invention.
  • FIG. 1A is a schematic structural diagram of a motion recognition system according to an embodiment of the present invention. As shown in FIG. 1A, a motion recognition apparatus 102 and at least one mobile terminal 101 are included, where:
  • the mobile terminal 101 is configured to collect motion sensing data of the mobile terminal by using a built-in sensor, and send the motion sensing data to the motion recognition device 102, wherein the motion sensing data includes acceleration sensing data and / or gyroscope sensing data.
  • the mobile terminal 101 mentioned in the embodiment of the present invention may include a mobile phone, a tablet computer, an e-reader, a wearable smart device, etc.
  • the action recognition device in this embodiment may be implemented in a personal computer, a tablet computer, an e-reader, a notebook.
  • a wireless data transmission channel may be established between the mobile terminal 101 and the motion recognition device 102 for transmitting the motion sensing data of the mobile terminal 101; the wireless data transmission channel may be, for example, Wi-Fi, Bluetooth, or a mobile communication network (e.g. 2G/3G/4G/5G).
  • the motion recognition device 102 is configured to receive the motion sensing data sent by the mobile terminal 101, acquire user action feature data according to the motion sensing data of the mobile terminal 101, and compare the currently acquired user action feature data with the preset at least one known action feature data, to determine a known action among the known actions corresponding to the at least one known action feature data as the action currently performed by the mobile terminal 101.
  • FIG. 1B is a schematic flow chart of a motion recognition method in the architecture of the motion recognition system shown in FIG. 1A. As shown in FIG. 1B, the motion recognition method is performed by the motion recognition apparatus 102, and includes the following steps:
  • Step 101b Acquire motion sensing data of the mobile terminal, the motion sensing data including acceleration sensing data and/or gyroscope sensing data.
  • the motion sensing data of the mobile terminal includes data collected by the detecting device of the mobile terminal and describing an action of the mobile terminal.
  • the detecting device may include an acceleration sensor and/or a gyro sensor.
  • Step 102b Acquire first action feature data according to motion sensing data of the mobile terminal.
  • the first action feature data may also be referred to as user action feature data.
  • Step 103b Compare the first action feature data with the preset at least one second action feature data to determine, from the known actions corresponding to the at least one second action feature data, a known action as the action currently performed by the mobile terminal.
  • the second action feature data may also be referred to as known action feature data.
  • the mobile terminal collects motion sensing data of the mobile terminal by using a built-in sensor, where the motion sensing data includes acceleration sensing data or gyro sensing data.
  • the acceleration sensing data includes data acquired using an acceleration sensor.
  • the gyro sensing data includes data acquired using a gyro sensor.
  • the built-in sensor may include an acceleration sensor or a gyroscope, and may further include a distance sensor, a direction sensor, and the like, and may acquire corresponding motion sensing data of the mobile terminal.
  • the collected motion sensing data of the mobile terminal may include multiple sets of motion sensing data collected in at least one time window; that is, the mobile terminal may collect the motion sensing data of the mobile terminal in units of time windows.
  • the time window may be a preset length of time. For example, the following acceleration data is acquired in a time window with a width of 0.5 seconds:
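The windowed collection described above can be sketched as follows. The 0.5 s width matches the example in the text; the `(timestamp, reading)` sample layout is an illustrative assumption, not part of the patent.

```python
# Sketch: group timestamped sensor samples into fixed-width time windows.
# Each sample is a (timestamp_seconds, reading) pair (assumed layout).
def window_samples(samples, width=0.5):
    """Return a list of windows, each a list of readings within `width` seconds."""
    if not samples:
        return []
    windows, current, start = [], [], samples[0][0]
    for t, reading in samples:
        if t - start >= width:        # current window is full: start a new one
            windows.append(current)
            current, start = [], t
        current.append(reading)
    windows.append(current)           # flush the last (possibly partial) window
    return windows
```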
  • the mobile terminal performs filtering and denoising processing on the motion sensing data.
  • the mobile terminal may first process the original sensor data by performing low-pass filtering. Filtering and denoising can improve the accuracy of the subsequent motion-similarity computation by the motion recognition device and also reduce network traffic.
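A minimal sketch of the terminal-side low-pass filtering mentioned above. The single-pole IIR form and the `alpha` smoothing factor are illustrative assumptions; the patent does not specify the filter design.

```python
# Sketch: single-pole IIR low-pass filter over a stream of sensor samples.
# alpha (0 < alpha <= 1) is an assumed smoothing factor: smaller = smoother.
def low_pass(samples, alpha=0.2):
    filtered = []
    prev = samples[0]
    for s in samples:
        prev = prev + alpha * (s - prev)   # move partway toward the new sample
        filtered.append(prev)
    return filtered
```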
  • the mobile terminal can also send the original motion sensing data acquired by the sensor to the motion recognition device.
  • the mobile terminal sends the motion sensing data subjected to the filtering and denoising processing to the motion recognition device.
  • the mobile terminal may send the motion sensing data by establishing a wireless data transmission channel with the motion recognition device, so that the motion recognition device may receive the motion sent by the mobile terminal by using the wireless data transmission channel. Sensing data.
  • the mobile terminal can run a process that establishes a socket connection with the motion recognition device and uses the TCP (Transmission Control Protocol) for data communication.
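The socket connection described above might look like the following sketch. The length-prefixed JSON framing, the host/port, and the field names (`terminal_id`, `samples`) are assumptions made for illustration; the patent only specifies a TCP socket connection.

```python
import json
import socket

# Sketch: terminal-side process that streams one window of motion sensing
# data to the motion recognition device over TCP (assumed message layout).
def send_window(host, port, terminal_id, window):
    payload = json.dumps({"terminal_id": terminal_id,
                          "samples": window}).encode()
    with socket.create_connection((host, port)) as sock:
        sock.sendall(len(payload).to_bytes(4, "big"))  # 4-byte length prefix
        sock.sendall(payload)
```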
  • the method for establishing the wireless data transmission channel may be: the mobile terminal acquires the network information of the motion recognition device by scanning a two-dimensional code displayed by the motion recognition device on its terminal screen, and sends the network information of the mobile terminal to the motion recognition device according to the acquired network information, thereby establishing a wireless data transmission channel between the two parties.
  • the mobile terminal or the motion recognition device may also acquire the network information of the other party by broadcast search, or via an intermediate network server, and then send its own network information to the other party according to the acquired network information, thereby establishing a wireless data transmission channel between the two parties.
  • the motion sensing data sent by the mobile terminal to the motion recognition device may carry the terminal identifier of the mobile terminal, which is used to distinguish it from motion sensing data sent by other mobile terminals; as shown in FIG. 1A, if more than one mobile terminal is connected to the motion recognition device, the motion recognition device can perform processing according to the terminal identifier carried in the motion sensing data.
  • the motion recognition device acquires user motion feature data according to the motion sensing data of the mobile terminal.
  • the motion recognition device acquires an action feature vector of the mobile terminal in each time window according to the collected sets of motion sensing data of the mobile terminal in at least one time window, thereby obtaining an action feature vector set.
  • the action feature vector may include a plurality of features used to characterize the action performed by the mobile terminal within the time window, and may include, for example, the mean and standard deviation of each sensor data component, or the correlation coefficient between sensor data components.
  • For the acceleration data collected in the previous example, the mean value of the acceleration component on the X axis can be calculated as:
  • the standard deviation can be calculated as:
  • the correlation between different sensor data components can be calculated as:
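The text elides the three formulas above; as a sketch, the standard definitions of the per-window mean, (population) standard deviation, and Pearson correlation coefficient between two sensor components are:

```python
import math

# Sketch of the per-window features named above, using standard definitions
# (the patent text does not reproduce the formulas).
def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def corr(xs, ys):
    """Pearson correlation between two equal-length sensor data components."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (std(xs) * std(ys))
```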
  • from the motion sensing data of the mobile terminal collected in a single time window, an n-dimensional feature vector (λ1, λ2, λ3, ..., λn) can be obtained according to the different motion features, where n is the number of types of motion features used; that is, an action feature vector can be obtained for each time window.
  • the action feature vectors obtained from the motion sensing data of multiple time windows can form a feature vector set, which can be understood as an m × n matrix of feature values, where m is the number of sampled time windows and n is the number of feature values.
  • the motion recognition apparatus may perform principal component analysis (PCA) on the obtained n-dimensional feature vectors (λ1, λ2, λ3, ..., λn) to reduce the dimension of the feature vector set to about 4 to 6; after the dimension reduction, the feature vector set can still retain more than 90% of the features.
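A hedged sketch of the PCA step above: center the m × n feature matrix and project it onto its top-k principal components via SVD (k would be about 4 to 6 per the text). The use of NumPy and SVD here is an implementation choice, not specified by the patent.

```python
import numpy as np

# Sketch: reduce an m x n feature matrix to m x k via PCA (SVD on the
# centered data). k ~ 4-6 per the surrounding text.
def pca_reduce(features, k):
    centered = features - features.mean(axis=0)   # zero-mean each feature
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T                    # project onto top-k components
```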
  • the motion recognition device acquires motion trajectory data of the mobile terminal in each time window according to the collected at least one set of motion sensing data of the mobile terminal in at least one time window.
  • the motion recognition device calculates the change in the relative position of the mobile terminal within the time window according to the collected at least one set of motion sensing data of the mobile terminal in at least one time window, thereby obtaining the motion track data of the mobile terminal in each time window.
  • For example, for the action performed by the mobile terminal shown in FIG. 3, the motion recognition device may calculate, according to the collected at least one set of motion sensing data of the mobile terminal in at least one time window, that the motion track of the mobile terminal is a reciprocating arc track. For the action shown in FIG. 4, it may likewise calculate that the motion track of the mobile terminal is a circular track, and for the action shown in FIG. 5, that the motion track is a Z-shaped track.
  • the motion recognition apparatus may use motion vector sets composed of at least one motion vector to represent motion trajectory data of the mobile terminal in each time window, where each motion vector may represent the mobile terminal in each time window. The relative direction of the motion trajectory at different acquisition time points within.
  • the duration of the time window may be a preset value, for example 0.5 to 1 second, or may be notified to the mobile terminal by the motion recognition apparatus, with the mobile terminal providing the motion sensing data of the mobile terminal within the time windows according to the requirements of the motion recognition apparatus.
  • the number of time windows may be agreed in advance between the motion recognition device and the mobile terminal, for example 3 to 5, or the motion recognition device may determine the number of time windows corresponding to the action currently required of the user. For example, if the motion recognition device currently prompts the user to perform a relatively simple wave motion (as shown in FIG. 3), it can perform subsequent motion recognition based on the motion sensing data of the mobile terminal within the currently acquired 2 to 3 time windows; if the motion recognition device currently prompts the user to perform a more complicated combined action (for example, first performing the circle motion shown in FIG. 4 and then the Z-shaped swing shown in FIG. 5), and the number of time windows required for the standard action corresponding to the combined action is 8 to 10, the motion recognition device can perform subsequent motion recognition based on the motion sensing data of the mobile terminal within the currently acquired 8 to 10 time windows.
  • the motion recognition device compares the currently acquired user action feature data with the preset at least one known action feature data, so as to determine, from the known actions corresponding to the at least one known action feature data, a known action as the action currently performed by the mobile terminal.
  • the motion recognition apparatus may calculate the similarity between the currently acquired user action feature data and the preset at least one known action feature data by using a distance metric or a similarity metric, and then determine the known action corresponding to the known action feature data with the highest similarity as the action currently performed by the mobile terminal.
  • where the preset at least one known action feature data includes the action feature vector set of at least one known action:
  • the motion recognition device compares the currently acquired action feature vector set with the preset at least one known action feature vector set, so that the known action corresponding to the known action feature vector set with the highest similarity to the currently acquired action feature vector set is determined as the action currently performed by the mobile terminal.
  • the distance between two feature vectors can be calculated by algorithms such as the Euclidean distance algorithm or the Minkowski distance algorithm; calculating the distance between the feature vector of the action to be decided and that of the standard action in each dimension, and accumulating the results, yields the similarity between the two.
  • the similarity between the two eigenvectors can be obtained by the cosine similarity algorithm.
  • the cosine similarity algorithm between the two feature vectors can be as follows:
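The document elides the formulas for these measures; a sketch using their standard definitions (Euclidean distance, order-p Minkowski distance, and cosine similarity between two feature vectors) is:

```python
import math

# Sketch of the three vector-comparison measures named above, using their
# standard textbook definitions.
def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def minkowski(a, b, p):
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)
```

Note that Euclidean distance is the p = 2 special case of the Minkowski distance.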
  • where the preset at least one known action feature data includes the motion track data of at least one known action, the motion recognition device compares the currently acquired motion track data with the motion track data of the preset at least one known action, so that the known action corresponding to the motion track data with the highest similarity to the currently acquired motion track data is determined as the action currently performed by the mobile terminal.
  • the similarity between motion track data may specifically be determined according to the graphic or shape similarity of the two motion tracks.
  • where a motion vector set composed of at least one motion vector represents the motion track data of the mobile terminal in each time window, the distance or similarity between the currently acquired motion vector set of the mobile terminal and the motion vector set corresponding to the motion track data of the preset at least one known action may be used as the similarity between the currently acquired user action feature data of the mobile terminal and the known action feature data, calculated for example by the Euclidean distance algorithm, the Minkowski distance algorithm, or the cosine similarity algorithm.
  • the motion recognition apparatus may further train an action feature classifier from the preset at least one known action feature vector set and a plurality of training action feature vector sets, and then input the currently acquired action feature vector set into the action feature classifier to determine the known action corresponding to the known action feature vector set with the highest similarity to the currently acquired action feature vector set as the action currently performed by the mobile terminal.
  • the action feature classifier can be, for example, a support vector machine (SVM) classifier or a neural network classifier, and a classifier achieving the required effect can be obtained by training it with a certain number of training action feature vector sets corresponding to each known action.
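The text names SVM or neural-network classifiers. As a minimal, dependency-free stand-in (explicitly a much simpler substitute model, not the patent's classifier), the train-then-classify flow can be sketched with a nearest-centroid classifier:

```python
# Sketch: nearest-centroid classifier as a stand-in for the SVM / neural
# classifier mentioned in the text. Trained on per-action feature vectors.
def train_centroids(training_sets):
    """training_sets: {action_name: [feature_vector, ...]} -> centroid per action."""
    centroids = {}
    for action, vectors in training_sets.items():
        dim = len(vectors[0])
        centroids[action] = [sum(v[i] for v in vectors) / len(vectors)
                             for i in range(dim)]
    return centroids

def classify(centroids, vector):
    """Return the known action whose centroid is closest to `vector`."""
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, vector))
    return min(centroids, key=lambda action: sq_dist(centroids[action]))
```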
  • the similarity between the user action feature data of the current action and the known action feature data of the corresponding known action may then be obtained by the distance metric or similarity metric calculations described above, without comparison against the known action feature data of the other known actions, which greatly reduces the computation load of the motion recognition device.
  • the motion recognition device outputs an action identifier of the known action corresponding to the action and a similarity between the action and the corresponding known action.
  • the action recognition device may give feedback on the action input by the user through the mobile terminal according to the known action, for example game action feedback, an action score, or an action record according to the current game progress, where the feedback may be given according to the action identifier of the known action corresponding to the action and the similarity between the action and the corresponding known action.
  • For example, in a so-called dance game, the motion recognition device plays a video of a standard dance action to the user through the terminal, receives the user action feature data of the action the user performs with the mobile terminal, and identifies the corresponding known action. If the recognized known action differs from the known action corresponding to the currently played standard action, the user may be given feedback that the action was not performed correctly; if they are the same, the action currently performed by the user may be evaluated or scored according to the similarity between the action and the corresponding known action, for example scoring by the similarity value (a similarity of 90% scores 90 points, a similarity of 60% scores 60 points), or assigning an evaluation level when the similarity reaches a corresponding threshold (for example, a similarity of 90% is excellent, a similarity of 80% is good, and so on).
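The similarity-to-score mapping in the example above can be sketched as follows. The label for similarities below 80% is an assumption, since the text only gives "excellent" and "good":

```python
# Sketch of the scoring/grading rules from the dance-game example:
# score = similarity percentage; grade thresholds at 90% and 80%.
def score(similarity):
    return round(similarity * 100)

def grade(similarity):
    if similarity >= 0.9:
        return "excellent"
    if similarity >= 0.8:
        return "good"
    return "try again"   # label below 80% is an assumption, not from the text
```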
  • the action recognition apparatus may further output an action identifier of a known action corresponding to the action currently performed by the mobile terminal in association with a terminal identifier of the mobile terminal.
  • the motion recognition device in this embodiment collects the motion sensing data of the mobile terminal and compares the user action feature data extracted from the motion sensing data with known action feature data, so that the known action corresponding to the similar known action feature data is determined as the action currently performed by the mobile terminal; in this way, the mobile terminal, in cooperation with the motion recognition device, can recognize the various actions performed with the mobile terminal.
  • FIG. 6 is a schematic structural diagram of a motion recognition system according to another embodiment of the present invention.
  • the motion recognition apparatus in this embodiment of the present invention runs on one of the mobile terminals (mobile terminal 3).
  • the mobile terminal 3 itself can simultaneously act as the mobile terminal through which the user inputs actions and as the motion recognition device that recognizes the action performed by the user through the mobile terminal; it can also acquire the motion sensing data of other mobile terminals (for example, mobile terminal 4 shown in FIG. 6) and recognize the actions performed by the user through those terminals. The logic by which the motion recognition device recognizes actions performed by other mobile terminals is the same as the implementation logic shown in FIG. 1A, FIG. 1B, and FIG. 2 above.
  • the specific process by which the motion recognition device recognizes the action performed by the mobile terminal on which it runs is described below, including the process shown in FIG. 7:
  • the motion recognition device acquires motion sensing data of the mobile terminal, where the motion sensing data includes acceleration sensing data or gyro sensing data.
  • the motion recognition device running on the mobile terminal acquires the motion sensing data of that mobile terminal, which is collected by a motion sensor built into the mobile terminal; the built-in sensor may include an acceleration sensor or a gyroscope, and may further include a distance sensor, a direction sensor, and the like, to acquire corresponding motion sensing data of the mobile terminal.
  • the motion recognition device may acquire motion sensor data collected by the motion sensor by calling a sensor hardware related API (Application Programming Interface) interface of the mobile terminal.
  • the collected motion sensing data of the mobile terminal may include multiple sets of motion sensing data collected in at least one time window; that is, the mobile terminal may collect the motion sensing data of the mobile terminal in units of time windows.
  • the motion recognition device can process the original sensor data, for example performing filtering and denoising by low-pass filtering, which can improve the accuracy of the subsequent motion-similarity computation by the motion recognition device.
  • the motion sensing data may carry the terminal identifier of the mobile terminal when the motion recognition device acquires the motion sensing data of the mobile terminal on which it is located, so as to distinguish it from the motion sensing data sent by other mobile terminals. As shown in FIG. 6, other mobile terminals are connected to the mobile terminal on which the motion recognition device is located, and the motion recognition device can separately process the motion sensing data of its own mobile terminal and the motion sensing data of the other mobile terminals according to the terminal identifier carried by the motion sensing data.
  • the motion recognition apparatus acquires user motion feature data according to the motion sensing data of the mobile terminal.
  • the motion recognition device compares the currently acquired user motion feature data with the preset at least one known motion feature data.
  • the motion recognition device determines, from the known actions corresponding to the at least one known action feature data, a known action as the action currently performed by the mobile terminal.
  • Steps S702 to S704 in this embodiment are the same as S204 and S205 in the foregoing embodiment; that is, the manner in which the motion recognition apparatus in this embodiment identifies the action currently performed by the mobile terminal according to the motion sensing data of the mobile terminal is the same as in the foregoing embodiments, and is not described in detail here.
  • the motion recognition device outputs an action identifier of the known action corresponding to the action and a similarity between the action and the corresponding known action.
  • the action recognition device may output, through the mobile terminal, the action identifier of the known action corresponding to the action and the similarity between the action and the corresponding known action, and may also send them through the mobile terminal to other terminals, such as a digital television or a laptop, to give feedback on the action currently made by the user.
  • the motion recognition device in this embodiment collects the motion sensing data of the mobile terminal and compares the user action feature data extracted from the motion sensing data with known action feature data, so that the known action corresponding to the similar known action feature data is determined as the action currently performed by the mobile terminal. In this way, the mobile terminal, in cooperation with the motion recognition device, implements various motion recognition feedback and game processes, which greatly lowers the barrier to experiencing motion recognition applications, so that more users can feel their convenience.
  • FIG. 8 is a schematic structural diagram of a motion recognition apparatus according to an embodiment of the present invention.
  • the motion recognition apparatus in the embodiment of the present invention may include at least a sensor data acquisition module 810, configured to acquire motion sensing data of the mobile terminal, where the motion sensing data includes acceleration sensing data or gyroscope sensing data.
  • the motion sensing data of the mobile terminal may be obtained by using a built-in sensor of the mobile terminal; the built-in sensor may include an acceleration sensor or a gyroscope, and may further include a distance sensor, a direction sensor, and the like, from which the corresponding motion sensing data of the mobile terminal can be acquired.
  • the collected motion sensing data of the mobile terminal may include multiple sets of motion sensing data collected in at least one time window; that is, the mobile terminal may collect its motion sensing data in units of time windows. The motion sensing data may carry the terminal identifier of the mobile terminal to distinguish it from the motion sensing data sent by other mobile terminals, so that when more than one mobile terminal is connected to the motion recognition device as shown in FIG. 1A, the motion recognition device can perform processing according to the terminal identifier carried in the motion sensing data.
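As a sketch of the terminal-identifier mechanism described above, the snippet below tags each time window of sensing data with the sending terminal's identifier so the recognition device can tell terminals apart; the JSON encoding and field names (`terminal_id`, `window`, `samples`) are illustrative assumptions, not taken from the patent.

```python
import json

def package_sensing_data(terminal_id, window_index, samples):
    """Bundle one time window of motion sensing data together with the
    terminal identifier that distinguishes this mobile terminal."""
    return json.dumps({
        "terminal_id": terminal_id,
        "window": window_index,
        # each sample: [ax, ay, az] acceleration components
        "samples": samples,
    })

def route_by_terminal(payloads):
    """Group incoming payloads by the terminal identifier they carry,
    as the recognition device would when several terminals connect."""
    grouped = {}
    for raw in payloads:
        msg = json.loads(raw)
        grouped.setdefault(msg["terminal_id"], []).append(msg)
    return grouped
```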
  • the motion recognition apparatus may be implemented in the mobile terminal; that is, in the scenario architecture shown in FIG. 6, the motion recognition apparatus may invoke a sensor-hardware-related API (Application Programming Interface) of the mobile terminal to acquire the motion sensing data collected by the motion sensor of the mobile terminal on which it resides.
  • alternatively, the action recognition device and the mobile terminal may be separate from each other; for example, in the scenario architecture shown in FIG. 1A, a wireless data transmission channel may be established between the mobile terminal and the motion recognition device for transmitting the motion sensing data of the mobile terminal. The channel may be, for example, Wi-Fi, Bluetooth, or a mobile communication network (e.g., 2G/3G/4G/5G).
  • the sensing data acquisition module 810 as shown in FIG. 9 may further include:
  • a transmission channel establishing unit 811, configured to establish a wireless data transmission channel with the mobile terminal. By establishing this channel, the motion recognition device can receive the motion sensing data sent by the mobile terminal over it.
  • the mobile terminal can run a process to establish a socket connection with the motion recognition device and use the TCP protocol for data communication.
  • the method for establishing the wireless data transmission channel may be: the mobile terminal acquires the network information of the motion recognition device by scanning a two-dimensional code displayed by the motion recognition device on its screen, and sends the network information of the mobile terminal to the motion recognition device according to the acquired network information, thereby establishing the wireless data transmission channel between the two parties.
  • alternatively, the mobile terminal or the motion recognition device may acquire the other party's network information by network information broadcast search, or through an intermediate network server, and then send its own network information to the other party according to the acquired information, thereby establishing the wireless data transmission channel between the two parties.
  • the sensing data receiving unit 812 is configured to receive the motion sensing data sent by the mobile terminal by using the wireless data transmission channel.
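The socket/TCP channel between the mobile terminal and the recognition device described above can be sketched as follows; the loopback address, ephemeral port, and plain-text message format are assumptions for illustration, standing in for the terminal-side process and the device-side receiving unit.

```python
import socket
import threading

def run_recognizer(server_sock, received):
    """Recognition-device side: accept one connection and collect the
    motion sensing data the terminal pushes over the TCP channel."""
    conn, _ = server_sock.accept()
    with conn:
        while True:
            chunk = conn.recv(1024)
            if not chunk:
                break
            received.append(chunk)

# recognition device listens on an ephemeral local port
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
received = []
t = threading.Thread(target=run_recognizer, args=(server, received))
t.start()

# mobile-terminal side: connect and send one window of sensing data
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"ax=0.1,ay=9.8,az=0.0\n")
client.close()
t.join()
server.close()
```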
  • the action feature obtaining module 820 is configured to acquire user action feature data according to the motion sensing data of the mobile terminal.
  • the action feature acquiring module 820 acquires the action feature vector of the mobile terminal in each time window according to the collected sets of motion sensing data of the mobile terminal in at least one time window, thereby obtaining a set of action feature vectors. The action feature vector may include a plurality of features characterizing the motion performed by the mobile terminal within the time window, for example, the mean and standard deviation of each sensor data component, or the correlation coefficients between sensor data components.
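The per-window features named above (mean, standard deviation, pairwise correlation coefficients) can be computed as in this sketch; the exact feature set and ordering are assumptions, since the patent only lists them as examples.

```python
from statistics import mean, pstdev

def correlation(x, y):
    """Pearson correlation coefficient; 0.0 when either series is constant."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0
    return cov / (sx * sy)

def window_feature_vector(ax, ay, az):
    """Feature vector for one time window of acceleration components:
    per-axis mean and standard deviation plus pairwise correlations."""
    return [
        mean(ax), mean(ay), mean(az),
        pstdev(ax), pstdev(ay), pstdev(az),
        correlation(ax, ay), correlation(ay, az), correlation(ax, az),
    ]
```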
  • the action feature acquiring module 820 acquires motion trajectory data of the mobile terminal in each time window according to the collected plurality of sets of motion sensing data of the mobile terminal in at least one time window.
  • specifically, the action feature acquiring module 820 calculates the change in the relative position of the mobile terminal within each time window according to the collected sets of motion sensing data of the mobile terminal in at least one time window, thereby obtaining the motion track data of the mobile terminal in each time window. For example, for the action shown in FIG. 3, the action feature acquiring module 820 may calculate that the motion track of the mobile terminal is a round-trip arc; for the action shown in FIG. 4, a circular trajectory; and for the action shown in FIG. 5, a zigzag trajectory.
  • the action feature acquiring module 820 may use a motion vector set composed of at least one motion vector to represent the motion track data of the mobile terminal in each time window, where each motion vector may represent the relative direction of the motion trajectory of the mobile terminal at different acquisition time points within the time window.
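One plausible reading of the motion vector set described above is the normalised displacement direction between consecutive sampled positions within the window, as in this sketch (2-D positions and unit-length normalisation are assumptions for illustration).

```python
def motion_vector_set(track):
    """Represent a window's motion track as a list of motion vectors:
    the displacement direction between consecutive sampled (x, y)
    positions, normalised so only the relative direction remains."""
    vectors = []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        dx, dy = x1 - x0, y1 - y0
        norm = (dx * dx + dy * dy) ** 0.5 or 1.0
        vectors.append((dx / norm, dy / norm))
    return vectors
```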
  • the duration of the time window may be a preset value, for example 0.5 to 1 second, or may be notified to the mobile terminal by the motion recognition apparatus, in which case the mobile terminal provides motion sensing data in time windows of the duration the motion recognition apparatus requires.
  • the number of time windows on which the action feature acquiring module 820 bases motion recognition may likewise be a value agreed in advance between the action recognition device and the mobile terminal, for example 3-5, or may be determined by the motion recognition device according to the number of time windows corresponding to the action the user is currently required to perform. For example, if the action recognition device currently prompts the user to perform a relatively simple wave action (as shown in FIG. 3), the action feature acquiring module 820 can perform subsequent motion recognition based on the motion sensing data of the mobile terminal within the currently acquired 2-3 time windows; if the motion recognition device currently prompts the user to perform a relatively complex combined action, for example the circle motion shown in FIG. 4 followed by the Z-type swing shown in FIG. 5, and the standard action corresponding to this combined action requires 8-10 time windows, the action feature acquiring module 820 performs subsequent recognition based on the motion sensing data of the mobile terminal within the currently acquired 8-10 time windows.
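The windowing scheme above can be sketched as a simple segmentation of the sample stream; the sampling rate, window duration (0.5-1 s), and window counts (3-5 or 8-10) follow the ranges mentioned in the text, while the function itself is an illustrative assumption.

```python
def split_into_windows(samples, rate_hz, window_s=0.5, n_windows=3):
    """Split a stream of sensor samples into the fixed-duration time
    windows the recognizer asked for; incomplete trailing windows are
    dropped so every returned window has the same sample count."""
    per_window = int(rate_hz * window_s)
    windows = [samples[i * per_window:(i + 1) * per_window]
               for i in range(n_windows)]
    return [w for w in windows if len(w) == per_window]
```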
  • the action recognition module 830 is configured to compare the currently acquired user action feature data with the preset at least one known action feature data, so as to determine, from among the known actions corresponding to the at least one known action feature data, the known action that is the action currently performed by the mobile terminal.
  • the action recognition module 830 may calculate the similarity between the currently acquired user action feature data and the preset at least one known action feature data by using a distance metric or a similarity measure, and then take the known action corresponding to the known action feature data with the highest similarity as the action currently performed by the mobile terminal.
  • in one implementation, the preset at least one known action feature data includes at least one known action feature vector set. The action recognition module 830 compares the currently acquired action feature vector set with the preset at least one known action feature vector set, and determines the known action corresponding to the known action feature vector set with the highest similarity to the currently acquired set as the action currently performed by the mobile terminal.
  • the distance between two feature vectors can be calculated by a Euclidean distance algorithm, a Minkowski distance algorithm, or the like, where a smaller distance indicates a higher similarity.
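The nearest-known-action rule based on these distances can be sketched as follows; the Minkowski distance reduces to the Euclidean distance for p=2, and the dictionary of known actions is an illustrative assumption.

```python
def minkowski_distance(u, v, p=2):
    """Minkowski distance between two feature vectors; p=2 gives the
    Euclidean distance, p=1 the Manhattan distance."""
    return sum(abs(a - b) ** p for a, b in zip(u, v)) ** (1.0 / p)

def nearest_known_action(query, known):
    """known: mapping from action name to feature vector. Returns the
    known action whose feature vector is closest to the query vector,
    i.e. the one with the highest similarity."""
    return min(known, key=lambda name: minkowski_distance(query, known[name]))
```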
  • in another implementation, the preset at least one known action feature data includes motion track data of at least one known action. The motion recognition module 830 compares the currently acquired motion trajectory data with the preset motion trajectory data of the at least one known motion, and determines the known action corresponding to the motion trajectory data with the highest similarity to the currently acquired motion trajectory data as the action currently performed by the mobile terminal.
  • the similarity between two pieces of motion trajectory data may specifically be determined according to the graphic or shape similarity of the two motion trajectories. If a motion vector set composed of at least one motion vector is used to represent the motion track data of the mobile terminal in each time window, the motion recognition module 830 may take the distance or similarity between the currently acquired motion vector set of the mobile terminal and the motion vector set corresponding to the motion trajectory data of each preset known action as the similarity between the currently acquired user motion feature data and the known motion feature data; the above-mentioned Euclidean distance algorithm, Minkowski distance algorithm, or a cosine similarity algorithm may be used for this calculation.
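As one plausible realisation of the cosine-similarity comparison between motion vector sets mentioned above, the sketch below flattens each set of direction vectors and compares the results; the flattening step is an assumption, since the patent does not fix how the sets are aligned.

```python
def cosine_similarity(u, v):
    """Cosine similarity between two flat vectors; 1.0 means identical
    direction, -1.0 opposite direction, 0.0 when a vector is all-zero."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    if nu == 0 or nv == 0:
        return 0.0
    return dot / (nu * nv)

def trajectory_similarity(set_a, set_b):
    """Compare two motion vector sets (lists of (dx, dy) direction
    vectors) by flattening them and taking the cosine similarity."""
    flat_a = [c for vec in set_a for c in vec]
    flat_b = [c for vec in set_b for c in vec]
    return cosine_similarity(flat_a, flat_b)
```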
  • the action recognition module 830 may further include:
  • the classifier training unit 831 is configured to train the action feature classifier according to the preset at least one known action feature vector set and the plurality of training action feature vector sets.
  • the action feature classifier may be, for example, a Support Vector Machine (SVM) classifier or a neural network classifier; by inputting a certain number of training action feature vector sets corresponding to each known action, the classifier training unit 831 can train an action feature classifier whose classification performance meets the requirements.
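The patent names an SVM or neural network as the classifier; as a dependency-free stand-in for illustration, the sketch below uses a nearest-centroid rule instead, with `train()` playing the role of the classifier training unit 831 and `predict()` that of the recognition unit 832. This is explicitly not the SVM the text describes, only a minimal classifier with the same train/predict interface.

```python
from statistics import mean

class CentroidActionClassifier:
    """Minimal stand-in for the action feature classifier: stores one
    centroid per known action and classifies a query feature vector by
    the nearest centroid (squared Euclidean distance)."""

    def train(self, labelled_vectors):
        # labelled_vectors: list of (action_name, feature_vector) pairs
        by_action = {}
        for name, vec in labelled_vectors:
            by_action.setdefault(name, []).append(vec)
        self.centroids = {
            name: [mean(col) for col in zip(*vecs)]
            for name, vecs in by_action.items()
        }

    def predict(self, vec):
        def dist(name):
            return sum((a - b) ** 2 for a, b in zip(vec, self.centroids[name]))
        return min(self.centroids, key=dist)
```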
  • the action recognition unit 832 is configured to input the currently acquired action feature vector set into the action feature classifier, so that the known action corresponding to the known action feature vector set with the highest similarity to the currently acquired action feature vector set is determined as the action currently performed by the mobile terminal.
  • after the classifier identifies the known action, the action recognition module 830 only needs to compute, using the foregoing distance metric or similarity measurement methods, the similarity between the user action feature data of the current action and the known action feature data of that known action, and no longer needs to compare against the known action feature data of other known actions, which greatly reduces the computation load of the action recognition module 830.
  • the action recognition apparatus may further include:
  • the motion recognition output module 840 is configured to output an action identifier of the known action corresponding to the action and a similarity between the action and the corresponding known action.
  • the action recognition output module 840 can provide action feedback on the action input by the user through the mobile terminal according to the identified known action, for example, game action feedback, action scoring, or action recording according to the current game scene, wherein the action identifier of the known action corresponding to the action and the similarity between the action and the corresponding known action are fed back.
  • take a dance game as an example: the motion recognition device plays a video of a standard dance action to the user through a terminal and receives the user action feature data of the action the user performs with the mobile terminal, then identifies the corresponding known action. If the recognized known action differs from the known action corresponding to the currently played standard action, the action recognition output module 840 can prompt the user that the action was not performed correctly; if the recognized known action matches the known action corresponding to the currently played standard action, the action currently performed by the user may be evaluated or scored according to the similarity between the action and the corresponding known action. For example, the score may follow the similarity value (a similarity of 90% scores 90 points, a similarity of 60% scores 60 points), or an evaluation level may be given when the similarity reaches a corresponding threshold (for example, "excellent" for a similarity of 90% or above and "good" for 80% or above).
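The similarity-to-score mapping just described can be sketched as follows; the thresholds follow the examples in the text (90% and 80%), while the label for lower similarities is an assumption.

```python
def action_feedback(similarity):
    """Map the similarity between the user's action and the standard
    action to a score (similarity as a percentage) and an evaluation
    level based on thresholds."""
    score = round(similarity * 100)
    if score >= 90:
        level = "excellent"
    elif score >= 80:
        level = "good"
    else:
        level = "keep trying"  # assumed label for sub-threshold scores
    return score, level
```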
  • the action recognition output module 840 may further output, in association with the terminal identifier of the corresponding mobile terminal, the action identifier of the known action corresponding to the action currently performed by that mobile terminal, so as to distinguish the actions identified from the motion sensing data sent by different mobile terminals. Thus, when more than one mobile terminal is connected to the motion recognition device as shown in FIG. 1A, the motion recognition device can separately process the actions performed by different mobile terminals according to the terminal identifier carried in the motion sensing data.
  • the motion recognition device in this embodiment collects the motion sensing data of the mobile terminal, compares the user motion feature data extracted from that data with the known motion feature data, and determines the known action corresponding to the most similar known motion feature data as the action currently performed by the mobile terminal. The mobile terminal thus cooperates with the motion recognition device to implement various motion recognition feedback and game processes, which greatly lowers the entry threshold of motion recognition applications, so that more users can experience their convenience.
  • the above-mentioned motion recognition device can be implemented as an electronic device such as a PC, or as a portable electronic device such as a PAD, tablet computer, or laptop computer, and is not limited to the examples herein; the functions of the units may be combined in one electronic device, or each unit may be provided as a separate entity. The action recognition device includes at least a database for storing data and a processor for data processing, and may include a built-in storage medium or a separately provided storage medium.
  • the processor for data processing may be implemented, when performing processing, by a microprocessor, a central processing unit (CPU), a digital signal processor (DSP), or a programmable logic device such as an FPGA (Field-Programmable Gate Array).
  • the storage medium stores operation instructions, which may be computer-executable code, by which the steps of the motion recognition flows of the embodiments of the present invention described above, such as the flow shown in FIG. 2, are implemented.
  • the apparatus includes a processor 1101, a storage medium 1102, and at least one external communication interface 1103; the processor 1101, the storage medium 1102, and the communication interface 1103 are all connected by a bus 1104.
  • the processor 1101 can invoke operation instructions in the storage medium 1102, such as a non-volatile storage medium, to perform the operations performed by the embodiments illustrated in FIGS. 1B, 2, 8, 9, and 10.
  • the processor 1101 in the action recognition apparatus may invoke operation instructions in the storage medium 1102 to execute the following process: acquiring motion sensing data of the mobile terminal, where the motion sensing data includes acceleration sensing data or gyroscope sensing data; acquiring user action feature data according to the motion sensing data of the mobile terminal; and comparing the currently acquired user action feature data with the preset at least one known action feature data, so as to determine a known action as the action currently performed by the mobile terminal.
  • the embodiment of the present invention further provides a mobile terminal, which is mainly implemented in the motion recognition system architecture shown in FIG. 1A or FIG. 6, for example, the mobile terminal 101 in FIG. 1A, or the mobile terminal 3 or the mobile terminal 4 in FIG. 6. As shown in FIG. 12, the mobile terminal in the embodiment of the present invention may include a motion sensor 1210, configured to collect motion sensing data of the mobile terminal, where the motion sensing data includes acceleration sensing data or gyroscope sensing data.
  • the motion sensor may include an acceleration sensor or a gyroscope, and may further include a distance sensor, a direction sensor, and the like, and may acquire corresponding motion sensing data of the mobile terminal.
  • the collected motion sensing data of the mobile terminal may include multiple sets of motion sensing data collected in at least one time window; that is, the mobile terminal may collect its motion sensing data in units of time windows.
  • the communication module 1220 is configured to send the motion sensing data to the motion recognition device, so that the motion recognition device acquires user motion feature data according to the motion sensing data of the mobile terminal and compares the currently acquired user motion feature data with the preset at least one known motion feature data, so as to determine, from among the known actions corresponding to the at least one known action feature data, a known action as the action currently performed by the mobile terminal.
  • the communication module 1220 can send the motion sensing data to the motion recognition device through a wireless data transmission channel established with the motion recognition device.
  • the mobile terminal can run a process to establish a socket connection with the motion recognition device and use the TCP (Transmission Control Protocol) for data communication.
  • the manner in which the communication module 1220 establishes the wireless data transmission channel may be: the mobile terminal acquires the network information of the motion recognition device by scanning a two-dimensional code displayed by the motion recognition device on its screen, and sends the network information of the mobile terminal to the motion recognition device according to the acquired network information, thereby establishing the wireless data transmission channel between the two parties.
  • alternatively, the mobile terminal or the motion recognition device may acquire the other party's network information by network information broadcast search, or through an intermediate network server, and then send its own network information to the other party according to the acquired information, thereby establishing the wireless data transmission channel between the two parties.
  • the motion sensing data sent by the communication module 1220 to the motion recognition device may carry the terminal identifier of the mobile terminal to distinguish it from the motion sensing data sent by other mobile terminals, so that when more than one mobile terminal is connected to the motion recognition device as shown in FIG. 1A, the motion recognition device can perform processing according to the terminal identifier carried in the motion sensing data.
  • the denoising module 1230 is configured to perform filtering and denoising processing on the motion sensing data.
  • the raw sensor data can be filtered and denoised by the denoising module 1230 using low-pass filtering, which can improve the accuracy of the subsequent motion similarity determination by the motion recognition device and also reduce network traffic. The communication module 1220 then sends the filtered and denoised motion sensing data to the motion recognition device.
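The low-pass denoising step above can be sketched with a single-pole (exponential moving average) filter, one common choice for accelerometer data; the patent only says low-pass filtering is used, so the filter order and the smoothing factor `alpha` are assumptions.

```python
def low_pass_filter(samples, alpha=0.2):
    """Single-pole low-pass filter: each output sample moves a fraction
    alpha toward the new raw sample, smoothing out high-frequency noise
    before the data is sent to the recognition device."""
    if not samples:
        return []
    out = [samples[0]]
    for x in samples[1:]:
        out.append(out[-1] + alpha * (x - out[-1]))
    return out
```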
  • the mobile terminal includes a processor 1301, a storage medium 1302, and at least one external communication interface 1303.
  • the processor 1301, the storage medium 1302, and the communication interface 1303 are all connected by a bus 1304.
  • the processor 1301 can invoke the storage medium 1302, such as operational instructions in a non-volatile storage medium for performing the operations performed by the embodiment illustrated in FIG. 12 above.
  • the processor 1301 in the mobile terminal may invoke an operation instruction in the storage medium 1302 to perform the following process:
  • the mobile terminal collects motion sensing data of the mobile terminal by using a built-in sensor, where the motion sensing data includes acceleration sensing data or gyroscope sensing data;
  • the mobile terminal performs filtering and denoising processing on the motion sensing data;
  • the mobile terminal sends the filtered and denoised motion sensing data to the motion recognition device.
  • the mobile terminal in the embodiment of the present invention transmits its motion sensing data to the motion recognition device through a wireless data transmission channel, so that the motion recognition device recognizes the action performed by the user with the mobile terminal. The mobile terminal thus cooperates with the motion recognition device to implement various motion recognition feedback and game processes, which greatly lowers the entry threshold of motion recognition applications, so that more users can experience their convenience.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative. The division of the units is only a logical functional division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the coupling, direct coupling, or communication connection between the components shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • the units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • the foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The foregoing storage medium includes any medium that can store program code, such as a mobile storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • alternatively, the above-described integrated unit of the present application may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a stand-alone product.
  • based on such an understanding, the technical solutions of the embodiments of the present invention may, in essence, be embodied in the form of a software product stored in a storage medium, including a plurality of instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the various embodiments of the present application.
  • the foregoing storage medium includes various media that can store program codes, such as a mobile storage device, a ROM, a RAM, a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)
  • Position Input By Displaying (AREA)

Abstract

In an embodiment of the present invention, a motion recognition method, device, system, and storage medium are described. The motion recognition method comprises: obtaining motion sensing data of a mobile terminal, the motion sensing data comprising acceleration sensing data or gyroscope sensing data; obtaining first motion feature data according to the motion sensing data of the mobile terminal; and comparing the first motion feature data with at least one piece of preset second motion feature data, such that a known motion is determined, from among the known motions corresponding to the at least one piece of second motion feature data, as the motion currently performed by the mobile terminal.
PCT/CN2018/078215 2017-03-09 2018-03-07 Procédé, dispositif, système de reconnaissance de mouvement et support d'informations WO2018161906A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710139182.4 2017-03-09
CN201710139182.4A CN107016347A (zh) 2017-03-09 2017-03-09 一种体感动作识别方法、装置以及***

Publications (1)

Publication Number Publication Date
WO2018161906A1 true WO2018161906A1 (fr) 2018-09-13

Family

ID=59440387

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/078215 WO2018161906A1 (fr) 2017-03-09 2018-03-07 Procédé, dispositif, système de reconnaissance de mouvement et support d'informations

Country Status (2)

Country Link
CN (1) CN107016347A (fr)
WO (1) WO2018161906A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109376705A (zh) * 2018-11-30 2019-02-22 努比亚技术有限公司 舞蹈训练评分方法、装置及计算机可读存储介质
CN110502118A (zh) * 2019-08-28 2019-11-26 武汉宇宙寓言影视发展有限公司 一种运动感应控制的体感设备的控制方法、***及装置

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107016347A (zh) * 2017-03-09 2017-08-04 腾讯科技(深圳)有限公司 一种体感动作识别方法、装置以及***
TWI670628B (zh) * 2017-11-15 2019-09-01 財團法人資訊工業策進會 動作評量模型生成裝置及其動作評量模型生成方法
CN108009620B (zh) * 2017-11-29 2022-01-21 顺丰科技有限公司 一种大礼拜计数方法、***和装置
CN108646931B (zh) * 2018-03-21 2022-10-14 深圳市创梦天地科技有限公司 一种终端控制方法及终端
CN108829237A (zh) * 2018-05-02 2018-11-16 北京小米移动软件有限公司 儿童手表控制方法、终端控制方法及装置
CN109011570A (zh) * 2018-06-14 2018-12-18 广州市点格网络科技有限公司 体感游戏互动方法与***
CN108989546B (zh) * 2018-06-15 2021-08-17 Oppo广东移动通信有限公司 电子装置的接近检测方法及相关产品
CN110009942B (zh) * 2019-04-11 2021-04-13 九思教育科技有限公司 一种函数体验装置
CN112235464B (zh) 2019-06-28 2022-05-31 华为技术有限公司 一种基于跌倒检测的呼救方法及电子设备
CN112316407A (zh) * 2019-08-04 2021-02-05 广州市品众电子科技有限公司 游戏控制方法及体感控制手柄
CN112316408B (zh) * 2019-08-04 2022-09-20 广州市品众电子科技有限公司 游戏控制方法及体感控制手柄

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463152A (zh) * 2015-01-09 2015-03-25 京东方科技集团股份有限公司 一种手势识别方法、***、终端设备及穿戴式设备
CN104679246A (zh) * 2015-02-11 2015-06-03 华南理工大学 一种交互界面中人手漫游控制的穿戴式设备及控制方法
CN104898828A (zh) * 2015-04-17 2015-09-09 杭州豚鼠科技有限公司 应用体感交互***的体感交互方法
CN105068657A (zh) * 2015-08-19 2015-11-18 北京百度网讯科技有限公司 手势的识别方法及装置
CN106094535A (zh) * 2016-05-31 2016-11-09 北京小米移动软件有限公司 设备控制方法及装置、电子设备
CN107016347A (zh) * 2017-03-09 2017-08-04 腾讯科技(深圳)有限公司 一种体感动作识别方法、装置以及***

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090017910A1 (en) * 2007-06-22 2009-01-15 Broadcom Corporation Position and motion tracking of an object
CN101788861B (zh) * 2009-01-22 2012-03-07 华硕电脑股份有限公司 三维动作识别方法与***
CN103886323B (zh) * 2013-09-24 2017-02-15 清华大学 基于移动终端的行为识别方法及移动终端
CN104516487A (zh) * 2013-09-28 2015-04-15 南京专创知识产权服务有限公司 基于体感技术的游戏模拟器
US9576192B2 (en) * 2014-03-12 2017-02-21 Yamaha Corporation Method and apparatus for notifying motion
CN104317389B (zh) * 2014-09-23 2017-12-26 广东小天才科技有限公司 一种通过动作识别人物角色的方法和装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463152A (zh) * 2015-01-09 2015-03-25 京东方科技集团股份有限公司 一种手势识别方法、***、终端设备及穿戴式设备
CN104679246A (zh) * 2015-02-11 2015-06-03 华南理工大学 一种交互界面中人手漫游控制的穿戴式设备及控制方法
CN104898828A (zh) * 2015-04-17 2015-09-09 杭州豚鼠科技有限公司 应用体感交互***的体感交互方法
CN105068657A (zh) * 2015-08-19 2015-11-18 北京百度网讯科技有限公司 手势的识别方法及装置
CN106094535A (zh) * 2016-05-31 2016-11-09 北京小米移动软件有限公司 设备控制方法及装置、电子设备
CN107016347A (zh) * 2017-03-09 2017-08-04 腾讯科技(深圳)有限公司 一种体感动作识别方法、装置以及***

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109376705A (zh) * 2018-11-30 2019-02-22 努比亚技术有限公司 舞蹈训练评分方法、装置及计算机可读存储介质
CN110502118A (zh) * 2019-08-28 2019-11-26 武汉宇宙寓言影视发展有限公司 一种运动感应控制的体感设备的控制方法、***及装置

Also Published As

Publication number Publication date
CN107016347A (zh) 2017-08-04

Similar Documents

Publication Publication Date Title
WO2018161906A1 (fr) Motion recognition method, device and system, and storage medium
US10169639B2 (en) Method for fingerprint template update and terminal device
JP6467965B2 (ja) Emotion estimation device and emotion estimation method
CN108525305B (zh) Image processing method and apparatus, storage medium, and electronic device
US11074466B2 (en) Anti-counterfeiting processing method and related products
CN108304758B (zh) Face feature point tracking method and apparatus
Liu et al. uWave: Accelerometer-based personalized gesture recognition and its applications
US20210133468A1 (en) Action Recognition Method, Electronic Device, and Storage Medium
EP2509070A1 (fr) Apparatus and method for determining the relevance of a voice input
AU2020309090A1 (en) Image processing methods and apparatuses, electronic devices, and storage media
CN111368811B (zh) Liveness detection method, apparatus, device, and storage medium
US11918883B2 (en) Electronic device for providing feedback for specific movement using machine learning model and operating method thereof
WO2021047069A1 (fr) Facial recognition method and electronic terminal device
CN111432245B (zh) Playback control method, apparatus, and device for multimedia information, and storage medium
JP2021531589A (ja) Action recognition method and apparatus for a target object, and electronic device
US20170140215A1 (en) Gesture recognition method and virtual reality display output device
US20220408164A1 (en) Method for editing image on basis of gesture recognition, and electronic device supporting same
US10671713B2 (en) Method for controlling unlocking and related products
EP3757878A1 (fr) Head pose estimation
WO2019024718A1 (fr) Anti-counterfeiting processing method, anti-counterfeiting processing apparatus, and electronic device
WO2023168957A1 (fr) Pose determination method and apparatus, electronic device, storage medium, and program
CN115100712A (zh) Expression recognition method and apparatus, electronic device, and storage medium
KR102293416B1 (ko) Communication device, server, and communication method thereof
US20180126561A1 (en) Generation device, control method, robot device, call system, and computer-readable recording medium
CN108804996B (zh) Face verification method and apparatus, computer device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18763962

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18763962

Country of ref document: EP

Kind code of ref document: A1