CN112489804A - Aiming prediction method and device - Google Patents

Aiming prediction method and device

Info

Publication number
CN112489804A
CN112489804A (application CN201910867235.3A)
Authority
CN
China
Prior art keywords
user
aiming
state data
prediction model
aiming result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910867235.3A
Other languages
Chinese (zh)
Inventor
张朕 (Zhang Zhen)
秦林婵 (Qin Linchan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing 7Invensun Technology Co Ltd
Original Assignee
Beijing 7Invensun Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing 7Invensun Technology Co Ltd filed Critical Beijing 7Invensun Technology Co Ltd
Priority to CN201910867235.3A
Publication of CN112489804A

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/50: ICT specially adapted for medical diagnosis, medical simulation or medical data mining for simulation or modelling of medical disorders
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63B: APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00: Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06: Indicating or scoring devices for games or players, or for other sports activities
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30: ICT specially adapted for medical diagnosis, medical simulation or medical data mining for calculating health indices; for individual health risk assessment
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63B: APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00: Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06: Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619: Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B2071/065: Visualisation of specific exercise parameters
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63B: APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B2230/00: Measuring physiological parameters of the user
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63B: APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B2230/00: Measuring physiological parameters of the user
    • A63B2230/04: Measuring physiological parameters of the user heartbeat characteristics, e.g. ECG, blood pressure modulations
    • A63B2230/06: Measuring physiological parameters of the user heartbeat characteristics, heartbeat rate only

Landscapes

  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Epidemiology (AREA)
  • Biomedical Technology (AREA)
  • Primary Health Care (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Rehabilitation Tools (AREA)

Abstract

Embodiments of the present application disclose an aiming prediction method and device that improve the efficiency of aiming-task training and save training resources. The method comprises the following steps: acquiring actual state data while a user aims at a target; and inputting the actual state data into an aiming result prediction model to predict the aiming result corresponding to the actual state data, wherein the aiming result prediction model is trained on state data collected while the user aims at the target and on the aiming results the user actually obtained.

Description

Aiming prediction method and device
Technical Field
The present application relates to the field of computers, and in particular, to a method and an apparatus for aiming prediction.
Background
In sports, repeated training is required to achieve good results. In sports that involve aiming, such as football, basketball, shooting, and archery, a user must repeatedly aim at a target and complete the entire movement process (for example, kicking the football into the goal or shooting an arrow at the target center) to obtain an aiming result, and then adjust the aiming action according to that result so as to improve it. Such training is inefficient: the athlete can hardly identify the real cause of a poor aiming result and train specifically against it, and training resources are often wasted. For example, when practicing football shots with only a few balls, shooting time is lost retrieving them; to avoid that delay as much as possible, more footballs must be provided for training, which wastes resources.
Disclosure of Invention
In order to solve the technical problems in the prior art, the application provides an aiming prediction method and device, which can improve the training efficiency of aiming tasks and save training resources.
The application provides a method for aiming prediction, which comprises the following steps:
acquiring actual state data when a user aims at a target;
inputting the actual state data into an aiming result prediction model, and predicting the aiming result corresponding to the actual state data; the aiming result prediction model is trained on state data collected while the user aims at the target and on the aiming results the user actually obtained.
Optionally, the actual state data includes one or more of the following:
static-eye fixation data reflecting the user's fixation ability in the static-eye state, mental state data reflecting the user's mental state, and action data reflecting the user's action level.
Optionally, the static-eye fixation data comprises one or more of:
the user's fixation duration in the static-eye state, and the amplitude, peak value, and frequency of the user's micro eye jumps (microsaccades).
Optionally, the mental state data includes one or more of the following:
concentration of the user, relaxation of the user, and heart rate variability of the user.
Optionally, the action data includes:
stability and/or consistency of the user's muscles.
Optionally, the aiming result prediction model includes one or more of the following:
a first aiming result prediction model, a second aiming result prediction model, and a third aiming result prediction model;
the actual state data corresponding to the first aiming result prediction model is static-eye fixation data reflecting the user's fixation ability in the static-eye state;
the actual state data corresponding to the second aiming result prediction model is mental state data reflecting the user's mental state;
and the actual state data corresponding to the third aiming result prediction model is action data reflecting the user's action level.
The present application provides an aiming prediction device, the device comprising:
an acquisition unit configured to acquire actual state data when a user aims at a target;
a prediction unit, configured to input the actual state data into an aiming result prediction model and predict the aiming result corresponding to the actual state data; the aiming result prediction model is trained on state data collected while the user aims at the target and on the aiming results the user actually obtained.
Optionally, the actual state data includes one or more of the following:
static-eye fixation data reflecting the user's fixation ability in the static-eye state, mental state data reflecting the user's mental state, and action data reflecting the user's action level.
Optionally, the static-eye fixation data comprises one or more of:
the user's fixation duration in the static-eye state, and the amplitude, peak value, and frequency of the user's micro eye jumps (microsaccades).
Optionally, the mental state data includes one or more of the following:
concentration of the user, relaxation of the user, and heart rate variability of the user.
Optionally, the action data includes:
stability and/or consistency of the user's muscles.
Optionally, the aiming result prediction model includes one or more of the following:
a first aiming result prediction model, a second aiming result prediction model, and a third aiming result prediction model;
the actual state data corresponding to the first aiming result prediction model is static-eye fixation data reflecting the user's fixation ability in the static-eye state;
the actual state data corresponding to the second aiming result prediction model is mental state data reflecting the user's mental state;
and the actual state data corresponding to the third aiming result prediction model is action data reflecting the user's action level.
According to the method and device of the present application, the aiming result prediction model is trained using the user's training state data and aiming results. Once training is complete, the corresponding aiming result can be predicted simply by inputting the user's actual state data while aiming at a target, thereby guiding the user to adjust the aiming action and state. In other words, the user can train aiming ability without completing the entire movement process, which improves training efficiency and saves training resources.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed for describing them are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an aiming prediction method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of three aiming result prediction models provided by an embodiment of the present application;
Fig. 3 is a block diagram of an aiming prediction apparatus according to an embodiment of the present disclosure.
Detailed Description
To make the technical solutions of the present application better understood, they are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments derived by a person of ordinary skill in the art from these embodiments without creative effort shall fall within the protection scope of the present application.
Referring to fig. 1, a flowchart of a method for aiming prediction according to an embodiment of the present application is shown.
The aiming prediction method provided in this embodiment may be executed by a terminal device, such as a desktop computer, a notebook computer, a mobile phone, or a tablet; the embodiment of the present application imposes no particular limitation on the device.
Specifically, the aiming prediction method may include the following steps:
s101: training state data when the user aims at the target and an aiming result actually obtained by the user are obtained.
In this embodiment of the application, the training state data of aiming at the target can be obtained while the user performs an aiming task during a sport. An aiming task is a task of aiming at a target in the course of a sport, such as shooting, archery, darts, billiards, football, or basketball. For shooting, the aiming target may be a target board; for football, the goal; for basketball, the rim.
The training state data collected while the user aims at the target may include one or more of the following: static-eye fixation data reflecting the user's fixation ability in the static-eye state, mental state data reflecting the user's mental state, and action data reflecting the user's action level. These data all affect, to some extent, the aiming result the user obtains when performing the aiming task. The aiming result reflects how well the aiming task was performed and may be an absolute value or a deviation from a calibrated value. For archery, for example, the aiming result can be the ring count of the shot, or its deviation from a calibrated ring count.
The static-eye fixation data comprises one or more of: the user's fixation duration in the static-eye state, and the amplitude, peak value, and frequency of the user's micro eye jumps (microsaccades). In this embodiment, a micro eye jump is the tiny jump an eyeball makes involuntarily while the user fixates on a gaze point. The static-eye fixation data may be collected with an eye tracker that records the user's eye movement data, in particular a wearable eye tracker.
The mental state data includes one or more of: concentration of the user, relaxation of the user, and Heart Rate Variability (HRV) of the user, among others.
The user's concentration and relaxation can be calculated from brain wave signals collected by an electroencephalograph. For example, concentration can roughly be regarded as the degree to which EEG alpha waves are suppressed, while relaxation corresponds to the appearance of alpha waves, especially active mid-frequency alpha waves. Concentration reflects how focused the user's attention is; relaxation mainly reflects the user's mental state. Both can be regarded as effective indices for objectively judging the user's functional state.
Heart rate variability is the variation in the differences between successive heartbeat cycles, that is, the variation in heartbeat speed. Its level can reflect the degree of stress the user is under. Heart rate variability data may be acquired through a sensor mounted on the user.
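The inter-beat variation just described can be sketched as follows. This is an illustrative sketch only: the patent does not specify which HRV measure is used, so two common time-domain measures (SDNN and RMSSD) stand in, and the RR-interval values are made up.

```python
import math

def hrv_sdnn(rr_intervals_ms):
    """Standard deviation of successive RR (inter-beat) intervals, in ms."""
    n = len(rr_intervals_ms)
    mean = sum(rr_intervals_ms) / n
    return math.sqrt(sum((x - mean) ** 2 for x in rr_intervals_ms) / n)

def hrv_rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [812, 790, 805, 830, 798, 815]  # invented RR intervals in milliseconds
sdnn, rmssd = hrv_sdnn(rr), hrv_rmssd(rr)
```

Lower values of either measure indicate less beat-to-beat variation, which, per the text above, can correspond to higher stress.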
The action data reflecting the user's action level may include the stability and/or consistency of the user's muscles.
Muscle stability reflects muscle strength: the higher the stability, the higher the aiming and shooting accuracy and the better the aiming result. Likewise, the higher the muscle consistency, the higher the accuracy and the better the aiming result.
For example, when the sport is archery, the action data may include one or more of: the stability and consistency of the deltoid muscle of the user's bow-holding arm, and the stability and consistency of the flexor muscles of that arm.
The above action data can be computed by an electromyograph from the electromyography (EMG) signals detected while the user performs the aiming task. EMG signals are bioelectric signals generated by muscle contraction; those collected on the skin surface are called surface EMG (sEMG) signals. The user's muscle stability and consistency data are then obtained from the detected signals.
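The patent does not define how stability and consistency are derived from the sEMG signal; the sketch below is one plausible reduction, using the coefficient of variation of the rectified amplitude, scaled into the 0-1 range this embodiment uses. The function names and formulas are assumptions, not the patent's method.

```python
import statistics

def stability(semg_samples):
    """Within-attempt score in (0, 1]: 1 / (1 + coefficient of variation)
    of the rectified sEMG amplitude; steadier signal -> score nearer 1."""
    amps = [abs(s) for s in semg_samples]
    mean = statistics.fmean(amps)
    cv = statistics.pstdev(amps) / mean if mean else 0.0
    return 1.0 / (1.0 + cv)

def consistency(attempts):
    """Across-attempt score: the same reduction applied to the mean
    rectified amplitude of each aiming attempt."""
    means = [statistics.fmean(abs(s) for s in a) for a in attempts]
    cv = statistics.pstdev(means) / statistics.fmean(means)
    return 1.0 / (1.0 + cv)
```

A perfectly steady trace scores 1.0; larger fluctuations push the score toward 0, matching the 0-1 convention described later in this section.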
In practical applications, one or more of the above types of training state data may be selected for the subsequent steps. One selection method is to run a correlation test between the user's training state data and the corresponding aiming results, and keep the training state data significantly correlated with the aiming result (for example, with a correlation coefficient greater than 0.09).
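The selection step just described can be sketched with a plain Pearson correlation test. The feature names and trial values below are invented; only the 0.09 threshold comes from the text.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def select_features(features, results, threshold=0.09):
    """Keep features whose |correlation| with the aiming result exceeds
    the threshold; `features` maps name -> per-trial values."""
    return [name for name, values in features.items()
            if abs(pearson(values, results)) > threshold]
```

In a real pipeline a significance test (e.g. a p-value) would accompany the raw coefficient; the threshold comparison alone mirrors the example given in the text.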
Furthermore, in this embodiment, the data representing the user's concentration, relaxation, heart rate variability, muscle stability, and muscle consistency may each be expressed as a number between 0 and 1, where a value closer to 1 is higher and a value closer to 0 is lower.
S102: and training a aiming result prediction model according to the training state data and the aiming result.
In this embodiment, the aiming result prediction model is trained by machine learning. The model's training input is the user's training state data and the corresponding aiming results.
During training, the static-eye fixation data reflecting the user's fixation ability, the mental state data reflecting the user's mental state, and the action data reflecting the user's action level can each be used to train an aiming result prediction model; that is, a corresponding model can be trained for each kind of training state data. The corresponding aiming results can then be predicted from each kind of the user's actual state data, revealing which of the user's state data is stronger and which is weaker, so that the user can be trained in a targeted manner.
For example, referring to fig. 2, three aiming result prediction models may be trained: a first, a second, and a third aiming result prediction model, where the first is trained on the user's static-eye fixation data, the second on the mental state data, and the third on the action data.
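Since the patent does not name a model family, a minimal sketch of step S102 can use a one-variable least-squares fit as a stand-in for each of the three aiming result prediction models of Fig. 2; all training numbers below are made up.

```python
def fit_linear(xs, ys):
    """Least-squares fit y ~ a*x + b; returns a predict function."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return lambda x: a * x + b

# One stand-in model per state-data category (invented training pairs of
# a 0-1 state score and an aiming result):
gaze_model = fit_linear([0.2, 0.5, 0.8], [6.0, 8.0, 9.5])    # fixation data
mental_model = fit_linear([0.3, 0.6, 0.9], [7.0, 8.0, 9.0])  # mental state
action_model = fit_linear([0.1, 0.5, 0.9], [5.0, 7.5, 9.8])  # action data
```

Each fitted function maps one category of state data to a predicted aiming result, mirroring the per-category structure of the three models in the figure.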
S103: actual state data is obtained when the user aims at the target.
The method of acquiring the actual state data when the user aims at the target is the same as that of acquiring the training state data in step S101, so it is not repeated here. Note that either all of the state data or only part of it may be acquired, depending on the specific sport, the instruments available on site, and the actual requirements.
S104: input the actual state data into the aiming result prediction model, and predict the aiming result corresponding to the actual state data.
After the aiming result prediction model has been trained, the actual state data collected while the user aims at the target is input into the model, and the corresponding aiming result is predicted.
The types of actual state data should match the types of training state data exactly. For example, if the training state data includes the amplitude of the user's micro eye jumps during the training aiming stage, the actual state data should likewise include that amplitude during the actual aiming stage.
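The type-matching requirement and the prediction step S104 can be sketched as follows; the feature names, weights, and bias are hypothetical, and the linear form merely stands in for the trained model.

```python
# Hypothetical trained model: feature names and coefficients are invented.
TRAINED_FEATURES = ("fixation_duration", "microsaccade_amplitude")
WEIGHTS = {"fixation_duration": 4.0, "microsaccade_amplitude": -3.0}
BIAS = 6.0

def predict_aiming_result(actual_state):
    """Reject state data whose feature types differ from the training
    features, then return the predicted aiming result."""
    if set(actual_state) != set(TRAINED_FEATURES):
        raise ValueError("actual state data must match the training features")
    return BIAS + sum(WEIGHTS[k] * v for k, v in actual_state.items())

score = predict_aiming_result(
    {"fixation_duration": 0.8, "microsaccade_amplitude": 0.2})
```

Passing a state dictionary with missing or extra feature types raises an error, enforcing the consistency rule stated above.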
According to this embodiment of the application, the aiming result prediction model is trained using the user's training state data and aiming results. Once training is complete, the corresponding aiming result can be predicted simply by inputting the user's actual state data while aiming at a target, thereby guiding the user to adjust the aiming action and state. In other words, the user can train aiming ability without completing the entire movement process, that is, without performing the final specific action or using specific articles; this improves training efficiency and saves training resources.
Taking archery as an example, the user can learn the likely aiming result merely by aiming at the target, without actually releasing the arrow. If the predicted result is low, the aiming action can be readjusted, so archery can be trained with higher efficiency. Meanwhile, since the arrow need not actually be shot, training can be completed with only one or a few arrows, saving training resources.
Similarly, in shooting training the user needs only a single gun with no bullets loaded: after completing the shooting action, the user learns the likely aiming result and can adjust accordingly, ultimately improving the aiming result.
Further, as described above, the aiming result prediction models may include a first aiming result prediction model, a second aiming result prediction model, and a third aiming result prediction model.
After the user's actual fixation data, actual mental state data, and actual action data during the aiming task are acquired, each is input into its corresponding aiming result prediction model to predict the corresponding aiming result. If the result predicted from the actual static-eye fixation data is good while the result predicted from the actual action data is low, the stability and/or consistency of the muscles the user uses in the aiming task can be trained in a targeted manner, improving the final aiming result.
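The per-category diagnosis described above can be sketched by comparing the three predictions; the predicted scores below are invented stand-ins for real model outputs.

```python
# Hypothetical predicted aiming results, one per state-data category:
predicted = {
    "static_eye_fixation": 9.2,  # from the first prediction model
    "mental_state": 8.7,         # from the second prediction model
    "action": 6.1,               # from the third prediction model
}

# The category with the lowest predicted result is the aspect to train
# in a targeted manner.
weakest = min(predicted, key=predicted.get)
```

Here the action category scores lowest, so muscle stability and/or consistency would be the training target.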
Training multiple aiming result prediction models of different categories therefore helps users judge their own strengths and weaknesses. The method can be used for training and testing whenever the aiming result needs to be improved or a weakness needs to be found, making it possible to locate weak points, find a good training method for achieving a better aiming result, and avoid detours.
Based on the aiming prediction method provided by the above embodiment, the embodiment of the application also provides an aiming prediction device, and the working principle of the aiming prediction device is described in detail below with reference to the accompanying drawings.
Referring to fig. 3, the drawing is a block diagram of an aiming prediction device according to an embodiment of the present application.
The aiming prediction device provided by the embodiment comprises:
an acquisition unit 301 for acquiring actual state data when a user aims at a target;
a prediction unit 302, configured to input the actual state data into an aiming result prediction model and predict the aiming result corresponding to the actual state data; the aiming result prediction model is trained on state data collected while the user aims at the target and on the aiming results the user actually obtained.
According to this embodiment of the application, the aiming result prediction model is trained using the user's training state data and aiming results. Once training is complete, the corresponding aiming result can be predicted simply by inputting the user's actual state data while aiming at a target, thereby guiding the user to adjust the aiming action and state. In other words, the user can train aiming ability without completing the entire movement process, which improves training efficiency and saves training resources.
Optionally, the actual state data includes one or more of the following:
static-eye fixation data reflecting the user's fixation ability in the static-eye state, mental state data reflecting the user's mental state, and action data reflecting the user's action level.
Optionally, the static-eye fixation data comprises one or more of:
the user's fixation duration in the static-eye state, and the amplitude, peak value, and frequency of the user's micro eye jumps (microsaccades).
Optionally, the mental state data includes one or more of the following:
concentration of the user, relaxation of the user, and heart rate variability of the user.
Optionally, the action data includes:
stability and/or consistency of the user's muscles.
Optionally, the aiming result prediction model includes one or more of the following:
a first aiming result prediction model, a second aiming result prediction model, and a third aiming result prediction model;
the actual state data corresponding to the first aiming result prediction model is static-eye fixation data reflecting the user's fixation ability in the static-eye state;
the actual state data corresponding to the second aiming result prediction model is mental state data reflecting the user's mental state;
and the actual state data corresponding to the third aiming result prediction model is action data reflecting the user's action level.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in this invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The above-described embodiments further describe the objects, technical solutions, and advantages of the present invention in detail; it should be understood that they are merely exemplary embodiments of the present invention.
The above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (12)

1. A method of aiming prediction, the method comprising:
acquiring actual state data when a user aims at a target;
inputting the actual state data into an aiming result prediction model, and predicting an aiming result corresponding to the actual state data; wherein the aiming result prediction model is obtained by training on state data collected while the user aims at the target and on aiming results actually achieved by the user.
2. The method of claim 1, wherein the actual state data comprises one or more of:
static eye gaze data reflecting the user's gaze ability in a static eye state, mental state data reflecting the user's mental state, and action data reflecting the user's action level.
3. The method of claim 2, wherein the static eye gaze data comprises one or more of:
a gaze duration of the user in the static eye state, a micro-saccade amplitude of the user, a micro-saccade peak value of the user, and a micro-saccade frequency of the user.
4. The method of claim 2, wherein the mental state data comprises one or more of:
concentration of the user, relaxation of the user, and heart rate variability of the user.
5. The method of claim 2, wherein the action data comprises:
stability and/or consistency of the user's muscles.
6. The method of any of claims 2-5, wherein the aiming result prediction model comprises one or more of:
the system comprises a first aiming result prediction model, a second aiming result prediction model and a third aiming result prediction model;
the actual state data corresponding to the first aiming result prediction model is the static eye gaze data reflecting the user's gaze ability in the static eye state;
the actual state data corresponding to the second aiming result prediction model is the mental state data reflecting the user's mental state; and
the actual state data corresponding to the third aiming result prediction model is the action data reflecting the user's action level.
7. An aiming prediction apparatus, the apparatus comprising:
an acquisition unit configured to acquire actual state data when a user aims at a target;
a prediction unit configured to input the actual state data into an aiming result prediction model and predict an aiming result corresponding to the actual state data; wherein the aiming result prediction model is obtained by training on state data collected while the user aims at the target and on aiming results actually achieved by the user.
8. The apparatus of claim 7, wherein the actual state data comprises one or more of:
static eye gaze data reflecting the user's gaze ability in a static eye state, mental state data reflecting the user's mental state, and action data reflecting the user's action level.
9. The apparatus of claim 8, wherein the static eye gaze data comprises one or more of:
a gaze duration of the user in the static eye state, a micro-saccade amplitude of the user, a micro-saccade peak value of the user, and a micro-saccade frequency of the user.
10. The apparatus of claim 8, wherein the mental state data comprises one or more of:
concentration of the user, relaxation of the user, and heart rate variability of the user.
11. The apparatus of claim 8, wherein the action data comprises:
stability and/or consistency of the user's muscles.
12. The apparatus of any of claims 8-11, wherein the aiming result prediction model comprises one or more of:
the system comprises a first aiming result prediction model, a second aiming result prediction model and a third aiming result prediction model;
the actual state data corresponding to the first aiming result prediction model is the static eye gaze data reflecting the user's gaze ability in the static eye state;
the actual state data corresponding to the second aiming result prediction model is the mental state data reflecting the user's mental state; and
the actual state data corresponding to the third aiming result prediction model is the action data reflecting the user's action level.
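As an illustrative sketch only, not the patented implementation: the claims above describe training a prediction model on a user's state data and actually achieved aiming results, then using the trained model to predict the aiming result for new state data. The snippet below instantiates that idea with a simple linear model over four hypothetical numeric features (gaze duration, micro-saccade amplitude, concentration, muscle stability); all feature names, the linear form, and the synthetic training data are assumptions, not part of the disclosure.

```python
import numpy as np

# Hypothetical state-data features per aiming attempt (all assumed):
# [gaze duration (s), micro-saccade amplitude (deg),
#  concentration (0-1), muscle stability (0-1)]
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 4))

# Synthetic "actually achieved" aiming scores used as training labels:
# steadier gaze, higher focus, and higher stability help; larger
# micro-saccades hurt.
true_w = np.array([2.0, -1.5, 3.0, 2.5])
y = X @ true_w + rng.normal(0.0, 0.05, size=200)

# Train the aiming result prediction model by ordinary least squares,
# with a bias column appended to the feature matrix.
X1 = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(X1, y, rcond=None)

def predict_aiming_result(state: np.ndarray) -> float:
    """Predict an aiming score from one attempt's actual state data."""
    return float(np.append(state, 1.0) @ w)

# Score a new attempt: long gaze, small micro-saccades, high focus.
score = predict_aiming_result(np.array([0.8, 0.1, 0.9, 0.85]))
```

Claims 6 and 12 further contemplate separate first, second, and third prediction models for the gaze, mental-state, and action feature groups; the same fitting step could simply be repeated once per feature group.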
CN201910867235.3A 2019-09-12 2019-09-12 Aiming prediction method and device Pending CN112489804A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910867235.3A CN112489804A (en) 2019-09-12 2019-09-12 Aiming prediction method and device


Publications (1)

Publication Number Publication Date
CN112489804A true CN112489804A (en) 2021-03-12

Family

ID=74920794

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910867235.3A Pending CN112489804A (en) 2019-09-12 2019-09-12 Aiming prediction method and device

Country Status (1)

Country Link
CN (1) CN112489804A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105852873A (en) * 2016-03-25 2016-08-17 惠州Tcl移动通信有限公司 Motion data analysis method, electronic equipment and terminal
CN107014378A (en) * 2017-05-22 2017-08-04 中国科学技术大学 A kind of eye tracking aims at control system and method
CN108491074A (en) * 2018-03-09 2018-09-04 广东欧珀移动通信有限公司 Electronic device, exercising support method and Related product
CN109828663A (en) * 2019-01-14 2019-05-31 北京七鑫易维信息技术有限公司 Determination method and device, the operating method of run-home object of aiming area
CN109925678A (en) * 2019-03-01 2019-06-25 北京七鑫易维信息技术有限公司 A kind of training method based on eye movement tracer technique, training device and equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI Yuzhang: "Research on the Specialized Cognitive Eye-Movement Characteristics of Shooting Athletes" (《射击运动员专项认知眼动特征的研究》), Shanghai: Fudan University Press, pages: 233 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114756137A (en) * 2022-06-15 2022-07-15 深圳市心流科技有限公司 Training mode adjusting method and device for electromyographic signals and electroencephalographic signals
CN114756137B (en) * 2022-06-15 2022-09-27 深圳市心流科技有限公司 Training mode adjusting method and device for electromyographic signals and electroencephalographic signals

Similar Documents

Publication Publication Date Title
Mullineaux et al. Coordination-variability and kinematics of misses versus swishes of basketball free throws
Wulf et al. Increased jump height with an external focus due to enhanced lower extremity joint kinetics
Tang et al. Postural tremor and control of the upper limb in air pistol shooters
US20190344121A1 (en) Exercise training adaptation using physiological data
Vidal et al. Investigating the constrained action hypothesis: a movement coordination and coordination variability approach
Ciacci et al. Sprint start kinematics during competition in elite and world-class male and female sprinters
US20190343459A1 (en) Fatigue measurement in a sensor equipped garment
Guan et al. Biomechanical insights into the determinants of speed in the fencing lunge
WO2018214530A1 (en) Method and system for competitive state assessment of athletes
Chiu et al. Proximal-to-distal sequencing in vertical jumping with and without arm swing
CN108209912B (en) Electromyographic signal acquisition method and device
Rinaldi et al. Biomechanical characterization of the Junzuki karate punch: Indexes of performance
Vendrame et al. Performance assessment in archery: a systematic review
Kawabata et al. Acceleration patterns in the lower and upper trunk during running
Jurkojć et al. Mathematical modelling as a tool to assessment of loads in volleyball player’s shoulder joint during spike
Coventry et al. Kinematic effects of a short-term fatigue protocol on punt-kicking performance
Nishioka et al. Influence of strength level on performance enhancement using resistance priming
Plummer et al. The effects of localised fatigue on upper extremity jump shot kinematics and kinetics in team handball
Chadefaux et al. Active tuning of stroke-induced vibrations by tennis players
CN112489804A (en) Aiming prediction method and device
Vincze et al. Quiet Eye as a mechanism for table tennis performance under fatigue and complexity
Rigozzi et al. Application of wearable technologies for player motion analysis in racket sports: A systematic review
Kuhtz-Buschbeck et al. Muscle activity in throwing with the dominant and non-dominant arm
Seeley et al. A comparison of muscle activations during traditional and abbreviated tennis serves
JP2018000537A (en) Exercise supporting device and exercise supporting method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination