CN117618723A - Training system for improving children's attention - Google Patents

Training system for improving children's attention

Info

Publication number
CN117618723A
CN117618723A (application CN202311654917.9A)
Authority
CN
China
Prior art keywords
game
children
eye
eye movement
attention
Prior art date
Legal status
Pending
Application number
CN202311654917.9A
Other languages
Chinese (zh)
Inventor
黄河
李晶
徐仁彬
柏志建
Current Assignee
Polk Medical Technology Shanghai Co ltd
Original Assignee
Polk Medical Technology Shanghai Co ltd
Priority date
Filing date
Publication date
Application filed by Polk Medical Technology Shanghai Co ltd filed Critical Polk Medical Technology Shanghai Co ltd
Priority to CN202311654917.9A
Publication of CN117618723A
Legal status: Pending

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a training system for improving the attention of children, relating to the technical field of eye tracking. The system combines the visual, auditory and tactile sense channels with a game and trains children through game scenes and game interaction. An eye tracking device and infrared technology are used to track eye movement and to collect the eye tracking data generated by the child during game interaction; gaze points are extracted by judging whether the eyeball stays at one position. A gaze point density map and an eye movement thermodynamic diagram are generated from the collected eye tracking data and analyzed, an attention concentration area is determined from the analysis result, and the game content is segmented and adjusted through visual guidance and feedback so as to optimize the game content.

Description

Training system for improving children's attention
Technical Field
The present invention relates to the field of eye tracking technology, and more particularly to a training system for improving the attention of children.
Background
Attention deficit hyperactivity disorder (ADHD) is a common neurodevelopmental disorder that usually first appears during childhood. It is mainly manifested by inattention, hyperactivity and impulsive behaviour, and its exact cause is not known. At present there are about 23 million patients with attention deficit hyperactivity disorder aged 4-18 in China; symptoms persist into puberty in about 70% of school-age children and into adulthood in 30%-50%. As a neurological disorder, early recognition and early intervention of ADHD in children are important.
Disclosure of Invention
In order to overcome the above-mentioned drawbacks of the prior art, embodiments of the present invention provide a training system for improving children's attention, which records the eye movement trajectories of children while they watch visual stimuli in order to infer their attention distribution and distraction, so as to solve the problems presented in the background art.
In order to achieve the above purpose, the present invention provides the technical scheme of a training system for improving children's attention, which comprises a multi-channel game interaction module, an eye tracking data acquisition module, an eye tracking data analysis module and a game content optimization module;
a multi-channel game interaction module: combining multiple sense channels of vision, hearing and touch with a game, and training the child through game scenes and interaction with the game;
eye movement tracking data acquisition module: eye movement is tracked through an eye movement tracking device and infrared technology, the eye movement tracking data generated by the children during game interaction are acquired, and the fixation point is extracted;
eye movement tracking data analysis module: generating a fixation point density map and an eye movement thermodynamic diagram for analysis according to the acquired eye movement tracking data;
a game content optimizing module: and determining a concentration area according to the analysis result of the eye tracking data, segmenting and adjusting the game content by visual guidance and feedback, and optimizing the game content.
In a preferred embodiment, the multi-channel game interaction module trains children through a multi-channel game that attracts their attention and encourages active participation; the multi-channel game combines the visual, auditory and tactile sense channels and, through rich game scenes and interactions with the game, stimulates the children's interest and promotes concentration of attention, and specifically comprises the following steps:
step A1, visual stimulation: colorful and vivid game scenes are designed to attract the children's visual attention, and lively animation elements, including lovable animals, interesting characters and jumping bubbles, are added so that these elements move about in the scene and attract the children's eyes;
step A2, auditory stimulation: sound effects are used to arouse the children's auditory attention; a story line with sound effects is added to the game to attract the children's attention through narration and sound design; a sound localization technique makes game elements emit sound, guiding the children to locate and find the elements by hearing and developing their auditory observation and localization abilities;
the sound stereotactic technology is a localization technology which utilizes the characteristic of sound wave propagation in space to realize the localization of sound source position, and is based on sound arrival time difference and sound arrival angle difference, and the specific calculation formula is as follows:
wherein Δt represents the sound arrival time difference, D represents the distance between the two ears, c represents the propagation velocity of sound in air, θ represents the sound source direction, Δx represents the phase difference of sound between the two radios, and D represents the distance between the two radios;
step A3, tactile stimulation: a vibration device is added to the game handle and, when special conditions are met, vibration provides tactile feedback; somatosensory equipment is used to guide the children to perform physical actions; the tactile stimulation enhances the immersion and interactivity of the game and improves the children's attention and engagement.
In a preferred embodiment, the eye movement tracking data acquisition module has the eye tracking device worn at the child's eye position and uses infrared technology to track eye movement while the child takes part in the interactive activities of the game; eye movement tracking data are acquired, and the fixation point is extracted by judging whether the eye stays at one position, with the following specific steps:
step B1, data acquisition: the apparatus records the child's eye movement data in a time-series manner, the time series being (t = 1, 2, 3, ..., n); at time point t the sample is represented as (x_t, y_t, r_t, d_t), where x_t and y_t represent the two-dimensional coordinate position of the eyeball on the screen, r_t represents the pupil size, and d_t represents the duration of the gaze point;
step B2, extracting the fixation point: after the eye movement tracking data are collected, it is determined whether the eyeball remains at one position, and the gaze point position coordinates and duration are extracted; with the eyeball position at time t in the eye movement data denoted (x_t, y_t) and its speed v_t, a rest time threshold T_s and a speed threshold V_s are used to determine whether the eyeball remains at one position, with the following specific steps:
step 101, calculating the displacement distance of the eyeball between two adjacent moments, using the formula:

p_t = √[ (x_{t+1} − x_t)² + (y_{t+1} − y_t)² ]

where p_t represents the displacement distance of the eyeball between two adjacent moments, (x_t, y_t) is the eyeball position at time t, and (x_{t+1}, y_{t+1}) is the eyeball position at time t+1;
step 102, judging the rest time threshold condition: when the duration of the eyeball at position (x_t, y_t) exceeds the rest time threshold T_s, the following condition is satisfied:

Σ p_i < P_s   (summed over the samples within the dwell interval)

where p_i represents the displacement distance of the eyeball at a given moment within the interval, and P_s is the displacement distance threshold corresponding to the rest time threshold;
step 103, judging the speed threshold condition: when the speed v_t of the eyeball at position (x_t, y_t) at time t is below the speed threshold V_s, the condition v_t < V_s is satisfied,

where v_t represents the eyeball speed and V_s represents the speed threshold;
and when the duration of the eyeball at one position exceeds the resting time threshold and the speed is lower than the speed threshold, judging that the eyeball stays at the position, and extracting the stay position and the duration of the eyeball.
In a preferred embodiment, the eye movement tracking data analysis module analyzes the eye movement tracking data generated by the child during game interaction using artificial intelligence techniques, generates a gaze point density map and an eye movement thermodynamic diagram from the data, and displays the child's attention distribution during the game, and specifically comprises the following steps:
step C1, gaze point density map: the screen area is divided into grid cells, the number of gaze points falling in each cell is counted, and the density distribution of the gaze points on the screen is estimated with a Gaussian kernel density, the specific calculation formula being:

D(l, h) = Σ_{i=1}^{n} d_i · exp( −[(l − x_i)² + (h − y_i)²] / (2σ²) )

where (l, h) denotes the position of the grid cell, D(l, h) denotes the density value at position (l, h), n is the number of gaze points falling within the grid, (x_i, y_i) is the position coordinate of the i-th gaze point, d_i is the duration of each gaze point, and σ is the standard deviation of the Gaussian kernel;
step C2, thermodynamic diagram analysis: generating an eye thermodynamic diagram by analyzing and image processing eye movement data of a child in a game, wherein the eye thermodynamic diagram is used for displaying a concentration area of the child in the game, the gazing frequencies of different areas are represented by using color shades, and the eye thermodynamic diagram is generated by gazing frequencies and color mapping, and the specific steps comprise:
in step 201, the gaze frequency is used to represent the degree to which the eyeball gazes at a certain area, and the specific calculation formula is as follows:

F(l, h) = n / N

where F(l, h) represents the gaze frequency at the location (l, h), n represents the number of gaze points falling within the grid cell, and N represents the total number of gaze points;
step 202, color mapping: according to the magnitude of the gaze frequency, the gaze frequency is mapped to different color shades; a linear mapping function maps the gaze frequency to gray values between 0 and 255, with the specific calculation formula:

Gray = 255 × (F − F_min) / (F_max − F_min)

where Gray represents the gray value, F represents the gaze frequency, F_min represents the minimum value of the gaze frequency, and F_max represents the maximum value of the gaze frequency;
step 203, image generation: according to the gazing frequency and the color mapping, the gazing frequency of each gazing point is mapped into corresponding colors, pixel points are drawn at corresponding positions, all the gazing points are drawn on one image, and an eye thermodynamic diagram is generated.
In a preferred embodiment, the game content optimizing module obtains the attention distribution and the attention concentration of the child in the game interaction process based on the analysis result of the eye tracking data, optimizes the game content, and improves the attention of the child, and specifically comprises the following steps:
step S1, determining a concentration area: determining the attention focusing area of the child in the game through the gaze point density map and the eye thermodynamic diagram, and placing game key elements in the key areas so as to guide the attention of the child;
step S2, visual guidance and feedback: according to the analysis results of the scan path and the eye thermodynamic diagram, the child's attention in the game is guided; visual guiding elements, including animation effects and color flickering, are added in key areas to focus the child's attention on the correct place, attract the child's eyes and provide timely visual feedback;

the scan path is represented by connecting the position coordinates of adjacent gaze points; the distances between adjacent gaze points are calculated and accumulated to obtain the total length of the scan path, the specific calculation formula being:

M = Σ_{i=1}^{n−1} √[ (x_{i+1} − x_i)² + (y_{i+1} − y_i)² ]

where M represents the total length of the scan path, n is the number of gaze points, (x_i, y_i) is the position coordinate of the i-th gaze point, and (x_{i+1}, y_{i+1}) is the position coordinate of the (i+1)-th gaze point;
step S3, segmentation and difficulty adjustment: according to the analysis result of the eye movement data, the concentration time and distraction time of the child in the game are obtained; the game content is divided into short segments of moderate difficulty, and a reward is given after each segment is completed, so as to increase the child's motivation and interest.
The beneficial effects of the invention are as follows: the visual, auditory and tactile sense channels are combined with a game, and the child is trained through game scenes and interaction with the game; an eye tracking device and infrared technology track eye movement and collect the eye tracking data generated by the child during game interaction; the gaze point is extracted by judging whether the eyeball stays at one position; a gaze point density map and an eye movement thermodynamic diagram are generated from the collected eye tracking data and analyzed; an attention concentration area is determined based on the analysis result, and the game content is segmented, adjusted and optimized through visual guidance and feedback, attracting the child's gaze and improving the child's attention.
Drawings
Fig. 1 is a block diagram of the structure of the present invention.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
In the description of the present application, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the description of the present application, the term "for example" is used to mean "serving as an example, instance, or illustration. Any embodiment described herein as "for example" is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is presented to enable any person skilled in the art to make and use the invention. In the following description, details are set forth for purposes of explanation. It will be apparent to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known structures and processes have not been described in detail so as not to obscure the description of the invention with unnecessary detail. Thus, the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
Example 1
The embodiment provides a training system for improving the attention of children, which is shown in fig. 1, and specifically comprises a multi-channel game interaction module, an eye tracking data acquisition module, an eye tracking data analysis module and a game content optimization module;
a multi-channel game interaction module: combining multiple sense channels of vision, hearing and touch with a game, and training the child through game scenes and interaction with the game;
eye movement tracking data acquisition module: eye movement is tracked through an eye movement tracking device and infrared technology, the eye movement tracking data generated by the children during game interaction are acquired, and the fixation point is extracted;
eye movement tracking data analysis module: generating a fixation point density map and an eye movement thermodynamic diagram for analysis according to the acquired eye movement tracking data;
a game content optimizing module: and determining a concentration area according to the analysis result of the eye tracking data, segmenting and adjusting the game content by visual guidance and feedback, and optimizing the game content.
In this embodiment, the multi-channel game interaction module is described in detail: it trains children through a multi-channel game that attracts their attention and encourages active participation; the multi-channel game combines the visual, auditory and tactile sense channels and, through rich game scenes and interactions with the game, stimulates the children's interest and promotes concentration of attention, and specifically comprises the following steps:
step A1, visual stimulation: colorful and vivid game scenes are designed to attract the children's visual attention, and lively animation elements, including lovable animals, interesting characters and jumping bubbles, are added so that these elements move about in the scene and attract the children's eyes;
step A2, auditory stimulation: sound effects are used to arouse the children's auditory attention; a story line with sound effects is added to the game to attract the children's attention through narration and sound design; a sound localization technique makes game elements emit sound, guiding the children to locate and find the elements by hearing and developing their auditory observation and localization abilities;
the sound localization technique exploits the propagation of sound waves in space to determine the position of a sound source from the sound arrival time difference and the sound arrival angle difference, with the following calculation formulas:

Δt = D·sin θ / c,  Δx = d·sin θ

where Δt represents the sound arrival time difference, D represents the distance between the two ears, c represents the propagation velocity of sound in air, θ represents the sound source direction, Δx represents the path difference of the sound between the two receivers, and d represents the distance between the two receivers (a numerical sketch of this relation is given after step A3 below);
step A3, tactile stimulation: a vibration device is added to the game handle and, when special conditions are met, vibration provides tactile feedback; somatosensory equipment is used to guide the children to perform physical actions; the tactile stimulation enhances the immersion and interactivity of the game and improves the children's attention and engagement.
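The following Python sketch (not part of the original disclosure) illustrates the interaural-time-difference relation Δt = D·sin θ / c used in step A2 for sound-source localization; the ear spacing and the speed of sound are illustrative assumed values.

```python
import math

SPEED_OF_SOUND = 343.0  # assumed propagation velocity of sound in air, m/s


def arrival_time_difference(theta_deg: float, ear_distance: float = 0.18) -> float:
    """Time difference (s) between the two ears for a source at angle theta_deg."""
    return ear_distance * math.sin(math.radians(theta_deg)) / SPEED_OF_SOUND


def source_direction(delta_t: float, ear_distance: float = 0.18) -> float:
    """Invert the relation: estimate the source angle (degrees) from a measured delta_t."""
    s = max(-1.0, min(1.0, delta_t * SPEED_OF_SOUND / ear_distance))
    return math.degrees(math.asin(s))


if __name__ == "__main__":
    dt = arrival_time_difference(30.0)  # a source 30 degrees to one side
    print(f"dt = {dt * 1e6:.1f} us, recovered angle = {source_direction(dt):.1f} deg")
```

In a game engine this cue would normally be produced by the audio middleware; the sketch only shows the geometry behind it.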
In this embodiment, the eye movement tracking data acquisition module is described in detail: the eye tracking device is worn at the child's eye position and infrared technology is used to track eye movement while the child takes part in the interactive activities of the game; eye movement tracking data are acquired, and the fixation point is extracted by judging whether the eye stays at one position, with the following specific steps:
step B1, data acquisition: the apparatus records the child's eye movement data in a time-series manner, the time series being (t = 1, 2, 3, ..., n); at time point t the sample is represented as (x_t, y_t, r_t, d_t), where x_t and y_t represent the two-dimensional coordinate position of the eyeball on the screen, r_t represents the pupil size, and d_t represents the duration of the gaze point;
step B2, extracting the fixation point: after the eye movement tracking data are collected, it is determined whether the eyeball remains at one position, and the gaze point position coordinates and duration are extracted; with the eyeball position at time t in the eye movement data denoted (x_t, y_t) and its speed v_t, a rest time threshold T_s and a speed threshold V_s are used to determine whether the eyeball remains at one position, with the following specific steps (a code sketch is given after step 103 below):
step 101, calculating the displacement distance of the eyeball between two adjacent moments, using the formula:

p_t = √[ (x_{t+1} − x_t)² + (y_{t+1} − y_t)² ]

where p_t represents the displacement distance of the eyeball between two adjacent moments, (x_t, y_t) is the eyeball position at time t, and (x_{t+1}, y_{t+1}) is the eyeball position at time t+1;
step 102, judging the rest time threshold condition: when the duration of the eyeball at position (x_t, y_t) exceeds the rest time threshold T_s, the following condition is satisfied:

Σ p_i < P_s   (summed over the samples within the dwell interval)

where p_i represents the displacement distance of the eyeball at a given moment within the interval, and P_s is the displacement distance threshold corresponding to the rest time threshold;
step 103, judging the speed threshold condition: when the speed v_t of the eyeball at position (x_t, y_t) at time t is below the speed threshold V_s, the condition v_t < V_s is satisfied;
And when the duration of the eyeball at one position exceeds the resting time threshold and the speed is lower than the speed threshold, judging that the eyeball stays at the position, and extracting the stay position and the duration of the eyeball.
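A minimal Python sketch of the gaze-point extraction of steps B1-B2 is given below. The sample record, the timestamps and the threshold values are illustrative assumptions, and the per-sample displacement test is a simplified stand-in for the cumulative-displacement condition of step 102.

```python
import math
from dataclasses import dataclass


@dataclass
class Sample:
    """One eye-tracking sample: timestamp (s), screen coordinates (px), pupil size."""
    t: float
    x: float
    y: float
    r: float


def extract_fixations(samples, p_s=25.0, v_s=800.0, t_s=0.10):
    """Return (x, y, duration) for every dwell longer than t_s seconds.

    p_s: displacement threshold (px), v_s: speed threshold (px/s),
    t_s: rest-time threshold (s) - illustrative values, not from the patent.
    """
    fixations, group = [], []

    def close(group):
        duration = group[-1].t - group[0].t
        if duration >= t_s:
            cx = sum(s.x for s in group) / len(group)
            cy = sum(s.y for s in group) / len(group)
            fixations.append((cx, cy, duration))

    for prev, cur in zip(samples, samples[1:]):
        dt = cur.t - prev.t
        p = math.hypot(cur.x - prev.x, cur.y - prev.y)  # displacement p_t
        v = p / dt if dt > 0 else float("inf")          # speed v_t
        if p < p_s and v < v_s:                          # eye considered resting
            if not group:
                group.append(prev)
            group.append(cur)
        else:                                            # saccade: close the current dwell
            if group:
                close(group)
            group = []
    if group:
        close(group)
    return fixations
```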
In this embodiment, the eye movement tracking data analysis module is described in detail: it uses artificial intelligence techniques to analyze the eye movement tracking data generated by the child during game interaction, generates a gaze point density map and an eye movement thermodynamic diagram from the data, and displays the child's attention distribution during the game, with the following specific steps (a code sketch is given after step 203 below):
step C1, gaze point density map: the screen area is divided into grid cells, the number of gaze points falling in each cell is counted, and the density distribution of the gaze points on the screen is estimated with a Gaussian kernel density, the specific calculation formula being:

D(l, h) = Σ_{i=1}^{n} d_i · exp( −[(l − x_i)² + (h − y_i)²] / (2σ²) )

where (l, h) denotes the position of the grid cell, D(l, h) denotes the density value at position (l, h), n is the number of gaze points falling within the grid, (x_i, y_i) is the position coordinate of the i-th gaze point, d_i is the duration of each gaze point, and σ is the standard deviation of the Gaussian kernel;
step C2, thermodynamic diagram analysis: generating an eye thermodynamic diagram by analyzing and image processing eye movement data of a child in a game, wherein the eye thermodynamic diagram is used for displaying a concentration area of the child in the game, the gazing frequencies of different areas are represented by using color shades, and the eye thermodynamic diagram is generated by gazing frequencies and color mapping, and the specific steps comprise:
in step 201, the gaze frequency is used to represent the degree to which the eyeball gazes at a certain area, and the specific calculation formula is as follows:

F(l, h) = n / N

where F(l, h) represents the gaze frequency at the location (l, h), n represents the number of gaze points falling within the grid cell, and N represents the total number of gaze points;
step 202, color mapping: according to the magnitude of the gaze frequency, the gaze frequency is mapped to different color shades; a linear mapping function maps the gaze frequency to gray values between 0 and 255, with the specific calculation formula:

Gray = 255 × (F − F_min) / (F_max − F_min)

where Gray represents the gray value, F represents the gaze frequency, F_min represents the minimum value of the gaze frequency, and F_max represents the maximum value of the gaze frequency;
step 203, image generation: according to the gazing frequency and the color mapping, the gazing frequency of each gazing point is mapped into corresponding colors, pixel points are drawn at corresponding positions, all the gazing points are drawn on one image, and an eye thermodynamic diagram is generated.
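The sketch below, under stated assumptions (an illustrative 64×36 grid over a 1920×1080 screen and an arbitrary kernel width), shows how the duration-weighted Gaussian density map of step C1 and the gray-value heat map of steps 201-203 can be computed from the extracted gaze points; it is not taken from the patent text.

```python
import numpy as np


def density_map(fixations, screen=(1920, 1080), grid=(64, 36), sigma=1.5):
    """Duration-weighted Gaussian kernel density over grid cells (step C1).

    fixations: iterable of (x, y, duration) in screen pixels and seconds.
    """
    cols, rows = grid
    dens = np.zeros((rows, cols))
    ls, hs = np.meshgrid(np.arange(cols), np.arange(rows))  # cell coordinates (l, h)
    for x, y, d in fixations:
        gx, gy = x * cols / screen[0], y * rows / screen[1]  # gaze point in grid units
        dens += d * np.exp(-((ls - gx) ** 2 + (hs - gy) ** 2) / (2 * sigma ** 2))
    return dens


def gray_heatmap(fixations, screen=(1920, 1080), grid=(64, 36)):
    """Gaze frequency per cell mapped linearly to gray values 0-255 (steps 201-202)."""
    cols, rows = grid
    counts = np.zeros((rows, cols))
    for x, y, _ in fixations:
        c = min(int(x * cols / screen[0]), cols - 1)
        r = min(int(y * rows / screen[1]), rows - 1)
        counts[r, c] += 1
    freq = counts / max(counts.sum(), 1.0)                   # F(l, h) = n / N
    f_min, f_max = freq.min(), freq.max()
    if f_max == f_min:
        return np.zeros_like(freq, dtype=np.uint8)
    return (255 * (freq - f_min) / (f_max - f_min)).astype(np.uint8)
```

The gray array returned by gray_heatmap can be written out with any imaging library to obtain the eye movement thermodynamic diagram of step 203.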
In this embodiment, the game content optimization module is described in detail: based on the eye tracking data analysis results, it obtains the attention distribution and concentration of the child during game interaction, optimizes the game content and improves the child's attention, with the following specific steps:
step S1, determining a concentration area: determining the attention focusing area of the child in the game through the gaze point density map and the eye thermodynamic diagram, and placing game key elements in the key areas so as to guide the attention of the child;
step S2, visual guidance and feedback: according to the analysis results of the scan path and the eye thermodynamic diagram, the child's attention in the game is guided; visual guiding elements, including animation effects and color flickering, are added in key areas to focus the child's attention on the correct place, attract the child's eyes and provide timely visual feedback;

the scan path is represented by connecting the position coordinates of adjacent gaze points; the distances between adjacent gaze points are calculated and accumulated to obtain the total length of the scan path, the specific calculation formula being:

M = Σ_{i=1}^{n−1} √[ (x_{i+1} − x_i)² + (y_{i+1} − y_i)² ]

where M represents the total length of the scan path, n is the number of gaze points, (x_i, y_i) is the position coordinate of the i-th gaze point, and (x_{i+1}, y_{i+1}) is the position coordinate of the (i+1)-th gaze point (a short sketch of this computation follows step S3 below);
step S3, segmentation and difficulty adjustment: according to the analysis result of the eye movement data, the game content is divided into short segments whose difficulty is adjusted, and a reward is given after each segment is completed, so as to increase the child's motivation and interest.
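A short sketch of the scan-path length of step S2, using the same (x, y, duration) gaze-point format assumed in the earlier sketches:

```python
import math


def scan_path_length(fixations) -> float:
    """Total length M of the path connecting successive gaze points (step S2)."""
    total = 0.0
    for (x1, y1, *_), (x2, y2, *_) in zip(fixations, fixations[1:]):
        total += math.hypot(x2 - x1, y2 - y1)
    return total


# Example: two 3-4-5 steps give a total path length of 10.
print(scan_path_length([(0, 0, 0.2), (3, 4, 0.3), (6, 8, 0.1)]))
```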
In the foregoing embodiments, the descriptions of the embodiments are focused on, and for those portions of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. A system for improving attention training in children, characterized by: the system comprises a multichannel game interaction module, an eye tracking data acquisition module, an eye tracking data analysis module and a game content optimization module;
a multi-channel game interaction module: combining multiple sense channels of vision, hearing and touch with a game, and training the child through game scenes and interaction with the game;
eye movement tracking data acquisition module: eye movement is tracked through an eye movement tracking device and infrared technology, the eye movement tracking data generated by the children during game interaction are acquired, and the fixation point is extracted;
eye movement tracking data analysis module: generating a fixation point density map and an eye movement thermodynamic diagram for analysis according to the acquired eye movement tracking data;
a game content optimizing module: and determining a concentration area according to the analysis result of the eye tracking data, segmenting and adjusting the game content by visual guidance and feedback, and optimizing the game content.
2. A system for improving attention training of children as in claim 1, wherein: the multi-channel game interaction module trains children through a multi-channel game, attracting the children's attention and encouraging their active participation; the multi-channel game combines the visual, auditory and tactile sense channels and, through game scenes and interactions with the game, stimulates the children's interest and promotes concentration of attention, with the following specific steps:
step A1, visual stimulation: designing a game scene to attract the visual attention of children, adding some animation elements, and attracting the eyeballs of the children;
step A2, auditory stimulation: sound effects are used to arouse the children's auditory attention; narration and sound design in the game, together with an added story line, attract the children's attention; a sound localization technique makes game elements emit sound, guiding the children to locate and find the elements by hearing and developing their auditory observation and localization abilities;
step A3, touch stimulation: a vibration device is added on the game handle, when special conditions are met, the tactile feedback is provided through vibration, the body feeling equipment is utilized to guide children to conduct body actions, and the immersion and interactivity of the game are enhanced through tactile stimulation.
3. A system for improving attention training of children as claimed in claim 2, wherein: the sound localization technique exploits the propagation of sound waves in space to determine the sound source position from the sound arrival time difference and the sound arrival angle difference, with the following calculation formulas:

Δt = D·sin θ / c,  Δx = d·sin θ

where Δt represents the sound arrival time difference, D represents the distance between the two ears, c represents the propagation velocity of sound in air, θ represents the sound source direction, Δx represents the path difference of the sound between the two receivers, and d represents the distance between the two receivers.
4. A system for improving attention training of children as in claim 1, wherein: the eye movement tracking data acquisition module has the eye tracking device worn at the child's eye position, uses infrared technology to track eye movement, acquires eye movement tracking data, judges whether the eye stays at one position, and extracts the fixation point.
5. A system for improving attention training of children as in claim 4, wherein: the eye movement tracking data are acquired and the fixation point is extracted; the eye movement data of the child are recorded in a time-series manner through the eye movement tracking device, the time series being (t = 1, 2, 3, ..., n); at time point t the sample is represented as (x_t, y_t, r_t, d_t), where x_t and y_t represent the two-dimensional coordinate position of the eyeball on the screen, r_t represents the pupil size, and d_t represents the duration of the gaze point.
6. A system for improving attention training of children as in claim 5, wherein: the gaze point extraction step comprises: after the eye movement tracking data are collected, determining whether the eyeball remains at one position and extracting the gaze point position coordinates and duration; with the eyeball position at time t in the eye movement data denoted (x_t, y_t) and its speed v_t, a rest time threshold T_s and a speed threshold V_s are used to judge whether the eyeball stays at one position, with the following specific steps:
step 101, calculating the displacement distance of the eyeball between two adjacent moments, using the formula:

p_t = √[ (x_{t+1} − x_t)² + (y_{t+1} − y_t)² ]

where p_t represents the displacement distance of the eyeball between two adjacent moments, (x_t, y_t) is the eyeball position at time t, and (x_{t+1}, y_{t+1}) is the eyeball position at time t+1;
step 102, judging the rest time threshold condition: when the duration of the eyeball at position (x_t, y_t) exceeds the rest time threshold T_s, the following condition is satisfied:

Σ p_i < P_s   (summed over the samples within the dwell interval)

where p_i represents the displacement distance of the eyeball at a given moment within the interval, and P_s is the displacement distance threshold corresponding to the rest time threshold;
step 103, judging the speed threshold condition: when the speed v_t of the eyeball at position (x_t, y_t) at time t is below the speed threshold V_s, the condition v_t < V_s is satisfied,

where v_t represents the eyeball speed and V_s represents the speed threshold;
and when the duration of the eyeball at one position exceeds the resting time threshold and the speed is lower than the speed threshold, judging that the eyeball stays at the position, and extracting the stay position and the duration of the eyeball.
7. A system for improving attention training of children as in claim 1, wherein: the eye movement tracking data analysis module analyzes eye movement tracking data generated by children in the game interaction process by utilizing an artificial intelligence technology, and generates a fixation point density map and an eye movement thermodynamic diagram from the eye movement tracking data.
8. A system for improving attention training of children as in claim 7, wherein: the step of generating the gaze point density map comprises: dividing the screen area into grid cells, counting the number of gaze points falling in each cell, and estimating the density distribution of the gaze points on the screen with a Gaussian kernel density, the specific calculation formula being:

D(l, h) = Σ_{i=1}^{n} d_i · exp( −[(l − x_i)² + (h − y_i)²] / (2σ²) )

where (l, h) denotes the position of the grid cell, D(l, h) denotes the density value at position (l, h), n is the number of gaze points falling within the grid, (x_i, y_i) is the position coordinate of the i-th gaze point, d_i is the duration of each gaze point, and σ is the standard deviation of the Gaussian kernel.
9. A system for improving attention training of children as in claim 7, wherein: the step of generating an eye thermodynamic diagram comprises the steps of: generating an eye thermodynamic diagram by analyzing and image processing eye movement data of a child in a game, wherein the eye thermodynamic diagram is used for displaying a concentration area of the child in the game, using color shades to represent fixation frequencies of different areas, and generating the eye thermodynamic diagram by the fixation frequencies and color mapping, and specifically comprises the following steps of:
in step 201, the gaze frequency is used to represent the degree to which the eyeball gazes at a certain area, and the specific calculation formula is as follows:

F(l, h) = n / N

where F(l, h) represents the gaze frequency at the location (l, h), n represents the number of gaze points falling within the grid cell, and N represents the total number of gaze points;
step 202, color mapping: according to the magnitude of the gaze frequency, the gaze frequency is mapped to different color shades; a linear mapping function maps the gaze frequency to gray values between 0 and 255, with the specific calculation formula:

Gray = 255 × (F − F_min) / (F_max − F_min)

where Gray represents the gray value, F represents the gaze frequency, F_min represents the minimum value of the gaze frequency, and F_max represents the maximum value of the gaze frequency;
step 203, image generation: according to the gazing frequency and the color mapping, the gazing frequency of each gazing point is mapped into corresponding colors, pixel points are drawn at corresponding positions, all the gazing points are drawn on one image, and an eye thermodynamic diagram is generated.
10. A system for improving attention training of children as in claim 1, wherein: the game content optimizing module obtains the attention distribution and the attention concentration condition of the children in the game interaction process based on the eye tracking data analysis result, optimizes the game content, and specifically comprises the following steps:
step S1, determining a concentration area: determining the attention focusing area of the child in the game through the gaze point density map and the eye thermodynamic diagram, and placing game key elements in the key areas so as to guide the attention of the child;
step S2, visual guidance and feedback: according to the analysis results of the scan path and the eye thermodynamic diagram, the child's attention in the game is guided; visual guiding elements, including animation effects and color flickering, are added in key areas to focus the child's attention on the correct place, attract the child's eyes and provide timely visual feedback;

the scan path is represented by connecting the position coordinates of adjacent gaze points; the distances between adjacent gaze points are calculated and accumulated to obtain the total length of the scan path, the specific calculation formula being:

M = Σ_{i=1}^{n−1} √[ (x_{i+1} − x_i)² + (y_{i+1} − y_i)² ]

where M represents the total length of the scan path, n is the number of gaze points, (x_i, y_i) is the position coordinate of the i-th gaze point, and (x_{i+1}, y_{i+1}) is the position coordinate of the (i+1)-th gaze point;
step S3, segmentation and difficulty adjustment: according to the analysis result of the eye movement data, the concentration time and distraction time of the child in the game are obtained; the game content is divided into short segments of moderate difficulty, and a reward is given after each segment is completed, so as to increase the child's motivation and interest.
CN202311654917.9A 2023-12-05 2023-12-05 Training system for improving children's attention Pending CN117618723A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311654917.9A CN117618723A (en) 2023-12-05 2023-12-05 Training system for improving children's attention

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311654917.9A CN117618723A (en) 2023-12-05 2023-12-05 Training system for improving children's attention

Publications (1)

Publication Number Publication Date
CN117618723A true CN117618723A (en) 2024-03-01

Family

ID=90023125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311654917.9A Pending CN117618723A (en) 2023-12-05 2023-12-05 Training system for improving children's attention

Country Status (1)

Country Link
CN (1) CN117618723A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117976129A (en) * 2024-04-01 2024-05-03 河海大学 Depth perception training method based on multi-depth cue scene


Similar Documents

Publication Publication Date Title
CN110070944B (en) Social function assessment training system based on virtual environment and virtual roles
US10165940B2 (en) Vision-based diagnosis and treatment
CN109298779B (en) Virtual training system and method based on virtual agent interaction
Williams et al. Perceptual-cognitive expertise in sport: Some considerations when applying the expert performance approach
Yu et al. Embodied attention and word learning by toddlers
CN111063416A (en) Alzheimer disease rehabilitation training and capability assessment system based on virtual reality
CN112970056A (en) Human-computer interface using high speed and accurate user interaction tracking
CN110890140A (en) Virtual reality-based autism rehabilitation training and capability assessment system and method
CN107997933B (en) Child visual function training rehabilitation system with real-time assessment function
CN117618723A (en) Be used for improving children's attention training system
CN108721070A (en) A kind of intelligent vision functional training system and its training method based on eyeball tracking
Rojas Ferrer et al. Read-the-game: System for skill-based visual exploratory activity assessment with a full body virtual reality soccer simulation
CN112545517A (en) Attention training method and terminal
CN103226665A (en) Human brain health training system based on SAAS platform
US20210401339A1 (en) Adaptive behavioral training, and training of associated physiological responses, with assessment and diagnostic functionality
CN112017750A (en) Self-adaptive training method and device for rehabilitation exercise, medium and rehabilitation robot
US20230376116A1 (en) Electronic training system and method for electronic evaluation and feedback of sports performance
Cohen et al. Boundary vector cells in the goldfish central telencephalon encode spatial information
Yang et al. Research on face recognition sports intelligence training platform based on artificial intelligence
Gonzalez et al. Fear levels in virtual environments, an approach to detection and experimental user stimuli sensation
Faria et al. Towards ai-based interactive game intervention to monitor concentration levels in children with attention deficit
Schmidt et al. Speed, amplitude, and asymmetry of lip movement in voluntary puckering and blowing expressions: implications for facial assessment
US20210187374A1 (en) Augmented extended realm system
Hosp et al. Eye movement feature classification for soccer expertise identification in virtual reality
CN114495594A (en) On-line sports item adaptive training method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination