TWI823577B - Exercise training system able to recognize fatigue of user - Google Patents


Info

Publication number
TWI823577B
Authority
TW
Taiwan
Prior art keywords
fatigue
identification model
identification
sensing unit
svm
Prior art date
Application number
TW111135609A
Other languages
Chinese (zh)
Other versions
TW202327517A (en)
Inventor
蔡佳良
王采蕎
許煜亮
Original Assignee
國立成功大學
Priority date
Filing date
Publication date
Application filed by 國立成功大學
Publication of TW202327517A
Application granted
Publication of TWI823577B

Landscapes

  • Eye Examination Apparatus (AREA)
  • Rehabilitation Tools (AREA)
  • Measuring Pulse, Heart Rate, Blood Pressure Or Blood Flow (AREA)

Abstract

An exercise training system able to recognize fatigue of a user has a head-mounted glasses device and a real-time computing device. The head-mounted glasses device includes a display unit and at least one physiological information sensing unit. The at least one physiological information sensing unit includes at least one of an eye-tracking unit and a brainwave sensing unit. The real-time computing device is signally connected to the head-mounted glasses device for bidirectional data transmission, and stores program codes of a fatigue recognition model. The real-time computing device executes the fatigue recognition model based on sensing data of the at least one physiological information sensing unit to generate a fatigue recognition result, and displays an image corresponding to the fatigue recognition result on the display unit. The fatigue recognition model is a machine-learning-based model or a deep-learning-based model.

Description

Exercise training system able to recognize user fatigue

The present invention relates to an exercise training system, and in particular to an exercise training system able to recognize user fatigue.

A coach is generally a person with professional sports expertise who can instruct both the general public and athletes. For example, badminton is a sport that involves many rally strategies, stroke techniques, and footwork techniques, and a badminton coach can teach learners the correct concepts.

With advances in technology, virtual reality (VR), augmented reality (AR), mixed reality (MR), and extended reality (XR) have already been used to guide learners. For example, the "Virtual-reality underwater exercise training device" of ROC Patent Publication No. I704942 comprises a sensing unit, a display, and a processing unit: the sensing unit is worn on a part of the user's body and senses the movement trajectory and speed of that part; the display can be fixed on the user's head and shows virtual-reality images; and the processing unit receives and processes the detection signal from the sensing unit to generate a virtual-reality display signal, which it sends to the display.

In addition, virtual-reality technology can also heighten the user's sense of presence during exercise. For example, the "Cloud virtual-reality exercise and fitness machine system" of ROC Patent Publication No. I683687 comprises a fitness device, a control device, and a virtual-reality display device: the fitness device provides exercise for an athlete, the control device controls the transmission speed of the fitness device, and the virtual-reality display device can be worn by the user and receives and displays a virtual-reality image for the user to watch.

However, conventional AR, VR, MR, and XR technologies in the sports field mainly guide users toward correct posture or heighten their sense of presence. If a user shows signs of fatigue during exercise without being aware of it, there remains a high risk of sports injury even when every posture is correct.

In view of this, the main purpose of the present invention is to provide an exercise training system able to recognize user fatigue, so as to overcome the prior art's inability to detect signs of user fatigue; the present invention can thereby ensure the effectiveness of the user's exercise training and effectively reduce the risk of sports injury.

The exercise training system of the present invention able to recognize user fatigue comprises: a head-mounted glasses device, including a display unit and at least one physiological information sensing unit, the at least one physiological information sensing unit including at least one of an eye-tracking unit and a brainwave sensing unit; and a real-time computing device, signally connected to the head-mounted glasses device for bidirectional data transmission and storing the program data of a fatigue recognition model. The real-time computing device executes the fatigue recognition model to generate a fatigue recognition result from the sensing data of the at least one physiological information sensing unit, and displays a picture corresponding to the fatigue recognition result on the display unit, wherein the fatigue recognition model is a machine-learning-based or deep-learning-based model.

Because the user's eye-movement state and brainwaves are related to whether the user is fatigued, the present invention detects the user's eye movements and/or brainwaves in real time through the eye-tracking unit and/or the brainwave sensing unit, and, using these as input data, applies a machine-learning- or deep-learning-based fatigue recognition model to automatically recognize in real time whether the user is in a fatigued or non-fatigued state. When the present invention recognizes signs of fatigue, it automatically adjusts the picture played by the head-mounted glasses device accordingly, that is, it shows the user a video of lower exercise intensity, a cool-down, or rest.

In this way, the user can adjust their breathing, relax their muscles, and so on while watching the lower-intensity, cool-down, or rest video, thereby obtaining relief and effectively reducing the risk of sports injury.

The present invention is an exercise training system able to recognize user fatigue, with which the user performs interactive exercise training. "Interactive" here refers to the interaction between the system of the present invention and the user: the user trains according to the picture displayed by the system while the system detects the user's physiological signals in real time and adjusts the picture accordingly, thereby achieving interactive exercise training. For example, the present invention can be applied to open-skill and closed-skill exercise training: open-skill training may include badminton, table tennis, tennis, pickleball, basketball, volleyball, baseball, softball, hockey, and so on, but is not limited thereto; closed-skill training may include track and field, cycling, archery, golf, equestrian sports, rowing, and so on, but is not limited thereto.

A feature of the present invention is that it uses artificial intelligence (AI) to help recognize whether the user is fatigued while exercise training is in progress. If so, the present invention can switch from the sports video originally being played to another video of lower intensity, a cool-down, or rest, allowing the user to adjust their breathing, relax their muscles, and so on, thereby obtaining relief; besides improving concentration, this effectively prevents sports injuries. When the present invention recognizes that the user is no longer fatigued, it switches back to playing the original sports video, restoring the original training intensity once the user has recovered.

Embodiments of the exercise training system of the present invention able to recognize user fatigue are described below with reference to the drawings.

Referring to Figure 1, the exercise training system of the present invention comprises a head-mounted glasses device 10 and a real-time computing device 20, and may further comprise at least one of an imaging element 31, a tactile feedback element 32, an electronic accessory 33, a remote device 34, and an olfactory stimulation element 35. The head-mounted glasses device 10 and the real-time computing device 20 may be two separately disposed hardware devices, or they may be integrated into one. Their power may be drawn from a battery or from mains power through a power adapter.

The head-mounted glasses device 10 may be a glasses device employing virtual reality (VR), augmented reality (AR), mixed reality (MR), or extended reality (XR). Referring to Figures 2, 3, and 4, the head-mounted glasses device 10 comprises a glasses body 11, and a display unit 12 and at least one physiological information sensing unit disposed on the glasses body 11. The display unit 12 comprises a display panel 120 and a display controller 121; the display controller 121 is connected to the display panel 120 to control the picture it plays, and the display panel 120 may be a liquid-crystal display (LCD) panel. When the user wears the glasses body 11, the user's eyes can directly view the picture played by the display unit 12. The at least one physiological information sensing unit comprises at least one of an eye-tracking unit 13 and a brainwave sensing unit 14; that is, the glasses body 11 may be provided with either the eye-tracking unit 13 or the brainwave sensing unit 14, or with both.

It should be noted that the operating principles of the eye-tracking unit 13 and the brainwave sensing unit 14 are common knowledge in the technical field and are not described in detail here. Briefly, the eye-tracking unit 13 may comprise a plurality of cameras 130 and an eye-movement signal processor 131, the latter being a chip with signal-processing capability. When the user wears the glasses body 11, the cameras 130 photograph the user's eyeballs, and the captured image data are sent to the eye-movement signal processor 131 for processing to generate eye-movement signals for both eyes; these signals include, for example, the coordinates and movement trajectory of the pupils. The brainwave sensing unit 14 may comprise a plurality of electrode pads 140 and a brainwave signal processor 141, which is also a chip with signal-processing capability. As shown in Figure 3, when the user wears the glasses body 11, the electrode pads 140 can be attached to the user's head to measure electrical signals and transmit them to the brainwave signal processor 141, which amplifies them and performs analog-to-digital conversion to generate an electroencephalogram (EEG) signal.

The real-time computing device 20 is signally connected to the head-mounted glasses device 10 for bidirectional data transmission; the connection may be wired or wireless. That is, the real-time computing device 20 and the head-mounted glasses device 10 each have mutually matching transmission interfaces 23 and 15, which may be, for example, input/output (I/O), USB, HDMI, mobile-communication, or WiFi interfaces, but are not limited thereto.

The real-time computing device 20 comprises a processing unit 21 and a storage unit 22. The processing unit 21 comprises a central processing unit (CPU), and the storage unit 22 may be flash memory or a memory card. The storage unit 22 stores the program data of a fatigue recognition model, and the processing unit 21 is connected to the storage unit 22 so that it can access and execute the fatigue recognition model. The storage unit 22 also stores a plurality of video files; the processing unit 21 can select one of the videos and transmit it to the head-mounted glasses device 10 for display through the display unit 12. A video may be an animated video or a recorded video.

The imaging element 31 may be, for example, a camera, optionally equipped with a wide-angle lens, to capture and output an image. The real-time computing device 20 is signally connected to the imaging element 31 to receive the captured image and to process, store, or transmit it to other devices. For example, the imaging element 31 may be disposed on the bottom side of the glasses body 11 of the head-mounted glasses device 10 and aimed downward so that it can capture the user's body movements. Taking badminton as an example, the imaging element 31 can capture the user's racket arm, so the image contains the user's swing.

The tactile feedback element 32 implements tactile sensing and/or feedback. For example, it may comprise at least one of a thin fingertip tactile sensor, a vibrator, a wrist sensor, and an insole sensor. The real-time computing device 20 is signally connected to the tactile feedback element 32 and outputs a driving signal to control its actuation. Taking badminton as an example, the tactile feedback element 32 may be disposed on a racket, separate from the head-mounted glasses device 10; when the real-time computing device 20 drives the tactile feedback element 32 to vibrate, the user holding the racket feels the vibration, simulating the tactile sensation of a real stroke.

The electronic accessory 33 provides various expansion functions; for example, it may be a haptic suit, a smart racket, a smart insole, and so on. The real-time computing device 20 is signally connected to the electronic accessory 33 for bidirectional data transmission, enabling the integrated application of these expansion functions and improving the accuracy of interactive virtual exercise.

The remote device 34 may be a remote computer or a server. The real-time computing device 20 is signally connected to the remote device 34, for example over the Internet, for data transmission, so the remote device 34 can provide various interactive functions together with the real-time computing device 20. Taking badminton as an example, a badminton coach can operate the remote device 34 to send coaching data to the real-time computing device 20; the coaching data may be in the form of an audio-video stream, and the real-time computing device 20 can display it to the user through the display unit 12 of the head-mounted glasses device 10.

On the other hand, to overcome the limitations of space and time, and to move beyond traditional face-to-face or recorded training and teaching methods, the real-time computing device 20 can also transmit the image captured by the imaging element 31 to the remote device 34 for the coach to watch. Combined with the audio-video streaming described above, this achieves live, real-time two-way interaction between the user and the coach, so training guidance is available anytime and anywhere. When users practice or train, they can thus learn whether their exercise posture is correct, and the coach can correct it immediately, achieving the intended training results.

The olfactory stimulation element 35 provides an olfactory feedback (stimulation) function. For example, it may be an electronically controlled fragrance dispenser, or it may stimulate the user's sense of smell with electrical signals so that the user perceives an odor. The real-time computing device 20 is signally connected to the olfactory stimulation element 35 and outputs a driving signal to control its actuation, giving the user the sensation of smelling a particular scent.

When the real-time computing device 20 executes the fatigue recognition model, it generates a fatigue recognition result from the sensing data of the at least one physiological information sensing unit. That is, the processing unit 21 uses the sensing data (the eye-movement signals and/or brainwave signals) as the input to the fatigue recognition model; after the model computes on this input, the resulting fatigue recognition result may be text, a code, or a symbol corresponding to "fatigued" or "non-fatigued". The fatigue recognition model may be a pre-trained model built with supervised learning and based on machine learning or deep learning, as described later. Based on the fatigue recognition result, the real-time computing device 20 can control the training picture played by the display unit 12, and may further drive the tactile feedback element 32, interact with the electronic accessory 33, interact with the remote device 34, or drive the olfactory stimulation element 35.

For example, the videos stored in the storage unit 22 include a first video and a second video, where the exercise intensity of the second video is lower than that of the first video, or the second video shows a cool-down or rest. Taking a badminton training video as an example, the serving frequency of the first video is 40 shuttlecocks per minute and that of the second video is 20 shuttlecocks per minute. Suppose the processing unit 21 originally reads the first video from the storage unit 22 and sends it to the display unit 12 for playback. When the fatigue recognition model produces a "fatigued" result, the processing unit 21 instead reads and sends the second video, temporarily reducing the training intensity so the user can obtain relief. Then, when the model produces a "non-fatigued" result, indicating that the user has recovered, the processing unit 21 switches back to reading and sending the first video to restore the original training intensity.
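The switching behavior just described can be sketched as a small selection function. This is a hypothetical illustration only; the label strings and video names are assumptions, not identifiers from the patent:

```python
def select_video(fatigue_result: str) -> str:
    """Pick the next training video from the model's output.

    fatigue_result: "fatigued" or "non-fatigued" (assumed label strings).
    Returns "first_video" (normal intensity, e.g. 40 serves/min) or
    "second_video" (reduced intensity, e.g. 20 serves/min, or cool-down/rest).
    """
    if fatigue_result == "fatigued":
        return "second_video"   # ease off so the user can recover
    return "first_video"        # restore the original training intensity
```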

The fatigue recognition models based on machine learning and on deep learning are described separately below, using badminton training as the example. For each model, recognition of exercise fatigue from eye-movement signals and from brainwave signals is also described.

1. Fatigue recognition model based on machine learning

1-1. Recognizing exercise fatigue from eye-movement signals

Following general signal-processing practice, and referring to Figure 5, the fatigue recognition model 40 may be implemented as a sequence comprising a data acquisition step 401, a signal preprocessing step 402, and a feature extraction step 403. Briefly, the data acquisition step 401 receives the user's eye-movement signals from the eye-tracking unit 13; the signal preprocessing step 402 derives from the eye-movement signals a waveform of eye-movement speed over time and applies windowing to it; and the feature extraction step 403 builds, from the windowed signal output by step 402, time-domain feature data, Poincaré plot feature data, or a combination of the two.
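A minimal sketch of this preprocessing, computing an eye-movement-speed sequence from successive pupil coordinates and windowing it. The function names, window size, and sampling interval are illustrative assumptions, not values from the patent:

```python
import math

def eye_movement_speed(points, dt):
    """Speed between consecutive pupil-centre samples.

    points: list of (x, y) pupil coordinates; dt: sampling interval in seconds.
    Returns a list one element shorter than the input.
    """
    return [math.hypot(x1 - x0, y1 - y0) / dt
            for (x0, y0), (x1, y1) in zip(points, points[1:])]

def window(signal, size, step):
    """Split a 1-D signal into (possibly overlapping) windows."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]
```

Each window produced here would feed the feature extraction step that follows.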

In an embodiment of the present invention, the output of the feature extraction step 403 may include, for the eye-movement signal, at least one of the following statistics of the eye-movement speed: maximum, range, median, mean absolute error, root-mean-square value, first quartile, third quartile, interquartile range, skewness, kurtosis, and entropy; and at least one of a fixation time, a fixation count, a pupil size, a saccade speed, and a blink rate. These are common knowledge in the technical field, so their computation is not detailed here.
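A few of the listed time-domain statistics, sketched in plain Python over one window of eye-movement speeds. Which statistics to keep, and the quartile method, are implementation choices not fixed by the patent:

```python
import math
import statistics as st

def time_domain_features(v):
    """Compute a subset of the time-domain statistics named above for a
    windowed eye-movement-speed sequence v."""
    q1, median, q3 = st.quantiles(v, n=4)   # quartile estimates
    return {
        "max": max(v),
        "range": max(v) - min(v),
        "median": median,
        "rms": math.sqrt(sum(x * x for x in v) / len(v)),
        "iqr": q3 - q1,
    }
```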

In addition, the output of the feature extraction step 403 may include Poincaré plot feature data for the eye-movement signal. Taking the Poincaré plot of Figure 7B as an example, the x and y axes are set to successive values of the eye-movement-speed sequence, so that each point plots one speed sample against the next sample in the sequence; the coordinate system is then rotated 45 degrees counterclockwise to obtain two new axes (not shown in the figure; this is the usual construction of a Poincaré plot). The Poincaré plot feature data of the eye-movement signal therefore include: the standard deviation SD1 of the data points scattered along one rotated axis, the standard deviation SD2 along the other rotated axis, the ratio SD12 of SD1 to SD2, the ratio SD21 of SD2 to SD1, the area of the fitted ellipse in the Poincaré plot, the product of SD1 and SD2, the logarithm of that product, and the complex correlation measure (CCM) of the Poincaré plot.
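The SD1/SD2 construction can be sketched as follows, under the standard Poincaré-plot convention of plotting each speed sample against the next one. This is a hypothetical illustration; the ellipse-fitting details and the CCM are omitted:

```python
import math

def poincare_features(v):
    """SD1, SD2 and derived features of the Poincaré plot of a speed
    sequence v, where each point is (v[n], v[n+1]) and the axes are
    rotated 45 degrees: SD1 is the spread perpendicular to the line of
    identity, SD2 the spread along it."""
    def sd(values):
        mean = sum(values) / len(values)
        return math.sqrt(sum((u - mean) ** 2 for u in values) / len(values))

    diffs = [(b - a) / math.sqrt(2) for a, b in zip(v, v[1:])]
    sums = [(b + a) / math.sqrt(2) for a, b in zip(v, v[1:])]
    sd1, sd2 = sd(diffs), sd(sums)
    return {
        "SD1": sd1,
        "SD2": sd2,
        "SD12": sd1 / sd2 if sd2 else float("inf"),
        "ellipse_area": math.pi * sd1 * sd2,  # area of the fitted ellipse
    }
```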

Following the feature extraction step 403, the fatigue recognition model 40 may further execute, in sequence, a feature reduction/selection step 404 and an intelligent recognition step 405. As shown in Figure 5, the feature reduction/selection step 404 performs feature reduction/selection using at least one of principal component analysis (PCA), linear discriminant analysis (LDA), nonparametric weighted feature extraction (NWFE), kernel nonparametric weighted feature extraction (KNWFE), and kernel-based class separability (KBCS). The fatigue feature vector produced by step 404 therefore contains at least one of: the maximum, range, median, mean absolute error, root-mean-square value, first quartile, third quartile, interquartile range, skewness, kurtosis, and entropy of the eye-movement speed; the fixation time, fixation count, pupil size, saccade speed, and blink rate; and the Poincaré plot features of the eye-movement speed, namely SD1, SD2, SD12, SD21, the fitted-ellipse area, the product of SD1 and SD2, the logarithm of that product, and the CCM.
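Of the listed reduction methods, PCA is the most common; its leading direction can be sketched with power iteration in plain Python. This is illustrative only; a real implementation would use a linear-algebra library and keep several components:

```python
def first_principal_component(data, iters=100):
    """Approximate the first PCA direction of `data` (a list of
    equal-length feature vectors) by power iteration on the
    covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centred = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centred) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d                      # arbitrary starting vector
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v                           # unit vector of maximum variance
```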

As shown in Figure 5, in the intelligent recognition step 405, based on the fatigue feature vector output by the feature reduction/selection step 404, the present invention adopts one of two classifiers: a least-squares support vector machine (LS-SVM) or a probabilistic neural network (PNN). The classifier classifies the fatigue feature vector to produce the fatigue recognition result corresponding to "fatigued" or "non-fatigued".
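Of the two classifiers, the PNN admits a compact sketch: each class is scored by a Gaussian-kernel density over its training samples and the highest-scoring class wins. This is a toy illustration; the smoothing parameter and label strings are assumptions, and the LS-SVM alternative is omitted:

```python
import math

def pnn_classify(x, train, sigma=1.0):
    """Minimal probabilistic neural network (PNN) classifier.

    x: feature vector to classify.
    train: dict mapping class label -> list of training feature vectors.
    sigma: Gaussian smoothing parameter.
    """
    def kernel(a, b):
        d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return math.exp(-d2 / (2 * sigma ** 2))

    scores = {label: sum(kernel(x, s) for s in samples) / len(samples)
              for label, samples in train.items()}
    return max(scores, key=scores.get)
```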

It should be noted that, in the feature reduction/selection step 404 and the intelligent recognition step 405, the present invention involves computer-program applications of PCA, LDA, NWFE, KNWFE, KBCS, LS-SVM, and PNN, whose operating principles are common knowledge in the technical field and are not detailed here.

1-1-1. Relationship between eye-movement signals and the fatigue recognition result

Viewed as a whole, the feature reduction/selection step 404 and the intelligent recognition step 405 take the features extracted from the eye-movement signal (i.e., the time-frequency features of the eye-movement velocity, the Poincaré-plot features of the eye-movement velocity, the fixation duration, the number of fixations, the pupil size, the saccade velocity, and the blink frequency described above) as input data, and the fatigue recognition result as output data.

The following eye-movement signal states bias the fatigue recognition result toward "fatigue": slower eye-movement velocity, shorter fixation duration, fewer fixations, smaller pupils, slower saccade velocity, and higher blink frequency.

Conversely, the following eye-movement signal states bias the fatigue recognition result toward "non-fatigue": faster eye-movement velocity, longer fixation duration, more fixations, larger pupils, faster saccade velocity, and lower blink frequency.

Taking badminton as an example, the sport involves many shot types. The present invention uses only three training drills (fast flat drives, defense against smashes, and dynamic smashes), together with measured data, to illustrate the relationship between eye-movement signals and the fatigue recognition result.

1-1-2. Fast flat drive training

Refer to the coordinate system of Figure 2 and to Figures 6A to 6C, which show the position distribution of the user's right pupil over one period of time as captured by the camera 130 of the eye-tracking unit 13, and to Figures 6D to 6F, which show the position distribution over another period of time. Comparing Figures 6A to 6C with Figures 6D to 6F, the pupil positions in Figures 6A to 6C are more concentrated, indicating longer fixation durations and faster saccade velocities; the eye-movement signals corresponding to Figures 6A to 6C therefore bias the fatigue recognition result toward "non-fatigue", whereas those corresponding to Figures 6D to 6F bias it toward "fatigue". Refer next to Figures 7A and 7B, the eye-movement velocity waveform and Poincaré plot of the user's right eye over one period of time, and to Figures 7C and 7D, the same plots over another period of time. Comparing them, the eye-movement velocity in Figure 7A is on the whole faster than that in Figure 7C, and the velocity distribution in the Poincaré plot of Figure 7B is more concentrated than that of Figure 7D; the eye-movement signals corresponding to Figures 7A and 7B therefore bias the fatigue recognition result toward "non-fatigue", whereas those corresponding to Figures 7C and 7D bias it toward "fatigue".

1-1-3. Smash defense training

As above, comparing Figures 8A to 8C with Figures 8D to 8F, the pupil positions in Figures 8A to 8C are more concentrated than those in Figures 8D to 8F, indicating longer fixation durations and faster saccade velocities; the eye-movement signals corresponding to Figures 8A to 8C therefore bias the fatigue recognition result toward "non-fatigue", whereas those corresponding to Figures 8D to 8F bias it toward "fatigue". The eye-movement velocity in Figure 9A is on the whole faster than that in Figure 9C, and the velocity distribution in the Poincaré plot of Figure 9B is more concentrated than that of Figure 9D, so the eye-movement signals corresponding to Figures 9A and 9B bias the fatigue recognition result toward "non-fatigue", whereas those corresponding to Figures 9C and 9D bias it toward "fatigue".

1-1-4. Dynamic smash training

As above, comparing Figures 10A to 10C with Figures 10D to 10F, the pupil positions in Figures 10A to 10C are more concentrated than those in Figures 10D to 10F, indicating longer fixation durations and faster saccade velocities; the eye-movement signals corresponding to Figures 10A to 10C therefore bias the fatigue recognition result toward "non-fatigue", whereas those corresponding to Figures 10D to 10F bias it toward "fatigue". The eye-movement velocity in Figure 11A is on the whole faster than that in Figure 11C, and the velocity distribution in the Poincaré plot of Figure 11B is more concentrated than that of Figure 11D, so the eye-movement signals corresponding to Figures 11A and 11B bias the fatigue recognition result toward "non-fatigue", whereas those corresponding to Figures 11C and 11D bias it toward "fatigue".

1-1-5. Preferred embodiments of the fatigue recognition model

As mentioned above, the feature reduction/selection step 404 applies at least one of PCA, LDA, NWFE, KNWFE, and KBCS, and the intelligent recognition step 405 adopts one of LS-SVM and PNN. Referring to the table below, for fast flat drive training, as shown in Figure 12, the combination of KNWFE and LS-SVM is a preferred embodiment of the feature reduction/selection step 404 and the intelligent recognition step 405.

Identifying fatigue from "eye-movement signals" (machine-learning-based fatigue recognition model)
Training drill: fast flat drives

Feature reduction/selection + intelligent recognition    Recognition rate (CCR) %
PCA, LS-SVM                                              60.92
LDA, LS-SVM                                              48.28
NWFE, LS-SVM                                             63.22
KNWFE, LS-SVM                                            68.97
KBCS, LS-SVM                                             55.17
PCA, LDA, LS-SVM                                         65.52
KBCS, PCA, LS-SVM                                        66.67
KBCS, LDA, LS-SVM                                        57.47
PCA, PNN                                                 62.07
LDA, PNN                                                 52.87
NWFE, PNN                                                66.67
KNWFE, PNN                                               68.96
KBCS, PNN                                                60.92
PCA, LDA, PNN                                            62.07
KBCS, PCA, PNN                                           62.07
KBCS, LDA, PNN                                           56.32

Referring to the table below, for smash defense training, as shown in Figure 13, the combination of KBCS, PCA, and PNN is a preferred embodiment of the feature reduction/selection step 404 and the intelligent recognition step 405.

Identifying fatigue from "eye-movement signals" (machine-learning-based fatigue recognition model)
Training drill: smash defense

Feature reduction/selection + intelligent recognition    Recognition rate (CCR) %
PCA, LS-SVM                                              69.35
LDA, LS-SVM                                              70.97
NWFE, LS-SVM                                             66.13
KNWFE, LS-SVM                                            67.74
KBCS, LS-SVM                                             69.35
PCA, LDA, LS-SVM                                         66.13
KBCS, PCA, LS-SVM                                        75.81
KBCS, LDA, LS-SVM                                        66.13
PCA, PNN                                                 75.81
LDA, PNN                                                 61.29
NWFE, PNN                                                75.81
KNWFE, PNN                                               74.19
KBCS, PNN                                                75.81
PCA, LDA, PNN                                            64.52
KBCS, PCA, PNN                                           79.03
KBCS, LDA, PNN                                           59.68

Referring to the table below, for dynamic smash training, as shown in Figure 14, the combination of KBCS and LS-SVM is a preferred embodiment of the feature reduction/selection step 404 and the intelligent recognition step 405.

Identifying fatigue from "eye-movement signals" (machine-learning-based fatigue recognition model)
Training drill: dynamic smashes

Feature reduction/selection + intelligent recognition    Recognition rate (CCR) %
PCA, LS-SVM                                              65.08
LDA, LS-SVM                                              46.03
NWFE, LS-SVM                                             65.08
KNWFE, LS-SVM                                            61.90
KBCS, LS-SVM                                             69.84
PCA, LDA, LS-SVM                                         65.08
KBCS, PCA, LS-SVM                                        63.49
KBCS, LDA, LS-SVM                                        61.90
PCA, PNN                                                 65.08
LDA, PNN                                                 50.79
NWFE, PNN                                                63.49
KNWFE, PNN                                               65.08
KBCS, PNN                                                65.08
PCA, LDA, PNN                                            61.90
KBCS, PCA, PNN                                           63.49
KBCS, LDA, PNN                                           68.25

1-1-6. Training the fatigue recognition model

As described above, the eye-movement signals are closely correlated with the fatigue recognition result. In the model training stage, sample data of a plurality of eye-movement signals labeled "fatigue" and "non-fatigue" can therefore first be provided to the fatigue recognition model for supervised learning, so that the fatigue recognition model is a pre-trained model.

1-2. Identifying exercise fatigue from the user's "brainwave signals"

Following general signal-processing principles, and referring to Figure 15, the fatigue recognition model 40 may sequentially perform a data acquisition step 401, a signal preprocessing step 402, and a feature extraction step 403. In brief, the data acquisition step 401 receives the user's electroencephalography (EEG) signal from the brainwave sensing unit 14; the signal preprocessing step 402 applies signal processing such as filtering and windowing to the EEG signal, where the filtering may be bandpass filtering; and the feature extraction step 403 extracts time-domain features, frequency-domain features, or a combination of both from the windowed signal output by the signal preprocessing step 402.
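The preprocessing in step 402 can be sketched as follows. This is a minimal illustration; the sampling rate, pass band, and window sizes below are assumptions for the sketch, not values specified by the patent.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_eeg(sig, fs=250, band=(1.0, 40.0), win_s=2.0, step_s=1.0):
    """Band-pass filter a raw 1-D EEG trace, then cut it into
    overlapping windows for feature extraction."""
    b, a = butter(4, band, btype="bandpass", fs=fs)  # 4th-order Butterworth
    filtered = filtfilt(b, a, sig)                   # zero-phase filtering
    win, step = int(win_s * fs), int(step_s * fs)
    starts = range(0, len(filtered) - win + 1, step)
    return np.stack([filtered[s:s + win] for s in starts])

rng = np.random.default_rng(2)
windows = preprocess_eeg(rng.normal(size=10 * 250))  # 10 s of synthetic EEG
```

Each row of `windows` is one windowed segment, which step 403 would then turn into a feature vector.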

In embodiments of the present invention, the output data of the feature extraction step 403 may include time-domain and frequency-domain features of the EEG signal. The time-domain features include at least one of the mean, variance, zero-crossing rate, skewness, kurtosis, and entropy; the frequency-domain features include at least one of the band energies of the …, …, …, and … waves, the spindle-wave energy, the sawtooth-wave energy, and the ratios …, …, and …. These time-domain and frequency-domain EEG features are common knowledge in the art, so their computation is not detailed here.
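The listed time-domain features can be computed directly from one windowed segment. This sketch assumes a histogram-based Shannon entropy, which is one common reading; the patent leaves the exact entropy definition open.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def time_domain_features(x, n_bins=16):
    """Time-domain descriptors named in the text, for one EEG window.
    The entropy is the Shannon entropy of a histogram of the samples
    (an assumed definition)."""
    # Zero-crossing rate: fraction of adjacent sample pairs with a sign change.
    zcr = np.mean(np.abs(np.diff(np.signbit(x).astype(int))))
    p, _ = np.histogram(x, bins=n_bins)
    p = p[p > 0] / len(x)
    return {
        "mean": np.mean(x),
        "variance": np.var(x),
        "zero_crossing_rate": zcr,
        "skewness": skew(x),
        "kurtosis": kurtosis(x),
        "entropy": -np.sum(p * np.log2(p)),
    }

feats = time_domain_features(np.sin(np.linspace(0, 8 * np.pi, 1000)))
```

The frequency-domain band energies would be computed analogously from a power spectrum of the same window.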

Following the feature extraction step 403, the fatigue recognition model 40 further sequentially performs a feature reduction/selection step 406 and an intelligent recognition step 407. As shown in Figure 15, the feature reduction/selection step 406 applies at least one of PCA, LDA, NWFE, KNWFE, and KBCS to perform feature reduction/selection, thereby producing a fatigue feature vector corresponding to at least one of the time-domain and frequency-domain features of the EEG signal.
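Of the reduction methods named for step 406, PCA is the simplest to illustrate. The following is a bare-bones sketch, not the patent's implementation; LDA, NWFE, KNWFE, and KBCS would occupy the same position in the pipeline.

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors X (n_samples x n_features) onto the k
    principal components with the largest variance."""
    Xc = X - X.mean(axis=0)               # center each feature
    cov = np.cov(Xc, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)  # eigenvalues in ascending order
    top = eigvec[:, np.argsort(eigval)[::-1][:k]]
    return Xc @ top                       # reduced fatigue feature vectors

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 10))             # 50 windows, 10 raw features
Z = pca_reduce(X, 3)
```

The reduced vectors `Z` are what the classifier in step 407 would consume.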

In the intelligent recognition step 407, based on the fatigue feature vector, the present invention adopts one of two classifiers, LS-SVM and PNN; the selected classifier classifies the fatigue feature vector to produce the fatigue recognition result corresponding to "fatigue" or "non-fatigue".

1-2-1. Relationship between EEG signals and the fatigue recognition result

Taking badminton as an example, the present invention uses only static rest, fast flat drive, smash defense, and dynamic smash training, together with measured data, to illustrate the relationship between EEG signals and the fatigue recognition result.

1-2-2. Static rest

Refer to Figure 16A, the EEG signal produced by the brainwave sensing unit 14 over one period of time, and to Figure 16B, the EEG signal over another period of time. Comparing the two, the overall amplitude in Figure 16A is larger than that in Figure 16B, so the EEG signal corresponding to Figure 16A biases the fatigue recognition result toward "non-fatigue", whereas that corresponding to Figure 16B biases it toward "fatigue".

1-2-3. Fast flat drive training

Refer to Figure 17A, the EEG signal produced by the brainwave sensing unit 14 over one period of time, and to Figure 17B, the EEG signal over another period of time. Comparing the two, the overall amplitude in Figure 17A is smaller than that in Figure 17B, so the EEG signal corresponding to Figure 17A biases the fatigue recognition result toward "non-fatigue", whereas that corresponding to Figure 17B biases it toward "fatigue".

1-2-4. Smash defense training

As above, comparing Figure 18A with Figure 18B, the overall amplitude in Figure 18A is smaller than that in Figure 18B, so the EEG signal corresponding to Figure 18A biases the fatigue recognition result toward "non-fatigue", whereas that corresponding to Figure 18B biases it toward "fatigue".

1-2-5. Dynamic smash training

As above, comparing Figure 19A with Figure 19B, the overall amplitude in Figure 19A is smaller than that in Figure 19B, so the EEG signal corresponding to Figure 19A biases the fatigue recognition result toward "non-fatigue", whereas that corresponding to Figure 19B biases it toward "fatigue".

1-2-6. Preferred embodiments of the fatigue recognition model

As mentioned above, the feature reduction/selection step 406 applies at least one of PCA, LDA, NWFE, KNWFE, and KBCS, and the intelligent recognition step 407 adopts one of LS-SVM and PNN. Referring to the table below, for static rest, as shown in Figure 20, the combination of KNWFE and LS-SVM is a preferred embodiment of the feature reduction/selection step 406 and the intelligent recognition step 407.

Identifying fatigue from "brainwave signals" (machine-learning-based fatigue recognition model)
Training drill: static rest

Feature reduction/selection + intelligent recognition    Recognition rate (CCR) %
PCA, LS-SVM                                              88.00
LDA, LS-SVM                                              58.00
NWFE, LS-SVM                                             74.00
KNWFE, LS-SVM                                            94.00
KBCS, LS-SVM                                             86.00
PCA, LDA, LS-SVM                                         90.00
KBCS, PCA, LS-SVM                                        88.00
KBCS, LDA, LS-SVM                                        82.00
PCA, PNN                                                 88.00
LDA, PNN                                                 72.00
NWFE, PNN                                                82.00
KNWFE, PNN                                               80.00
KBCS, PNN                                                86.00
PCA, LDA, PNN                                            84.00
KBCS, PCA, PNN                                           88.00
KBCS, LDA, PNN                                           76.00

Referring to the table below, for fast flat drive training, as shown in Figure 21, the combination of PCA and LS-SVM is a preferred embodiment of the feature reduction/selection step 406 and the intelligent recognition step 407.

Identifying fatigue from "brainwave signals" (machine-learning-based fatigue recognition model)
Training drill: fast flat drives

Feature reduction/selection + intelligent recognition    Recognition rate (CCR) %
PCA, LS-SVM                                              81.42
LDA, LS-SVM                                              75.71
NWFE, LS-SVM                                             80.00
KNWFE, LS-SVM                                            77.14
KBCS, LS-SVM                                             79.28
PCA, LDA, LS-SVM                                         78.57
KBCS, PCA, LS-SVM                                        80.00
KBCS, LDA, LS-SVM                                        75.71
PCA, PNN                                                 76.42
LDA, PNN                                                 68.57
NWFE, PNN                                                63.57
KNWFE, PNN                                               68.57
KBCS, PNN                                                77.85
PCA, LDA, PNN                                            78.57
KBCS, PCA, PNN                                           77.14
KBCS, LDA, PNN                                           75.00

Referring to the table below, for smash defense training, as shown in Figure 22, the combination of KBCS and LS-SVM is a preferred embodiment of the feature reduction/selection step 406 and the intelligent recognition step 407.

Identifying fatigue from "brainwave signals" (machine-learning-based fatigue recognition model)
Training drill: smash defense

Feature reduction/selection + intelligent recognition    Recognition rate (CCR) %
PCA, LS-SVM                                              80.71
LDA, LS-SVM                                              78.57
NWFE, LS-SVM                                             66.42
KNWFE, LS-SVM                                            71.42
KBCS, LS-SVM                                             81.42
PCA, LDA, LS-SVM                                         80.71
KBCS, PCA, LS-SVM                                        81.41
KBCS, LDA, LS-SVM                                        79.28
PCA, PNN                                                 80.00
LDA, PNN                                                 68.57
NWFE, PNN                                                64.28
KNWFE, PNN                                               62.85
KBCS, PNN                                                78.57
PCA, LDA, PNN                                            80.71
KBCS, PCA, PNN                                           80.71
KBCS, LDA, PNN                                           76.42

Referring to the table below, for dynamic smash training, as shown in Figure 23, the combination of KBCS, PCA, and PNN is a preferred embodiment of the feature reduction/selection step 406 and the intelligent recognition step 407.

Identifying fatigue from "brainwave signals" (machine-learning-based fatigue recognition model)
Training drill: dynamic smashes

Feature reduction/selection + intelligent recognition    Recognition rate (CCR) %
PCA, LS-SVM                                              82.85
LDA, LS-SVM                                              74.28
NWFE, LS-SVM                                             70.71
KNWFE, LS-SVM                                            71.42
KBCS, LS-SVM                                             81.42
PCA, LDA, LS-SVM                                         80.71
KBCS, PCA, LS-SVM                                        82.14
KBCS, LDA, LS-SVM                                        80.00
PCA, PNN                                                 81.42
LDA, PNN                                                 63.57
NWFE, PNN                                                64.28
KNWFE, PNN                                               66.42
KBCS, PNN                                                82.14
PCA, LDA, PNN                                            79.28
KBCS, PCA, PNN                                           83.57
KBCS, LDA, PNN                                           82.85

1-2-7. Training the fatigue recognition model

As described above, the EEG signals are closely correlated with the fatigue recognition result. In the model training stage, sample data of a plurality of EEG signals labeled "fatigue" and "non-fatigue" can therefore first be provided to the fatigue recognition model for supervised learning, so that the fatigue recognition model is a pre-trained model.

2. Deep-learning-based fatigue recognition model

2-1. Preferred embodiments of the fatigue recognition model for identifying exercise fatigue from "eye-movement signals"

Following general signal-processing principles, and referring to Figure 24, the fatigue recognition model 40 may sequentially perform a data acquisition step 401 and a signal preprocessing step 402, as already described above and not repeated here. After the signal preprocessing step 402, the fatigue recognition model 40 may sequentially execute a feature extraction step 408, a feature reduction/selection step 409, and an intelligent recognition step 410; alternatively, referring to Figure 25, it may sequentially execute a feature extraction step 411 and an intelligent recognition step 412; or, referring to Figure 26, it may directly execute an intelligent recognition step 413.

In the foregoing, the feature extraction steps 408 and 411 shown in Figures 24 and 25 may adopt one of a convolutional neural network (CNN) and a multi-channel convolutional neural network (MCCNN) to produce deep features of the eye-movement signal; the program-level application of CNN and MCCNN is common knowledge in the art and is not detailed here. The feature reduction/selection step 409 shown in Figure 24 may apply at least one of PCA, LDA, NWFE, KNWFE, and KBCS to produce a fatigue feature vector from the deep features. The intelligent recognition steps 410 and 412 shown in Figures 24 and 25 may adopt one of LS-SVM and long short-term memory (LSTM), and the intelligent recognition step 413 shown in Figure 26 may adopt one of CNN and MCCNN, so that each of the intelligent recognition steps 410, 412, and 413 classifies the fatigue feature vector to produce the fatigue recognition result corresponding to "fatigue" or "non-fatigue"; the program-level application of LSTM is likewise common knowledge in the art and is not detailed here.
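To make the data flow of Figure 24 concrete at the shape level only, the sketch below runs a single untrained 1-D convolution layer with ReLU and global average pooling over an eye-movement series; a real CNN or MCCNN would have learned weights and many more layers, so this is purely an illustration of how a raw series becomes a fixed-length deep-feature vector for steps 409 and 410.

```python
import numpy as np

def conv1d_features(sig, n_filters=8, width=16, seed=0):
    """One random-weight 1-D convolution layer + ReLU + global average
    pooling: turns a variable-length 1-D signal into a fixed-length
    feature vector (one value per filter)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(n_filters, width))
    out = []
    for f in w:
        conv = np.convolve(sig, f, mode="valid")   # one feature map
        out.append(np.maximum(conv, 0.0).mean())   # ReLU + global avg pool
    return np.array(out)

feat = conv1d_features(np.sin(np.linspace(0, 20, 400)))
```

The resulting vector plays the role of the "deep features" that PCA/LDA/etc. would then reduce before LS-SVM classification.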

In addition, referring to Figure 27, after the signal preprocessing step 402 the fatigue recognition model 40 may also sequentially execute a time-frequency analysis step 414 and an intelligent recognition step 415. The time-frequency analysis step 414 comprises a short-time Fourier transform (STFT) and an image zoom; the intelligent recognition step 415 is a classification algorithm, which may adopt one of GoogLeNet CNN and AlexNet CNN, to classify the eye-movement-velocity time-frequency image obtained through the time-frequency analysis step 414 and so produce the fatigue recognition result corresponding to "fatigue" or "non-fatigue". It should be noted that the program-level application of GoogLeNet CNN and AlexNet CNN is common knowledge in the art and is not detailed here.

Referring to the table below, for fast flat drive training, as shown in Figure 28, GoogLeNet CNN is the preferred classification algorithm for the intelligent recognition step 415. In the time-frequency analysis step 414, the image-zoom size may be set to 224 × 224 × 3, so the 224 × 224 × 3 image data produced by the time-frequency analysis step 414 serves as the input to the GoogLeNet CNN, which then performs classification to produce the fatigue recognition result corresponding to "fatigue" or "non-fatigue".
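A minimal sketch of step 414 (STFT plus image zoom) is shown below; the sampling rate and STFT segment length are assumptions made for the sketch, while the 224 × 224 × 3 target size follows the text.

```python
import numpy as np
from scipy.signal import stft
from scipy.ndimage import zoom

def velocity_tf_image(v, fs=120, nperseg=64):
    """Build a 224 x 224 x 3 input image: an STFT magnitude spectrogram
    of the eye-movement velocity, rescaled to 224 x 224 and repeated
    over 3 channels so a GoogLeNet-style CNN can consume it."""
    _, _, Z = stft(v, fs=fs, nperseg=nperseg)
    mag = np.abs(Z)                                  # time-frequency magnitudes
    img = zoom(mag, (224 / mag.shape[0], 224 / mag.shape[1]), order=1)
    img = img[:224, :224]                            # guard against rounding
    return np.repeat(img[:, :, None], 3, axis=2)     # grey -> 3 channels

img = velocity_tf_image(np.random.default_rng(4).normal(size=1200))
```

The patent does not specify the spectrogram parameters, only the final image size; any STFT configuration yielding a usable time-frequency image would fit here.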

Identifying fatigue from "eye-movement signals" (deep-learning-based fatigue recognition model)
Training drill: fast flat drives

Fatigue recognition model processing steps    Recognition rate (CCR) %
CNN                                           60.32
MCCNN                                         61.90
AlexNet CNN                                   60.32
GoogLeNet CNN                                 69.84
CNN, PCA, LS-SVM                              65.12
CNN, LDA, LS-SVM                              51.16
CNN, NWFE, LS-SVM                             65.12
CNN, KNWFE, LS-SVM                            58.14
CNN, KBCS, LS-SVM                             65.12
CNN, PCA, LDA, LS-SVM                         67.44
CNN, KBCS, PCA, LS-SVM                        67.44
CNN, KBCS, LDA, LS-SVM                        62.79
MCCNN, LSTM                                   58.73

For smash defense training, as shown in Figure 29, the combination of CNN, KBCS, PCA, and LS-SVM is a preferred embodiment of the feature extraction step 408, the feature reduction/selection step 409, and the intelligent recognition step 410 of the fatigue recognition model 40.

Identifying fatigue from "eye-movement signals" (deep-learning-based fatigue recognition model)
Training drill: smash defense

Fatigue recognition model processing steps    Recognition rate (CCR) %
CNN                                           50.00
MCCNN                                         46.77
AlexNet CNN                                   56.45
GoogLeNet CNN                                 61.29
CNN, PCA, LS-SVM                              61.90
CNN, LDA, LS-SVM                              66.67
CNN, NWFE, LS-SVM                             61.90
CNN, KNWFE, LS-SVM                            54.76
CNN, KBCS, LS-SVM                             71.43
CNN, PCA, LDA, LS-SVM                         69.05
CNN, KBCS, PCA, LS-SVM                        73.81
CNN, KBCS, LDA, LS-SVM                        70.45
MCCNN, LSTM                                   50.00

For dynamic-smash ball-path training, as shown in Figure 30, the feature extraction step 408, the feature dimensionality-reduction/selection step 409, and the intelligent identification step 410 of the fatigue identification model 40 preferably use the combination of CNN, PCA, LDA, and LS-SVM.

Identifying fatigue from "eye movement signals" with deep-learning-based fatigue identification models (training ball path: dynamic smash)

| Processing steps of the fatigue identification model | Recognition rate (CCR) % |
| --- | --- |
| CNN | 62.50 |
| MCCNN | 62.50 |
| AlexNet CNN | 62.50 |
| GoogLeNet CNN | 53.13 |
| CNN, PCA, LS-SVM | 56.82 |
| CNN, LDA, LS-SVM | 51.16 |
| CNN, NWFE, LS-SVM | 65.12 |
| CNN, KNWFE, LS-SVM | 50.00 |
| CNN, KBCS, LS-SVM | 65.91 |
| CNN, PCA, LDA, LS-SVM | 75.00 |
| CNN, KBCS, PCA, LS-SVM | 70.45 |
| CNN, KBCS, LDA, LS-SVM | 65.91 |
| MCCNN, LSTM | 60.94 |
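The step combinations compared in the tables above all follow one pattern: a CNN extracts features from the windowed signal, one or more dimensionality-reduction/selection methods (KBCS, PCA, LDA, etc.) compress the feature vector, and an LS-SVM classifies it as fatigued or non-fatigued. A minimal sketch of the reduction and classification stages with scikit-learn, using PCA for the reduction stage and a standard linear-kernel SVM as a stand-in for the patent's LS-SVM (scikit-learn ships no LS-SVM), on random placeholder data rather than real CNN features:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in for CNN feature vectors: 60 trials x 128 features,
# labelled non-fatigued (0) or fatigued (1).
X = rng.normal(size=(60, 128))
y = rng.integers(0, 2, size=60)
X[y == 1] += 0.8  # shift one class so the toy problem is separable

# Reduction + classification stages of the pipeline; the CNN
# feature-extraction stage is assumed to have run already.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="linear"))
clf.fit(X, y)
pred = clf.predict(X)
print("training CCR: %.2f %%" % (100.0 * (pred == y).mean()))
```

Swapping the reduction stage (e.g. LDA for PCA) reproduces the other rows of the tables; the reported CCR differences come from exactly such swaps.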

2-2. Preferred embodiments of the fatigue identification model for identifying exercise fatigue from "brain wave signals"

Following general signal-processing principles, and referring to Figure 31, the fatigue identification model 40 may first perform a data acquisition step 401 and a signal preprocessing step 402, both described above and not repeated here. After the signal preprocessing step 402, the fatigue identification model 40 may, based on the preprocessed brainwave signal, sequentially execute a feature extraction step 416, a feature dimensionality-reduction/selection step 417, and an intelligent identification step 418; alternatively, referring to Figure 32, it may sequentially execute a feature extraction step 419 and an intelligent identification step 420; alternatively, referring to Figure 33, it may directly execute an intelligent identification step 421; or, referring to Figure 34, it may sequentially execute a time-frequency analysis step 422 and an intelligent identification step 423. Steps 416 to 423 have been described above (see also the following tables) and are not repeated here.

During static rest, as shown in Figure 35, the fatigue identification model 40 executes the time-frequency analysis step 422 and the intelligent identification step 423, with AlexNet CNN as the preferred classification algorithm of the intelligent identification step 423. In the time-frequency analysis step 422, the image rescaling size may be set to 227×227×3, so the 227×227×3 image data produced by the time-frequency analysis step 422 serves as the input to the AlexNet CNN, which classifies it to produce the fatigue identification result corresponding to "fatigued" or "non-fatigued".

Identifying fatigue from "brain wave signals" with deep-learning-based fatigue identification models (training condition: static rest)

| Processing steps of the fatigue identification model | Recognition rate (CCR) % |
| --- | --- |
| CNN | 68.09 |
| MCCNN | 66.42 |
| AlexNet CNN | 73.51 |
| GoogLeNet CNN | 70.78 |
| CNN, PCA, LS-SVM | 65.47 |
| CNN, LDA, LS-SVM | 60.71 |
| CNN, NWFE, LS-SVM | 65.47 |
| CNN, KNWFE, LS-SVM | 65.47 |
| CNN, KBCS, LS-SVM | 64.16 |
| CNN, PCA, LDA, LS-SVM | 64.28 |
| CNN, KBCS, PCA, LS-SVM | 66.42 |
| CNN, KBCS, LDA, LS-SVM | 64.64 |
| MCCNN, LSTM | 64.04 |
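The time-frequency analysis step described above (a short-time Fourier transform followed by rescaling to AlexNet's 227×227×3 input size) can be sketched as follows. The STFT turns the one-dimensional EEG window into a magnitude spectrogram, which is rescaled to 227×227 by nearest-neighbour index sampling and replicated to three channels. The sampling rate and window length here are illustrative assumptions, not values taken from the patent:

```python
import numpy as np
from scipy.signal import stft

fs = 250  # assumed EEG sampling rate (Hz)
t = np.arange(0, 4.0, 1.0 / fs)
# Synthetic 10 Hz (alpha-band) signal plus noise as a stand-in for EEG.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)

# Short-time Fourier transform -> magnitude spectrogram.
f, tt, Z = stft(eeg, fs=fs, nperseg=128)
spec = np.abs(Z)

# Nearest-neighbour rescale to 227x227, then replicate to 3 channels
# to match AlexNet's expected 227x227x3 input.
rows = np.linspace(0, spec.shape[0] - 1, 227).round().astype(int)
cols = np.linspace(0, spec.shape[1] - 1, 227).round().astype(int)
img = spec[np.ix_(rows, cols)]
img = (img - img.min()) / (img.max() - img.min() + 1e-12)  # normalize to [0, 1]
alexnet_input = np.stack([img, img, img], axis=-1)
print(alexnet_input.shape)  # (227, 227, 3)
```

The resulting array is what would be fed to the AlexNet CNN classifier; loading and running the network itself is omitted here.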

For fast-flat-ball path training, as shown in Figure 36, the feature extraction step 416, the feature dimensionality-reduction/selection step 417, and the intelligent identification step 418 of the fatigue identification model 40 preferably use the combination of CNN, KBCS, PCA, and LS-SVM.

Identifying fatigue from "brain wave signals" with deep-learning-based fatigue identification models (training ball path: fast flat ball)

| Processing steps of the fatigue identification model | Recognition rate (CCR) % |
| --- | --- |
| CNN | 67.33 |
| MCCNN | 73.33 |
| AlexNet CNN | 73.57 |
| GoogLeNet CNN | 76.42 |
| CNN, PCA, LS-SVM | 81.00 |
| CNN, LDA, LS-SVM | 61.00 |
| CNN, NWFE, LS-SVM | 74.00 |
| CNN, KNWFE, LS-SVM | 54.00 |
| CNN, KBCS, LS-SVM | 78.00 |
| CNN, PCA, LDA, LS-SVM | 80.00 |
| CNN, KBCS, PCA, LS-SVM | 82.00 |
| CNN, KBCS, LDA, LS-SVM | 75.00 |
| MCCNN, LSTM | 76.66 |

For defense training against smashes, as shown in Figure 36, the feature extraction step 416, the feature dimensionality-reduction/selection step 417, and the intelligent identification step 418 of the fatigue identification model 40 preferably use the combination of CNN, KBCS, PCA, and LS-SVM.

Identifying fatigue from "brain wave signals" with deep-learning-based fatigue identification models (training ball path: smash defense)

| Processing steps of the fatigue identification model | Recognition rate (CCR) % |
| --- | --- |
| CNN | 78.66 |
| MCCNN | 80.66 |
| AlexNet CNN | 77.14 |
| GoogLeNet CNN | 75.00 |
| CNN, PCA, LS-SVM | 81.00 |
| CNN, LDA, LS-SVM | 63.00 |
| CNN, NWFE, LS-SVM | 83.00 |
| CNN, KNWFE, LS-SVM | 74.00 |
| CNN, KBCS, LS-SVM | 83.00 |
| CNN, PCA, LDA, LS-SVM | 83.00 |
| CNN, KBCS, PCA, LS-SVM | 84.00 |
| CNN, KBCS, LDA, LS-SVM | 78.00 |
| MCCNN, LSTM | 78.00 |

For dynamic-smash ball-path training, as shown in Figure 37, the feature extraction step 416, the feature dimensionality-reduction/selection step 417, and the intelligent identification step 418 of the fatigue identification model 40 preferably use the combination of CNN, KBCS, and LS-SVM.

Identifying fatigue from "brain wave signals" with deep-learning-based fatigue identification models (training ball path: dynamic smash)

| Processing steps of the fatigue identification model | Recognition rate (CCR) % |
| --- | --- |
| CNN | 72.00 |
| MCCNN | 74.66 |
| AlexNet CNN | 79.28 |
| GoogLeNet CNN | 77.14 |
| CNN, PCA, LS-SVM | 83.00 |
| CNN, LDA, LS-SVM | 60.00 |
| CNN, NWFE, LS-SVM | 80.00 |
| CNN, KNWFE, LS-SVM | 57.00 |
| CNN, KBCS, LS-SVM | 85.00 |
| CNN, PCA, LDA, LS-SVM | 77.00 |
| CNN, KBCS, PCA, LS-SVM | 84.00 |
| CNN, KBCS, LDA, LS-SVM | 79.00 |
| MCCNN, LSTM | 76.00 |
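The LS-SVM (least-squares support vector machine), used as the final classifier in most rows of the tables above, replaces the classical SVM's inequality constraints with equality constraints, so training reduces to solving a single linear system. A minimal from-scratch sketch of one common formulation (the function-estimation form applied to ±1 labels, with an RBF kernel; the regularization value and kernel width are illustrative, not taken from the patent):

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Train an LS-SVM by solving [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return np.sign(rbf_kernel(X_new, X_train, sigma) @ alpha + b)

# Toy two-class problem with labels in {-1, +1}.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(+1, 0.3, (20, 2))])
y = np.concatenate([-np.ones(20), np.ones(20)])
b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, b, alpha, X)
print("training CCR: %.1f %%" % (100.0 * (pred == y).mean()))
```

Because training is one dense linear solve, LS-SVM is well suited to the real-time computing device of this system when the number of training trials is moderate.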

In summary, the system of the present invention interacts with the user: the user performs exercise training according to the images displayed by the system, while the fatigue identification model effectively identifies, from the user's eye-movement signals and/or brainwave signals, whether the user is in a fatigued or non-fatigued state, and automatically adjusts the images played by the head-mounted glasses device accordingly, thereby achieving interactive exercise training.
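The closed loop just described, in which the identification result drives what the headset displays, can be sketched as a simple dispatch. The concrete adjustment policy below (a rest prompt and a reduced ball speed when fatigued) is an illustrative assumption; the patent only specifies that the displayed image is adjusted according to the fatigue identification result:

```python
def adjust_training(fatigue_result: str, ball_speed: float) -> dict:
    """Map a fatigue identification result to the next displayed training scene.

    The speed factor and rest prompt are hypothetical policy choices,
    not values specified in the patent.
    """
    if fatigue_result == "fatigue":
        return {"scene": "rest_prompt", "ball_speed": ball_speed * 0.7}
    return {"scene": "training", "ball_speed": ball_speed}

print(adjust_training("fatigue", 10.0))
print(adjust_training("non-fatigue", 10.0))
```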

10: head-mounted glasses device
11: glasses body
12: display unit
120: display panel
121: display controller
13: eye-tracking unit
130: camera
131: eye-movement signal processor
14: brainwave sensing unit
140: electrode pad
141: brainwave signal processor
15: transmission interface
20: real-time computing device
21: processing unit
22: storage unit
23: transmission interface
31: image-capturing element
32: tactile feedback element
33: electronic accessory
34: remote device
35: olfactory stimulation element
40: fatigue identification model
401: data acquisition step
402: signal preprocessing step
403, 408, 411, 416, 419: feature extraction steps
404, 406, 409, 417: feature dimensionality-reduction/selection steps
405, 407, 410, 412, 413, 415, 418, 420, 421, 423: intelligent identification steps
414, 422: time-frequency analysis steps

Figure 1: block diagram of the exercise training system capable of identifying user fatigue according to the present invention.
Figure 2: schematic diagram of the head-mounted glasses device in use.
Figure 3: schematic diagram of the head-mounted glasses device in use.
Figure 4: schematic rear plan view of the head-mounted glasses device.
Figure 5: flowchart of the fatigue identification model (1).
Figures 6A-6F: distribution of the positions of the user's pupils over a period of time (fast flat ball).
Figures 7A, 7C: the user's eye-movement speed over a period of time (fast flat ball).
Figures 7B, 7D: Poincaré plots corresponding to Figures 7A, 7C.
Figures 8A-8F: distribution of the positions of the user's pupils over a period of time (smash defense).
Figures 9A, 9C: the user's eye-movement speed over a period of time (smash defense).
Figures 9B, 9D: Poincaré plots corresponding to Figures 9A, 9C.
Figures 10A-10F: distribution of the positions of the user's pupils over a period of time (dynamic smash).
Figures 11A, 11C: the user's eye-movement speed over a period of time (dynamic smash).
Figures 11B, 11D: Poincaré plots corresponding to Figures 11A, 11C.
Figures 12-15: flowcharts of the fatigue identification model (2)-(5).
Figures 16A, 16B: the user's brain waves (static rest).
Figures 17A, 17B: the user's brain waves (fast flat ball).
Figures 18A, 18B: the user's brain waves (smash defense).
Figures 19A, 19B: the user's brain waves (dynamic smash).
Figures 20-37: flowcharts of the fatigue identification model (6)-(23).


Claims (10)

1. An exercise training system capable of identifying user fatigue, comprising: a head-mounted glasses device comprising a glasses body, on which a display unit and at least one physiological information sensing unit are disposed, the at least one physiological information sensing unit comprising at least one of an eye-tracking unit and a brainwave sensing unit; and a real-time computing device, signal-connected to the head-mounted glasses device for bidirectional data transmission and storing program data of a fatigue identification model; wherein the real-time computing device executes the fatigue identification model to generate a fatigue identification result from the sensing data measured by the at least one physiological information sensing unit during exercise training, and displays an image corresponding to the fatigue identification result through the display unit; and wherein the fatigue identification model is a machine-learning-based or deep-learning-based model.
2. The exercise training system capable of identifying user fatigue of claim 1, wherein the at least one physiological information sensing unit comprises at least the eye-tracking unit, and in a feature extraction step the fatigue identification model outputs at least one of eye-movement speed, fixation duration, fixation count, pupil size, saccade speed, and blink frequency corresponding to an eye-movement signal generated by the eye-tracking unit, and generates the fatigue identification result accordingly.
3. The exercise training system capable of identifying user fatigue of claim 2, wherein the fatigue identification model is a machine-learning-based model; the fatigue identification model applies at least one of principal component analysis (PCA), linear discriminant analysis (LDA), nonparametric weighted feature extraction (NWFE), kernel nonparametric weighted feature extraction (KNWFE), and kernel-based class separability (KBCS) to generate a fatigue feature vector corresponding to at least one of the eye-movement speed, fixation duration, fixation count, pupil size, saccade speed, and blink frequency; and in an intelligent identification step the fatigue identification model classifies the fatigue feature vector with one of a least-squares support vector machine (LS-SVM) and a probabilistic neural network to generate the fatigue identification result corresponding to fatigue or non-fatigue.
4. The exercise training system capable of identifying user fatigue of claim 2, wherein the fatigue identification model is a deep-learning-based model; in the feature extraction step the fatigue identification model adopts one of a convolutional neural network (CNN) and a multi-channel convolutional neural network (MCCNN); the fatigue identification model applies at least one of PCA, LDA, NWFE, KNWFE, and KBCS to generate a fatigue feature vector corresponding to at least one of the eye-movement speed, fixation duration, fixation count, pupil size, saccade speed, and blink frequency; and the fatigue identification model classifies the fatigue feature vector with one of an LS-SVM and a long short-term memory (LSTM) network to generate the fatigue identification result corresponding to fatigue or non-fatigue.
5. The exercise training system capable of identifying user fatigue of claim 1, wherein the fatigue identification model is a deep-learning-based model; the at least one physiological information sensing unit comprises at least the eye-tracking unit; the fatigue identification model executes a classification algorithm that classifies according to at least one of the eye-movement speed, fixation duration, fixation count, pupil size, saccade speed, and blink frequency corresponding to an eye-movement signal generated by the eye-tracking unit to generate the fatigue identification result corresponding to fatigue or non-fatigue, the classification algorithm adopting one of a GoogLeNet CNN and an AlexNet CNN; and before executing the classification algorithm the fatigue identification model performs a time-frequency analysis step comprising a short-time Fourier transform and an image rescaling.
6. The exercise training system capable of identifying user fatigue of claim 1, wherein the fatigue identification model is a machine-learning-based model; the at least one physiological information sensing unit comprises at least the brainwave sensing unit; the fatigue identification model applies at least one of PCA, LDA, NWFE, KNWFE, and KBCS to generate a fatigue feature vector corresponding to a brainwave signal generated by the brainwave sensing unit; and in an intelligent identification step the fatigue identification model classifies the fatigue feature vector with one of an LS-SVM and a probabilistic neural network to generate the fatigue identification result corresponding to fatigue or non-fatigue.
7. The exercise training system capable of identifying user fatigue of claim 1, wherein the fatigue identification model is a deep-learning-based model; the at least one physiological information sensing unit comprises at least the brainwave sensing unit; in a feature extraction step the fatigue identification model adopts one of a CNN and an MCCNN; the fatigue identification model applies at least one of PCA, LDA, NWFE, KNWFE, and KBCS to generate a fatigue feature vector corresponding to a brainwave signal generated by the brainwave sensing unit; and the fatigue identification model classifies the fatigue feature vector with one of an LS-SVM and an LSTM network to generate the fatigue identification result corresponding to fatigue or non-fatigue.
8. The exercise training system capable of identifying user fatigue of claim 1, wherein the fatigue identification model is a deep-learning-based model; the at least one physiological information sensing unit comprises at least the brainwave sensing unit; the fatigue identification model executes a classification algorithm that classifies a brainwave signal generated by the brainwave sensing unit to generate the fatigue identification result corresponding to fatigue or non-fatigue, the classification algorithm adopting one of a GoogLeNet CNN and an AlexNet CNN; and before executing the classification algorithm the fatigue identification model performs a time-frequency analysis step comprising a short-time Fourier transform and an image rescaling.
9. The exercise training system capable of identifying user fatigue of claim 1, wherein the fatigue identification model is a deep-learning-based model; and after a data acquisition step and a signal preprocessing step, the fatigue identification model directly executes an intelligent identification step adopting one of a CNN and an MCCNN.
10. The exercise training system capable of identifying user fatigue of claim 1, further comprising: an image-capturing element disposed on the head-mounted glasses device, the real-time computing device being signal-connected to the image-capturing element to receive images captured by the image-capturing element; a tactile feedback element, the real-time computing device being signal-connected to the tactile feedback element to output a drive signal controlling its actuation; an electronic accessory, the real-time computing device being signal-connected to the electronic accessory for bidirectional data transmission; a remote device, the real-time computing device being signal-connected to the remote device for transmission of coaching data; and an olfactory stimulation element, the real-time computing device being signal-connected to the olfactory stimulation element to output a drive signal controlling its actuation.
TW111135609A 2021-11-26 2022-09-20 Exercise training system able to recognize fatigue of user TWI823577B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163283397P 2021-11-26 2021-11-26
US63/283,397 2021-11-26

Publications (2)

Publication Number Publication Date
TW202327517A TW202327517A (en) 2023-07-16
TWI823577B true TWI823577B (en) 2023-11-21

Family

ID=88147800

Family Applications (1)

Application Number Title Priority Date Filing Date
TW111135609A TWI823577B (en) 2021-11-26 2022-09-20 Exercise training system able to recognize fatigue of user

Country Status (1)

Country Link
TW (1) TWI823577B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108294759A (en) * 2017-01-13 2018-07-20 天津工业大学 A kind of Driver Fatigue Detection based on CNN Eye state recognitions
CN109953763A (en) * 2019-02-28 2019-07-02 扬州大学 A kind of vehicle carried driving behavioral value early warning system and method based on deep learning
US20200294652A1 (en) * 2019-03-13 2020-09-17 Bright Cloud International Corporation Medication Enhancement Systems and Methods for Cognitive Benefit
TW202117743A (en) * 2019-10-17 2021-05-01 孫光天 System for diagnosing dyslexia by combining brain wave and artificial intelligence including brain wave detection units and a central processing unit
CN113171095A (en) * 2021-04-23 2021-07-27 哈尔滨工业大学 Hierarchical driver cognitive distraction detection system
US20210275050A1 (en) * 2017-04-05 2021-09-09 LR Technologies, Inc. Human bioelectrical signal detection and monitoring


Also Published As

Publication number Publication date
TW202327517A (en) 2023-07-16

Similar Documents

Publication Publication Date Title
US10446051B2 (en) Interactive cognitive-multisensory interface apparatus and methods for assessing, profiling, training, and improving performance of athletes and other populations
US10478698B2 (en) Interactive cognitive-multisensory interface apparatus and methods for assessing, profiling, training, and/or improving performance of athletes and other populations
US11497440B2 (en) Human-computer interactive rehabilitation system
CN113709411B (en) Sports auxiliary training system of MR intelligent glasses based on eye tracking technology
US10610143B2 (en) Concussion rehabilitation device and method
US20220019284A1 (en) Feedback from neuromuscular activation within various types of virtual and/or augmented reality environments
US20120194648A1 (en) Video/ audio controller
US20200219468A1 (en) Head mounted displaying system and image generating method thereof
WO2013040642A1 (en) Activity training apparatus and method
KR20200109614A (en) Training and exercise system based on virtual reality or augmented reality for dizziness and disequilibrium
TWI823577B (en) Exercise training system able to recognize fatigue of user
Nijholt et al. BrainGain: BCI for HCI and Games
Verhulst et al. Physiological-based dynamic difficulty adaptation in a theragame for children with cerebral palsy
Gonzalez et al. Fear levels in virtual environments, an approach to detection and experimental user stimuli sensation
CN113633870B (en) Emotion state adjustment system and method
US20210187374A1 (en) Augmented extended realm system
TW201816545A (en) Virtual reality apparatus
WO2020213301A1 (en) Information processing device and information processing system
CN107050825B (en) Conventional action training device and its method
Oh Exploring Design Opportunities for Technology-Supported Yoga Practices at Home
WO2022152970A1 (en) Method of providing feedback to a user through segmentation of user movement data
de Sousa Rego Serious games for health rehabilitation
JP2023113275A (en) Motion support device for adjusting visibility of prescribed object, system, program and method
Cordeiro et al. The development of a machine learning/augmented reality immersive training system for performance monitoring in athletes
WO2022152971A1 (en) Method of providing feedback to a user through controlled motion