TW201234284A - Power-saving based on the human recognition device and method

Power-saving based on the human recognition device and method

Info

Publication number
TW201234284A
Authority
TW
Taiwan
Prior art keywords
image
human body
algorithm
identification
recognition algorithm
Prior art date
Application number
TW100104068A
Other languages
Chinese (zh)
Inventor
Hsiao-Chung Wang
Original Assignee
Avit Corp
Hsiao-Chung Wang
Priority date
Filing date
Publication date
Application filed by Avit Corp, Hsiao-Chung Wang filed Critical Avit Corp
Priority to TW100104068A priority Critical patent/TW201234284A/en
Publication of TW201234284A publication Critical patent/TW201234284A/en

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

A power-saving device and method use human recognition algorithms to determine whether a person is present, and on that basis control the opening and closing of a power switch or the operating current/voltage of a load for power-saving purposes. To reach a higher recognition rate, the human recognition algorithm may combine human skeleton recognition, gesture recognition, and face recognition algorithms. Once recognition is complete, the power-saving device controls the energy load over a wired or wireless link. In practice, the power load device is therefore switched and adjusted according to whether people are recognized, achieving the power-saving purpose.

Description

[Technical Field of the Invention]
The present invention relates to recognition processing, and in particular to a human-recognition power-saving device and method.

[Prior Art]
As the world population keeps growing and national economies develop, power consumption keeps rising, and irreversible climate change and climate disasters grow correspondingly worse. In view of this, the concept of green energy has been proposed, broadly in two forms. One works on the energy-supply side, for example solar thermal, wind, wave, or other kinetic sources. The other works on energy conservation, and this patent proposes the concept of a human-recognition power-saving device: according to whether people are present, it controls the opening or closing of a power switch, or the magnitude of the current or voltage drawn by a load. Taking indoor lighting as an example, when a person is present the device senses that presence and switches on or adjusts the luminaire; when nobody is present it dims or switches off the luminaire.

Conventional light-source control mostly uses infrared sensing. Infrared sensors are divided into active infrared sensors, which emit an infrared beam themselves and detect objects moving across it, and passive far-infrared sensors (passive infrared sensor, PIR), which emit no beam and instead sense the movement of heat sources. An active infrared detector usually consists of two units, a transmitter that emits the infrared beam and a receiver that receives it; it continuously emits and receives the beam and suits point-to-point, line-of-sight use, for example wall-mounted anti-theft infrared projectors. Passive infrared sensors suit indoor or outdoor area coverage, for example the burglar sensors commonly installed indoors and at entrances.

Referring to FIG. 1, an architecture diagram of a prior-art infrared sensor system 10 is shown, comprising an infrared sensor 12, a signal detection amplifier 14, a signal comparator 16, and a power device 20. When an object approaches the system, the voltage induced at the infrared sensor 12 varies with the object; the signal detection amplifier 14 converts this analog voltage into a signal and sends it to the signal comparator 16, which applies a heat-sensing rule to judge the voltage level. When the voltage reaches a preset value the comparator decides that a moving object is present; if it stays below the preset value, that no moving object is present. On detecting a moving object the comparator issues a control command to the power device 20, whose external load may be a luminaire that lights up on receiving the command. This prior art is commonly used for indoor light-source control, but it has several problems:
1. Active infrared sensing is easily biased by factors such as the reflectivity of the object, and because it works over a point-to-point straight line its detection range is limited to that line and cannot cover a whole area.
2. Passive infrared sensing can give false alarms under large temperature variations from external factors such as road temperature or water vapor, and because it reacts only to heat-source changes it cannot tell whether the induced signal comes from a person or an animal, leading to misjudgment.
3. Overall, neither active nor passive infrared sensing can distinguish the movement of a person from that of an animal, and a passive far-infrared (PIR) sensor also cannot confirm the presence of a stationary person.

To improve on the prior art's infrared sensing devices, this patent proposes a new architecture for a power-saving control device.

[Summary of the Invention]
One object of the present invention is a human-recognition power-saving device comprising an image sensing module, an image signal processing unit, a human recognition processing unit, and a memory. The image sensing module, a combination of a lens and an image sensing unit, generates a plurality of image sensing signals from at least one object. The image signal processing unit is connected to the image sensing unit, receives the sensing signals, and performs image processing to produce image processing data. The human recognition processing unit is connected to the image signal processing unit, receives and stores at least one item of image processing data in the memory, performs recognition with a human-image recognition algorithm, generates at least one control command, and transmits it to control at least one power control device.

Another object of the present invention is a human-recognition power-saving method comprising the steps of: capturing at least one object to generate a plurality of image sensing signals; performing image processing on the image sensing signals to produce image processing data; and recognizing at least one item of the image processing data with a human-image recognition algorithm to confirm the presence of a person, generating at least one control command, and transmitting the at least one control command to at least one power control device.

A further object of the present invention is the same method expressed in terms of control signals: capturing at least one object to generate a plurality of image sensing signals; performing image processing on them to produce image processing data; and recognizing the data with a human-image recognition algorithm to confirm the presence of a person, generating at least one control signal, and transmitting the at least one control signal to at least one power control device.

To make the above and other objects, features, and advantages of the present invention clearer, several preferred embodiments are described in detail below with reference to the accompanying drawings.
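As a rough orientation before the embodiments, the claimed three-step method (capture, process, recognize-and-command) can be pictured as a simple control loop. The sketch below is only an illustration under assumed names; capture_frame, process_image, detect_person, and PowerControl are placeholders, not components named in the patent.

```python
# Minimal sketch of the capture -> process -> recognize -> command loop.
from dataclasses import dataclass

@dataclass
class PowerControl:
    device_id: int
    def send(self, command: int) -> None:
        # A real device would drive a relay/dimmer or a radio link here.
        print(f"device {self.device_id}: command {command}")

def capture_frame():
    # Placeholder for the image sensing module (lens + sensor).
    return [[0] * 640 for _ in range(480)]

def process_image(frame):
    # Placeholder for the image signal processing unit.
    return frame

def detect_person(image) -> bool:
    # Placeholder for the human-image recognition algorithm.
    return False

def control_step(power: PowerControl) -> None:
    image = process_image(capture_frame())
    # Assumed command convention: 1 = enable the load, 0 = cut power.
    power.send(1 if detect_person(image) else 0)

control_step(PowerControl(device_id=190))
```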
[Detailed Description of the Embodiments]
To improve on the problems of infrared sensing devices, the present invention proposes a new human-recognition architecture to realize a human-recognition power-saving device. Taking luminaire control as an example, the prior art often misjudges because it can neither distinguish a person from an animal nor confirm the presence of a stationary person. Applying the present invention improves on those misjudgments, is less susceptible to environmental conditions that cause false triggers, and readily distinguishes people from animals.

Referring to FIG. 2, a functional block diagram of the human-recognition power-saving device of the present invention is shown. The human-recognition power-saving device 240 comprises an image sensing module 122, an image signal processing unit 200, a human recognition processing unit 150, and a memory 160. The image sensing module 122, a combination of a lens 110 and an image sensing unit 120, generates a plurality of image sensing signals from at least one object. The image signal processing unit 200 is connected to the image sensing unit 120, receives the image sensing signals, performs image processing to produce image data, and stores at least one item of the image data in the memory 160. The human recognition processing unit 150 is connected to the image signal processing unit 200, receives at least one item of image data, applies a human-image recognition algorithm to determine whether a person is present, generates at least one control command, and sends it to the power control device 190.

The lens 110 is composed of several convex and concave lenses; the number and structure of its elements is one of the important factors affecting image quality. In this patent the main distinction between lenses is their view angle, which determines how large a spatial region the image covers. The lens may be a fisheye lens, an ultra-wide-angle lens, a wide-angle lens, a standard lens, a telephoto lens, a fixed-focus lens, a zoom lens, a reflective (mirror) lens, or a microlens, where a microlens can be formed directly on the image sensing unit 120 during the IC manufacturing process.

The image sensing unit 120 may be a charge-coupled device sensor (CCD) or a complementary metal-oxide-semiconductor sensor (CMOS). An image sensing unit is a photosensitive element that records changes in light: when its surface is illuminated by light entering through the lens 110, it converts the light energy into charge; the stronger the light, the more charge is stored, and the stored charge becomes the basis for judging light intensity. Channel circuitry on the image sensing unit amplifies and transfers the signals produced by these charges to a decoder, where they are restored and interpreted as an image, forming a complete picture.

Referring to FIG. 3, the detailed functional blocks of the image signal processing unit 200 are shown. The image signal processing unit 200 comprises a dark-point compensator 201, a dead-pixel compensator 202, a lens brightness compensator 203, a color or gray-scale adjuster 204, a sharpness adjuster 205, a noise filter 206, an exposure adjuster 207, and a white balance adjuster 208. When image data comes in from the image sensing unit 120 it may show dark-point imbalance, so the dark-point compensator 201 compensates for the unwanted voltage level. Because the image sensing unit 120 sometimes has defective pixels, the dead-pixel compensator 202 compensates for them. Because the lens introduces optical brightness error, the lens brightness compensator 203 corrects the image toward uniform brightness. The color or gray-scale adjuster 204 handles brightness or tone correction; for color images, the RGB value of every pixel is restored by interpolation so that each point has a full RGB pixel, while grayscale or black-and-white images need no interpolation. The sharpness adjuster 205 and the noise filter 206 are used to increase image discriminability.
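The FIG. 3 pipeline can be pictured as a chain of per-frame stages. The sketch below is a minimal illustration assuming 8-bit grayscale frames; the stage order follows the description above, but the concrete formulas, default values, and the synthetic test frame are assumptions for illustration and are not taken from the patent.

```python
# Rough staging of dark-level, dead-pixel, and exposure correction on a frame.
import numpy as np

def compensate_dark_level(frame, dark_level=4):
    return np.clip(frame.astype(np.int16) - dark_level, 0, 255).astype(np.uint8)

def compensate_dead_pixels(frame, dead_mask):
    # Replace known defective pixels with a local 3x3 median.
    out = frame.copy()
    for y, x in zip(*np.nonzero(dead_mask)):
        y0, y1 = max(0, y - 1), min(frame.shape[0], y + 2)
        x0, x1 = max(0, x - 1), min(frame.shape[1], x + 2)
        out[y, x] = np.median(frame[y0:y1, x0:x1])
    return out

def adjust_exposure(frame, gain=1.1):
    return np.clip(frame.astype(np.float32) * gain, 0, 255).astype(np.uint8)

raw = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
dead = np.zeros_like(raw, dtype=bool)
dead[100, 200] = True            # one known defective pixel, for illustration
processed = adjust_exposure(compensate_dead_pixels(compensate_dark_level(raw), dead))
```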
The exposure adjuster 207 and the white balance adjuster 208 adjust the dynamic range and color of the whole image, which can generally be achieved with automatic adjustment; the white balance adjuster 208 sets its standard from the image's reference white value. Note that the image signal processing blocks above and their parameters are not particularly limited; the choice of blocks and the setting of their values may be selected and changed according to the actual application of the vision system.

The human recognition processing unit 150 is connected to the image signal processing unit 200, receives at least one item of image processing data, performs recognition with a human-image recognition algorithm, generates at least one control command, and transmits it to the power control device 190. The power control device 190 may control one or more energy loads 250, which may be indoor light sources, outdoor light sources, air-conditioning systems, and similar equipment.

Besides the wired transmission described above, the control command may also be sent to the power control device 190 by radio through wireless transceiver modules (not shown). A first wireless transceiver module is placed at the human-recognition power-saving device 240 and a second at the power control device 190. The control command transmitted by the device 240 is carrier-modulated by the first module, transmitted wirelessly to the second module, carrier-demodulated there to recover the control command, and then used by the power control device 190 to control the energy load 250.

The control command may contain multiple messages and multiple control commands, or it may simply control the magnitude of a signal; the present invention does not limit the form of the control command, which is changed and set according to the actual application. The control command may be one bit or multiple bits of data. For example, with 8 bits the value ranges from 0 to 255 and the command adjusts, turns on, or turns off the power; with 1 bit the command simply turns the power on or off. A single human-recognition power-saving device 240 may control one energy load 250 or several, and several devices 240 working together with the algorithm may likewise control one or more energy loads 250.

Two examples follow in which a human-recognition power-saving device 240 is installed indoors to control the switching and dimming of indoor light sources.

Example 1:
Indoors, one human-recognition power-saving device 240 controls one luminaire, mounted on the ceiling or a wall; the luminaire may be an LED fixture. In the initial state the LED fixture is off. When a person enters the sensing range, the device 240 performs human recognition; when the human recognition processing unit 150 recognizes that one or more people are in the room, it immediately issues a control command to brighten, dim, or adjust the indoor light source. When the unit 150 recognizes that nobody is in the room, it immediately issues a control command to switch off the light or lower its brightness, reaching an optimal state and saving energy effectively. For example, if the control command is 4-bit data its value ranges from 0 to 15: 0 means switch the light off, 15 means full brightness, and 1 to 14 represent stepwise adjustment from dark to bright. As another example, if the control command is 1-bit data, 0 means switch the light off and 1 means switch it on. In practice the brightness of the LED fixture can also be adjusted purely according to the number of people, and when the unit 150 recognizes that the room is empty it immediately issues a command whose message is 0, meaning the light source is to be switched off.
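The 4-bit convention of Example 1 can be illustrated with a small sketch. The mapping from the number of recognized people to a brightness level is an assumption made here for illustration; the patent only states that brightness may follow the occupant count, with 0 meaning off and 15 meaning full brightness.

```python
# Sketch of the assumed occupancy-to-dimming mapping for the 4-bit command.
def dimming_command(people_count: int, max_level: int = 15) -> int:
    if people_count <= 0:
        return 0                              # nobody recognized: switch off
    return min(3 * people_count, max_level)   # brighten with each extra person

assert dimming_command(0) == 0
assert dimming_command(2) == 6
assert dimming_command(10) == 15              # clamped to full brightness
```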

Example 2:
Indoors, one human-recognition power-saving device 240 controls multiple energy loads 250 and is mounted on the ceiling or a wall. When a person enters the sensing range the device 240 performs human recognition; when the human recognition processing unit 150 recognizes that one or more people are in the room, it immediately issues one or more control commands to switch or adjust the indoor energy loads 250, reaching an optimal state and saving energy. Here the control command contains multiple device numbers, multiple messages, and multiple control commands.
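One possible wire format for the Example 2 command is sketched below: a device number selecting which power control device 190 is addressed, followed by a message value as in Example 1. The byte layout is an assumption made for illustration; the patent does not fix any particular encoding for the wired or wireless link.

```python
# Sketch of an assumed two-byte command frame: (device number, message).
import struct

def pack_command(device_number: int, message: int) -> bytes:
    # one byte for the addressed device, one byte for the message/level
    return struct.pack("BB", device_number & 0xFF, message & 0xFF)

def unpack_command(frame: bytes):
    return struct.unpack("BB", frame)

frame = pack_command(device_number=2, message=0)   # tell load no. 2 to switch off
assert unpack_command(frame) == (2, 0)
```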

The device number indicates which power control device 190 is addressed; the detailed description of the messages and control commands is otherwise the same as in Example 1 and is not repeated here.

Taking an air-conditioning system as an example: an ordinary air conditioner is started and stopped repeatedly based only on temperature or detected motion, and during long operation it cannot correctly determine whether a stationary person remains in the room, so the cooling keeps running and power is wasted. With the human-recognition power-saving device 240 of the present invention, when the human recognition processing unit 150 recognizes that people are present, the indoor temperature is controlled according to the thermostat; when the unit 150 recognizes that people are absent or fewer, the air-conditioning system is stopped or turned down, and when one or more people are recognized indoors the cooling capacity can be moderated accordingly, further saving energy and avoiding waste. Only light sources and air-conditioning systems are listed here as examples; the present invention is of course not limited to these, and any equipment whose power can be switched or adjusted is applicable.

Referring to FIG. 4, another functional block diagram of the human-recognition power-saving device 240 of the present invention is shown. The main difference from FIG. 2 is the addition of a wide-angle geometric correction unit 140. When the lens 110 uses an ultra-wide-angle or fisheye lens to enlarge the visible area, the image is correspondingly distorted, which makes the human recognition processing unit 150 prone to misjudgment. The wide-angle geometric correction unit 140 resolves the distortion caused by the excessive view angle, correcting the distorted image into a standard-view image so as to increase the recognition accuracy of the unit 150. The wide-angle geometric corrector 140 may be placed after the image signal processing unit 200 or in its front stage; in FIG. 4 it is placed after the image signal processing unit 200.

Referring to FIG. 5, a flowchart of the human-recognition power-saving control method is shown, comprising the following steps:
Step 10: capture at least one object to generate a plurality of image sensing signals.
Step 20: perform image processing on the image sensing signals to produce image processing data.
Step 30: recognize the image processing data with a human-image recognition algorithm to confirm the presence or departure of a person, generate at least one control command, and transmit the control command to at least one power control device to perform power-saving control of at least one power device.

The human-image recognition algorithm may be any combination of related algorithms such as a human-skeleton recognition algorithm, a gesture recognition algorithm, or a face recognition algorithm. Taking a light-source device as one application example, the most important of these is the skeleton recognition method for human images, which is explained first.

Before describing the skeleton recognition algorithm, the motions of a human image are classified into three broad categories: at rest, in dynamic motion, and in slight (micro) motion. The first category, the human image at rest, describes the overall posture of a stationary person, for example standing, sitting, or lying. The second category, dynamic motion, covers transitions between actions, for example walking or running, and also includes the transition of a previously stationary person beginning to move. The third category, micro motion, describes small movements compared with the second category, for example movements of the hands, head, or feet. The present invention uses these basic concepts to classify skeleton motion but is not particularly limited to them; the analysis may be adapted in practice.

The recognition methods are likewise divided into static, dynamic, and micro-dynamic recognition. Human recognition has been widely discussed, the most commonly used probabilistic method being the hidden Markov model. In the present invention, static recognition is performed with any combination of image separation and the centroid method, together with any combination of arithmetic recognition algorithms such as neural-network algorithms, genetic algorithms, or fuzzy algorithms. Dynamic and micro-dynamic recognition use any combination of image separation, the centroid method, and vector feature values, again combined with neural-network, genetic, or fuzzy algorithms. The vector feature value is the movement trajectory of an object, which may be any combination of a human skeleton, a face, or a gesture trajectory.

The static case is explained first. Referring to FIG. 6A, a flowchart of the static human-image recognition method of the present invention comprises the following steps:
Step 100: read the image processing data.
Step 102: perform image separation on the image processing data to find at least one human-image contour.
Step 104: compute the centroid of each human-image contour to obtain a plurality of feature values.
Step 106: recognize the feature values with an arithmetic recognition algorithm.

Step 102: at the start of the skeleton recognition algorithm the human object must be isolated, and in a still picture this requires image segmentation. Image segmentation is a basic topic in many research fields; its methods fall into two broad classes, semi-automatic and fully automatic. Medical imaging, digital video, and many other image processing applications rely on segmentation to extract a specific object or region from a picture for subsequent recognition or compression. In video compression, the standards from MPEG-4 onward are object-based: different objects in a frame are segmented and processed separately to reach higher compression ratios, so automatic object segmentation is the most basic and important step. Among the many segmentation methods the watershed method is the most commonly used: the image is treated as a topographic surface in which the value of each pixel represents its height, and catchment basins represent the segmented regions. Its advantage is that it can segment several objects at once - here a single human-image contour or several - and it guarantees closed contours. Its drawback is that the threshold must be tuned for different images, over-segmentation or overlap effects often occur, and a slight change of threshold can greatly change the result. FIG. 6B shows a single human-image contour segmented by the watershed method. The present invention is not limited to the watershed method; different segmentation techniques may be applied according to the practical application.

Step 104: the segmented object still contains too many pixels; feeding it directly into the recognition algorithm would force the algorithm to process an excessive amount of image data. For a grayscale picture of 640x480 pixels the human figure of FIG. 6B may still occupy on the order of a hundred by a hundred pixels, and running the static recognition algorithm directly on it would cost the human recognition processing unit 150 too much computation time and slow the response. After segmentation, therefore, feature values of the human image must be found, that is, the number of pixels to be processed must be reduced. Referring to FIG. 6C, the human object can be composed of N image blocks, each of which may be 4x4, 3x3, or 8x8 pixels, and each pixel may be black-and-white, grayscale, or color; after the watershed method the image is usually black-and-white or grayscale. Referring to FIGS. 6C and 6D, the centroid method finds the centroid positions of the human-image object, shown as points W1 to W27. These centroid points are the feature values of the human image, and connecting them displays the most important, stick-like skeleton of the human image.

Each feature value corresponds to an X-axis coordinate and a Y-axis coordinate. For a whole image at 640x480 grayscale resolution, each pixel value lies between 0 and 255, the X axis has 640 coordinate points and the Y axis 480. Applying the centroid method to the portion of the image containing the human contour, with each block defined as, say, 10x9 pixels, the centroid of the image values belonging to the contour in each block is computed, giving for instance the coordinate W1 (320, 20) among the feature values of FIG. 6D. In another common and simple variant of the centroid method (not shown), the centroid of the human-image contour's central region is computed first, the centroids of the outermost edges of the contour are then found, and the edge centroids are connected to the central centroid; this also yields a stick-figure skeleton. In practice different image blocks and block contours can indeed be used with the centroid method to find the feature values and complete the skeleton diagram. The centroid method itself is known to those skilled in the art and is not described further; the present invention is not limited to finding feature values with the centroid method alone, and other methods may be combined with it.

Step 106: there are many kinds of arithmetic recognition algorithms; the one used by the skeleton recognition algorithm may be any combination of related rules such as neural-network algorithms, genetic algorithms, or fuzzy algorithms. A neural-network algorithm is a network model of neuron-like cells whose characteristics are determined by the network topology and the properties of its nodes; its most important part is the initial training, in which the rules start from a set of initial weights, and the system keeps adjusting and learning until the actual network output matches the target value, after which the weights are fixed, training is complete, and the trained weights are stored. In a fuzzy algorithm the judgment of an action is built from weights: a human action is weighted, and the weights are the important parameters that decide the action.

Whichever recognition algorithm is used, a human-image database is needed as the basis of comparison, trained in advance with the relevant feature values stored. FIGS. 6E to 6K show front views of different human images; more front views than those listed may be used. FIGS. 7A to 7D show corresponding side views, in which the contour may be one person or several; in practice a side view is easy to recognize as a person only when the posture is sufficiently distinct. The present invention is not limited to the listed front and side views, and further human images may be added according to the application. In practice a contour may also belong to an animal, but after the centroid method finds the skeleton it will be seen that a human skeleton differs from an animal skeleton (not shown), so by storing human skeletons in advance in the database and comparing them with the arithmetic recognition algorithm, people and animals are easily distinguished, which corrects the frequent misjudgments of the earlier infrared approach.

Referring to FIG. 8A, a flowchart of the dynamic human-image recognition method of the present invention comprises the following steps:
Step 112: read the image processing data; fetch the image of the current frame from the memory and go to step 114.
Step 114: perform image separation on each item of image processing data to find at least one human-image contour by image segmentation, and go to step 116.
Step 116: compute the centroid of each human-image contour to obtain a plurality of feature values, store the results, and go to step 118.
Step 118: n = n + 1; return to fetch the next picture.
Step 120: n < threshold. The threshold is a preset value: if it is 2 the system reads only two pictures for motion estimation, if 3 it reads three. When n reaches the preset threshold, go to step 122.
Step 122: perform motion estimation on at least one human-image contour of the multiple items of image processing data to find at least one vector feature value. Motion estimation is commonly used in MPEG or H.264 compression, mainly to reduce data volume; here it is used to judge whether the human image has moved. The judgment may be based on two or three pictures, set according to the practical application and not limited by the present invention. When motion estimation is complete, go to step 124.
Step 124: recognize the feature values together with at least one vector feature value or weight vector feature value using an arithmetic recognition algorithm; after motion estimation, any combination of neural-network, genetic, or fuzzy algorithms completes the dynamic human-image recognition algorithm.

Steps 112 to 116 were described above and are not repeated. For steps 122 and 124: to separate information about the movement of the human image from a complex background, the most common approach uses two different frames of the same background. The human image in the first frame marks the first position of the moving target, and the frame captured after the human image has moved is the second. The human images in the two frames are segmented out, their respective centroids are found, and the two frames are then compared by motion estimation to find the motion vector.
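The per-block centroid features of steps 104/116 and the frame-to-frame comparison of step 122 can be sketched as follows. This is a minimal illustration only: the block size, the toy binary silhouettes, and the idea of keying motion vectors by block index are assumptions made for clarity, not details fixed by the patent.

```python
# Block centroids of a segmented silhouette as feature values W1..Wn, and a
# crude motion vector from the same block's centroid in two frames.
import numpy as np

def block_centroids(mask, block=8):
    feats = {}
    for y0 in range(0, mask.shape[0], block):
        for x0 in range(0, mask.shape[1], block):
            ys, xs = np.nonzero(mask[y0:y0 + block, x0:x0 + block])
            if len(ys):                      # block contains part of the contour
                feats[(y0 // block, x0 // block)] = (x0 + xs.mean(), y0 + ys.mean())
    return feats

def motion_vectors(feats_t1, feats_t2):
    # Displacement of each block centroid present in both frames.
    return {k: (feats_t2[k][0] - v[0], feats_t2[k][1] - v[1])
            for k, v in feats_t1.items() if k in feats_t2}

frame1 = np.zeros((64, 48), dtype=bool); frame1[8:56, 24:32] = True
frame2 = np.zeros((64, 48), dtype=bool); frame2[8:56, 20:28] = True  # moved left
print(motion_vectors(block_centroids(frame1), block_centroids(frame2)))
```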

接著、參考細圖細,第陶是糊影像分割技街將二張不 同圖片中的人體圖像分離出來後,再放置一起相互比較,其中,不同張圖 會依不同的時序在圖中做移動,而第7c圖中的第一張圖中為在㈣時間時, 利用影像分離與重心法,找尋的特徵值為购細_27,且第二張圖中為 在t=t2時間時’利用影像分離與重心法找尋的特徵值為至。最 簡單的方式可以將W1-1比較W2-1在相片中不同的位置,又例如,第—次出 現的人體@轉雛為VVU,而帛二:欠歧哺徵值肩2_彳,所以可以得 知向量評估是向左方軸’若只是讀—轉徵值不足時,更可以將购 至W1-27與為W2_iW2_27相互比較,找出27個向量特徵值_至 MV27 ’再決定現在人體圖像移動祕重向量概值關_。再者,另一種作 法’是可以將第-張圖的特徵值為w1_l27,依照不同的權重來得到 一權重特徵值MV1-T,而第二張圖的特徵值為WnW2_27,依照不同的 權重來得到權重特徵值MV2-T ’再將不同的MV1-T與MV2T相互比較得到權 重向量特徵值MVT。 201234284 接著’將說明權重的義意,例如:現有特徵值為W1-1至W1-27 ’將每 個特徵值相乘每個權重因素,再將所得的結果相加起來即為權重特徵值, 以27個特徵值來說明,權重因素可.以是1/27,當然亦可重要的特徵值權重 因素之值設較大些,而較不重要的權重因素之值設低些。相同的權重向量 特徵值’亦可將不同的向量魏值做權位加騎刺的結果。 所以以下再定義清楚,特徵值或權重特徵值或向量特徵值,或權重向 量特徵值的差别: 特徵值為人體圓像輪廓,利用重心法所產生的值; 權重特徵值為人體圖像輪靡之多個特徵值,利用權重加總的結果; 向量特徵值係為於-預設時間内,當該人體圖像輪摩移動時,對應不 同時間點的該特徵值所形成之—軌跡’或#該人體圖像輪絲動時,對應 不同時間點的該權重特徵值所形成之一軌跡; 權重向量特徵值為人體圖像輪磨之多個向量特徵值,利用權重加總的 結果。 ... 本發明並不限定如何作法’將依據不同的演算法,適當的得到移動向 量之特徵值,或權重特徵值,或向量特徵值,或權重向量特徵值。 以下例舉二個不同範例說明之: 範例一: 首先要得到移動向量的向量特徵值,必須先定義第犯圊的16種軌跡所形 成之方位,例如:東、西、南、北、東北、東南、西南、西北、東北東、 北北東、東南東、南南東、南南西、西南西、北北西與西北西所組成,其 中本發明並不限定只有此。接著,以第扣圖為巾的_例,第8C1是以二 20 201234284 張圖中的不同時間t的特徵值,相互比較而形成的圖形,其中,第一張圖中 的特徵值為W1-9,而楚_2&rei丄 弟—張圖中的特徵值為W2-9。當人體圖像向左移動時 :以得知侧.9會向___㈣ ,所以由第8E圖中,可以 得方向是向西北西方向移動。 範例二: 月!I的敘述可以得知,向量特徵值的評估,可以由二張圖片 相互比對’也可以由三張圖片相互比對不同時間的特徵值,甚致可以多張 • 以上’完全取決系統的反應的速度與處理的時間,當然利用多張進行向量 特徵值的找尋其辨識率就比較準確。而以第8f圖為以三張圖片相互比較 特徵值’又例如··以第郎圖中的_例,當人體圖像向左移動時可以得知 在時間點為在t-t1的W1-14特徵值,會經過t=t2時的W214特徵值,再到 t t3時的W3-14特徵值,此過程可繞成向量特徵值MV14 ,由第圖中可以 得知人體目魏在移動財喊向西雜動。 最後’由範例一得知’經由單點的特徵值比較得到的向量特徵值,例 籲如’剛,其職結果成辨可能略低,若要提高觸結果成功可以加入 MV1至MV27 ’並以權位加總的方式得到權重向量特徵值mvt,來提高辨識 ' 率,相同的範例二亦可採用此方式,得到一個權重向量特徵值附。 最後,接著利用類神經演算法、基因演算法或模糊演算法之任意組合, 自建的2 D模型與分離出來的向量特徵值做匹配,而常用的人體圓像模型 又可分為圓柱狀以及棍棒狀,以用來做為辨識人體圖像移動方向之用。 接著’請參考第9A圖,其為本發明之人體0像微動態辨識法之流程圖, 包含以下步驟: 21 201234284 步驟142 ·讀取影像處理資料。由記憶體中請取這一張園的影像,並跳 至步靜144 * 步驟144.對每轉像處理資行影齡離,峨丨至少—個人體圖 像輪廓。糊雜分觀找出這-關的人體圓像輪廊 ’並跳至步驟146。 步驟146 ·汁算每個人體圓像輪廓之重心,而取得複數個特徵值。找出 這-張圖之人體圖像特徵值,並祕徵值的結果儲存至記億體中,並跳至 步驟148。 步驟148 η π+1。跳回步驟伽下一張圓片,準備讀取一下張圖片。 步驟150 : n<threshold。門檻值(—d)為-預設值,若設定為2, 代表系統只讀取二張圖片來做移動評估,若設定為3,代表系統只讀取張圖 片來做移自評估。當η達到—預設門捏值後就跳至步帮152,準備為移動評 估。 步雜152.對夕個影像處理資料之至少—個人體目像輪廊進行一移動評 估’而找尋至少-個向量特徵值。移動評估(M〇tin〇n Vect〇「E幼⑴油加, 簡稱MV)常用在Mpeg或Η·264雜縮上,主要是使用在減少壓縮量上在 本發明則;I:用來判斷人體有移動,判斷的標準可以有二張圖、三 張圓’視實務上的應來加以設定’本發明並不加赚定之。當完成移動評 估後,即可移至步驟154。 步驟154:利用一算術辨識演算法,對該些特徵值與至少一個向量特徵 值或權重向量特徵值進行_。湘移動評估完後,再配合_經演算法、 基因/戾算法或模掏演算法等之任意組合,來完成實務上的人體圖像微動態 辨識演算法。 22 201234284 一般而言’通常人類在判斷人員動作時,只會注意到人體圖像上某些 重要的移動特徵,而人腦就會依據這些特徵來進行判斷人員的動作。通常, 人員除在行走外,尚有-鱗候是進行微動作時m著時候,雙腳 會做則後規律的擺動’並且移動速度不快,身體擺幅不大時,又例如:且 雙手在做定點運動時,也會做前後規律的擺動,這些都是用來形容微動時 候的特徵。而就實際上而言,微動時的人體圖像變化是最難預測與判斷的, 所以在這些眾多特徵之間將會有其重要的先後順序,來本發明將根據這些 • 先後順序來進行辨識,且這也是影響辨識結果的重要因素。 所以微動態辨識演算法與動態纖演算法之間的差異為,微動態辨識 演算法只需要特徵值或向量特徵值或權重向量特徵值進行辨識,而不需要 利用到權重特徵值》 接著,請先參考抑B圖為第-張靜止的人體圖像’而第9C圖則為為利 用影像分馳術將四張不同圖片中的人體圖像分離出來後,可進行一起相 互比較’由第9C®我們可赠知不__點時右手制鶴是不同的。 • 其中’請參考細圖巾的特徵值為W1-15至W4-15,分別代表不同時間點 的特徵值’特徵值W1-15為t=t1時間所產生的點,而特徵值^2-15為t=t2時 -間所產生的點’且特徵值W3_15為t=t3時間所產生的點,再者,特徵值·^ 為t=t4時間所產生的點。而當人員手臂往上移動時可以得知在時間點為在 t=t1的W1-15概值,會先到t=t2時的W2_i5特徵值,此過程可繪成向量特 徵值MV15-12。而在時間點為在t=t2_2_14特徵值,會到印時的W314 特徵值,此過程可緣成向量特徵值MV15_t23e而在時間點為在印的.14 特徵值’再到t-t4時的W3-14特徵值,此過程可繪成向量特徵值MV15_t34。 23 201234284 而這些向量特徵值與該至少一個人體圖像輪廓之該些特徵值,可以利用一 算術辨識演算法,以進行人體圖像存在之辨識。 再著’請參考另一範例:請參考9F圊我們可以得右手臂的移動除往上 外,更向會向下擺動。其中,請參考第9G圖中的特徵值為㈧扣朽至W7_15, 分別代表不同時間點的特徵值。而當人員手臂往上移動時可以得知在時間 點為在t=t1的W1-15特徵值’依序會在不同時間點經過W2-15、W3-15、 W4-15、W5-15.、W6-15,最後’再到W7-15。而所形成的這些向量特徵值, 則有 MV15-t12、MV15-t23、MV15-t34、MV15-t45、MV15-t56 與 MV15-t67。 而這些向量特徵值與該至少一個人體圖像輪廓之該些特徵值,可以利用一 算術辨識演算法,以進行人員存在之辨識。 唯,本發明並不限定如何作法,將依據不同的演算法適當的得到移動 向量特徵值。 此微動態的演算法可依據動態時的移動向量,來當向量特徵值的有 
MV15-t12、MV15-t23、MV15-t34、MV15-t45、MV15_t56 與 MV15-t67, 最後,再由類神經演算法、基因演算法或模糊演算法之任意組合,自建的 微動2 D模型與分離出來的物體的·向量特徵值做匹配,以尋最合適的微動 變化《在實務上,人體圖像微動辨識演算法有時與再配合原W1-1至W1-27 的特徵值,才能提高辨識率,所用本發明則採用向量特徵值與該些特徵值 相互配合的結果。 而本發明亦可採用人臉辨識的方式,人臉辨識是人類視覺之特別能 力,目前已有相關且多的不同影像處理和人臉辨識技術被提出。在以特徵 為基礎的技術中,以往可以統計測量方法取得由臉部得到之特徵向量集 201234284 合’這健合被視糖部形態並且表示相關特徵和關係。而本發明亦同利 用人體圖像之骨架靜態辨識演算法來加_識人臉,且包含以下步雜: 步驟502:人臉影像分離,找出人臉輪廓。 步驟504 .找出人臉特徵值。即是找尋人體圖像物件的重心位置。 步驟506 :人臉辨識演算法。 ' 詳細的流程與說明可以參依人體圖像之骨架靜態辨識演算法,再此就 不加以重新敝述’唯本發不加嫌以麟顧方法,可以實務上 • 的需求來加以選擇適合的方式。相同的手勢之判斷亦可採用以人體圖像之 骨架靜態辨識演算法中的該動_m;^則,在此,就不加以重新贊述。 所以細本發明確實可以研發出—觀識絲高的人M賴的節能裝 置。而本發明最主要的即是人體圖像辨識演算法,而人體圖像辨識演算法 可搭配人翻像之骨㈣識演算法、手勢麟㈣法或人臉纖演算法等 之任意組合。A&為使得人貞觸結果制更高騎識率,本㈣可搭配不 同的演算法,例如:類神經演算法、基因演算法或模糊演算法等,甚至, 籲眺有適當的演算法亦可加入。當人員辨識的節能裝置辨識為人體圖像辨 識演算法完成後,可利用有線或是無線的方式控制電源裝置,此電源負載 裝置可以是-燈具或是其它的電源負載裝置等。例如台燈、室内燈源、室 外燈源、路燈與空調系統等。最後,在實務上採用在本發明確實可以使得 電源負载裝置,受貞_電祕應所鋪,更料達絲錢源,達成一 種永續綠色電源的概念。 雖然本發明之實闕補如上所述,然其並_ .定本發明, 任何熟習相關技藝者,在不脫離本發明之精神和範圍内,當可作些許之更 25 201234284 動與潤飾,因此本發明之專利保護範圍須視本說明書所附之申請專利範圍 所界定者為準。 【圖式簡單說明】 第1圖係為先前紅外線感測器系統圖(先前技術); 第2圖係為本發明之人員辨識節能裝置之功能方塊圖; 第3圖係為本發明之影像信號處理單元之細部功能方塊圖; 第4圖係為本發明之人員辨識節能裝置之另一功能方塊圖; 第5圊係為本發明之人員動作辨識之流程圖; 第6A圊係為本發明之人體圖像靜態辨識之流程圖; 第6B圖係為本發明之人體圖像靜態辨識之正視圖之第—實施例圓; 第6C圖係為本發明之人體圖像靜態辨識之正視圖之第一實施例圖之 影像分割圖; 第6D圖係為本發明之人體圖像靜態辨識之正視圖之第一實施例圖之 特徵直; 第6E圊係為本發明之人體圓像靜態辨識之正視圖之第二實施例圖; 第6F圊係為本發明之人體圖像靜態辨識之正視圖之第三實施例圖; 笫6G圖係為本發明之人體圖像靜態辨識之正視圖之第四實施例圖; 第6H圊係為本發明之人艎圖像靜態辨識之正視圖之第五實施例囷; 第6丨圖係為本發明之人體圊像靜態辨識之正視圓之第六實施例圖; 第6J圖係為本發明之人體囷像靜態辨識之正視圖之第七實施例圖; 第6K圊係為本發明之人體圖像靜態辨識之正視圖之第八實施例圖; 第7A圓係為本發明之人體圈像靜態辨識之側視圖之第一實施例圖; 26 201234284 第7B圖係為本發明之人體圖像靜態辨識之側視圖之第二實施例圖; 第7C圖係為本發明之人體圖像靜態辨識之侧視圖之第三實施例圖; 第7D圖係為本發明之人體圖像靜態辨識之侧視圖之第四實施例圖; 第8A圖係為本發明之人體圖像動態辨識之流程圖; 第8B圖係為本發明之人體圖像動態辨識之第一實施例圖; 第8C圖係為本發明之人體圖像動態辨識之第一實施例圖之徵特值; 第8D圖係為本發明之定義16方位圖; 第8E ®係為本發日月之人體圖像動態辨識之第—實施例圖移動向量方 向圖; 第8F圖係為本發日月之人體圖像動態辨識之第二實施姻之徵特值; 第8G圖係為本發明之人體圖像動態辨識之第—實施例圖移動向量方 向圖; 第9A圖係為本發明之人體圖像微動態辨識之流程圖; . 第9B圖係為本發明之人體圖像微動態辨識之第—實施例圖; 第9C圖係為本發明之人體圖像微動態辨識之手臂移動圖; 第9D ϋ係為本發曰月之人Μ像微動態辨識之第一實施例圖之手臂移 動特徵值; 第9Ε圖係為本發明之人體圖像微動態辨識之第二實施例圖之移動向 量, 第9F圖係為本發明之人體圖像微動辨識之第二實施例圖之徵特值;及 第9〇圖係為本發明之人體圖像微動態辨識之第二實施例圖之移動向 27 201234284 【主要元件符號說明】 10 紅外線感測器系統 12 紅外線感測器 14 信號檢測放大器 16 信號比較器 20 電源控制裝置 100 人員辨識的節能系統 110 鏡頭 120 影像感測單元 122 影像感測模組 140 廣角幾何修正單元 150 人員辨識處理單元 160 記憶體 190 電源控制裝置 200 影像信號處理單元 201 暗點補償器 202 壞點補償器 203 鏡頭亮度補償器 204 色彩或灰度調整器 205 銳利度調整器 206 雜訊滤除器 207 曝光調整器 28 201234284 208 白平衡調整器 240 人員辨識節能裝置 250 能源負載Then, referring to the detailed picture, Di Tao is the image separation technology street to separate the human body images in two different pictures, and then put them together and compare them. Among them, different pictures will move in the picture according to different timings. In the first picture in Figure 7c, at the time of (4), using the image separation and center of gravity method, the feature value found is _27, and in the second picture is the use of image separation and center of gravity at t=t2 time. The eigenvalue found by the law is to. The easiest way is to compare W1-1 to W2-1 in different positions in the photo. For example, the first occurrence of the human body is changed to VVU, and the second one is: the inferior feeding value is 2_彳, so It can be known that the vector evaluation is to the left axis. If only the read-transition value is insufficient, the W1-27 and the W2_iW2_27 can be compared with each other to find 27 vector eigenvalues _ to MV27'. Image movement secret weight vector value is off _. 
Furthermore, another method 'is that the feature value of the first picture can be w1_l27, a weight feature value MV1-T is obtained according to different weights, and the feature value of the second picture is WnW2_27, according to different weights. The weighted feature value MV2-T′ is obtained and the different MV1-T and MV2T are compared with each other to obtain the weight vector feature value MVT. 201234284 Then, 'the meaning of weights will be explained, for example, the existing feature values are W1-1 to W1-27'. Each feature value is multiplied by each weight factor, and the obtained results are added together to be the weight feature value. The 27 eigenvalues are used to illustrate that the weighting factor can be 1/27. Of course, the value of the important eigenvalue weighting factor is larger, and the value of the less important weighting factor is lower. The same weight vector eigenvalues can also be used as a result of the spurs of different vector values. Therefore, the following clearly defines the difference between the eigenvalue or weight eigenvalue or vector eigenvalue, or the weight vector eigenvalue: the eigenvalue is the human body round image contour, and the value generated by the centroid method is used; the weight eigenvalue is the human body image rim a plurality of eigenvalues, using a weighted total result; the vector eigenvalue is a trajectory formed by the eigenvalue at different time points when the human body image is moving in a preset time # When the body image wheel is moving, one of the trajectories formed by the weight feature values corresponding to different time points; the weight vector feature value is a plurality of vector feature values of the human body image wheel milling, and the weighted total result is used. The present invention does not limit how to do it. Depending on the algorithm, the eigenvalues of the moving vector, or the weight eigenvalues, or the vector eigenvalues, or the weight vector eigenvalues may be appropriately obtained. The following two examples are illustrated: Example 1: First, to obtain the vector eigenvalues of the motion vector, you must first define the orientation of the 16 trajectories of the first shackles, such as: east, west, south, north, northeast, It is composed of southeast, southwest, northwest, northeast east, north north east, southeast east, south south east, south south west, southwest west, north north west and northwest west, and the present invention is not limited thereto. Then, taking the first map as a towel, the 8C1 is a graph formed by comparing the characteristic values of different times t in the 20 201234284 graphs, wherein the feature values in the first graph are W1-9, and Chu _2&rei丄—The characteristic value in the picture is W2-9. When the human body image moves to the left: It is known that the side .9 will turn to ___ (four), so from the 8E figure, the direction can be moved to the northwest direction. Example 2: Month! I can tell that the evaluation of vector eigenvalues can be compared by two pictures. It can also be compared with the eigenvalues of different times by three pictures. It can be more than one. The above depends entirely on the reaction of the system. The speed and processing time, of course, using multiple images to find the vector feature value is more accurate. In the 8th figure, the feature values are compared with each other by three pictures. 
Example 2: From Example 1 it can be seen that vector feature values can be evaluated by comparing two frames, but the feature values at different times can also be compared across three frames, or more. How many frames are used depends entirely on the required system response speed and processing time; of course, using more frames gives a more accurate vector feature value. In FIG. 8F the feature values are compared across three frames: the feature value W1-14 at t=t1 passes through the feature value W2-14 at t=t2 and then reaches the feature value W3-14 at t=t3. This path can be drawn as the vector feature value MV14, which, as the figure shows, points toward the west.

Finally, as the first example shows, a vector feature value obtained by comparing a single point may give a somewhat lower recognition result. To improve it, MV1 to MV27 can be combined by the weighted-vector method into the weight vector feature value MVT, raising the recognition rate; Example 2 can use the same method to obtain a weight vector feature value. In the end, any combination of a neural-network algorithm, a genetic algorithm, or a fuzzy algorithm matches a pre-built 2-D model against the separated vector feature values; commonly used human-body image models can be divided into cylinder models and stick models and serve to identify the direction in which the human-body image moves.

Next, FIG. 9A is a flowchart of the micro-dynamic identification of the human-body image, with the following steps:

Step 142: read the image processing data, that is, fetch the n-th frame from memory, and go to step 144.
Step 144: perform image separation on each set of image processing data to find at least one human-body image contour, and go to step 146.
Step 146: calculate the centroid of each human-body image contour to obtain a plurality of feature values, store the result in memory, and go to step 148.
Step 148: set n = n + 1 and jump back to fetch and read the next frame.
Step 150: test n < threshold, where the threshold is a preset value. If it is set to 2, the system reads only two frames for the motion estimation; if it is set to 3, the system reads three frames. When n reaches the preset threshold, go to step 152 and prepare for the motion estimation.
Step 152: perform a motion estimation on the at least one human-body image contour of the image processing data at the different times, to find at least one vector feature value. Motion-vector estimation (MV) is commonly used in MPEG or H.264 coding, mainly to reduce the amount of compressed data; in the present invention it is used to judge whether the human body has moved. The judgment can be based on two frames or three frames, set according to practical requirements, and the invention is not limited in this respect. When the motion estimation is complete, go to step 154.
Step 154: use an arithmetic recognition algorithm to identify the feature values together with at least one vector feature value or weight vector feature value. After the motion estimation, a neural-network algorithm, a genetic algorithm, or a fuzzy algorithm, in any combination, completes the micro-dynamic identification algorithm for the human-body image.

Generally speaking, when a human judges another person's movement, only a few important moving features of the human-body image are noticed, and the brain judges the person's motion from those features.
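The step 142-154 loop above amounts to: read a frame, separate the human-body contour, store centroid features, repeat until a preset number of frames is available, then estimate motion vectors and classify. A minimal Python sketch of that control flow follows; the function names and the dummy stand-ins for the segmentation and recognition stages are assumptions of the sketch, not parts of the patented method.

```python
from collections import deque

def micro_dynamic_loop(frame_source, segment, centroids, classify, threshold=3):
    """Accumulate per-frame centroid features until `threshold` frames are
    available, then estimate motion vectors and hand them to a classifier."""
    history = deque(maxlen=threshold)          # feature values stored per frame
    for frame in frame_source:                 # step 142: read image data
        contour = segment(frame)               # step 144: image separation
        history.append(centroids(contour))     # step 146: centroid feature values
        if len(history) < threshold:           # step 150: n < threshold, keep reading
            continue
        # step 152: motion estimation between consecutive stored frames
        vectors = [
            [(b[0] - a[0], b[1] - a[1]) for a, b in zip(f1, f2)]
            for f1, f2 in zip(list(history)[:-1], list(history)[1:])
        ]
        # step 154: recognition on feature values plus vector feature values
        if classify(history[-1], vectors):
            return True                        # movement consistent with a person
    return False

# toy usage with dummy stand-ins for the real segmentation / recognition stages
frames = [0, 1, 2, 3]
dummy_segment = lambda f: f
dummy_centroids = lambda c: [(float(c), 0.0)]            # one feature point per frame
dummy_classify = lambda feats, vecs: all(v[0][0] > 0 for v in vecs)  # steady drift
print(micro_dynamic_loop(frames, dummy_segment, dummy_centroids, dummy_classify))  # True
```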
In everyday behaviour there are many micro-motions besides walking: when a micro-motion is performed the feet follow a regular swing, the movement is not fast, and the body sway is small; likewise, when the hands perform a fixed-point motion they also make small, regular swings. These traits characterize micro-motion. In fact, the change of the human-body image during a micro-motion is the hardest to predict and judge, so there is an important ordering among these many features; the invention performs the identification according to these sequences, and this ordering is also an important factor affecting the identification result. The difference between the micro-dynamic identification algorithm and the dynamic identification algorithm is therefore that the micro-dynamic identification algorithm needs only the feature values, the vector feature values, or the weight vector feature values for identification, without using the weight feature values.

FIG. 9B shows the first embodiment of micro-dynamic identification, and FIG. 9C uses the image-separation technique to separate the human-body images in four frames taken at different times so that they can be compared with one another; in FIG. 9C the position of the right hand differs from frame to frame. The feature values W1-15 to W4-15 represent the feature value of that point at different time points: W1-15 is the point produced at t=t1, W2-15 the point produced at t=t2, W3-15 the point produced at t=t3, and W4-15 the point produced at t=t4. When the arm moves upward, the W1-15 value at t=t1 first reaches the W2-15 feature value at t=t2, a step that can be drawn as the vector feature value MV15-t12; from the W2-15 feature value at t=t2 to the W3-15 feature value at t=t3 gives the vector feature value MV15-t23; and from the W3-15 feature value at t=t3 to the W4-15 feature value at t=t4 gives the vector feature value MV15-t34. These vector feature values, together with the feature values of the at least one human-body image contour, can be identified by an arithmetic recognition algorithm to confirm the presence of a human-body image.

In another example, shown in FIG. 9F, the right arm not only moves up and down but also swings downward. The feature values W1-15 to W7-15 in FIG. 9G represent the feature value at successive time points: as the arm moves, the W1-15 value at t=t1 passes through W2-15, W3-15, W4-15, W5-15, and W6-15 at the following time points and finally reaches W7-15. The vector feature values formed along the way are MV15-t12, MV15-t23, MV15-t34, MV15-t45, MV15-t56, and MV15-t67, and these vector feature values, together with the feature values of the at least one human-body image contour, can again be identified by an arithmetic recognition algorithm to confirm the presence of a person. The invention does not limit how this is done; the moving-vector feature values are obtained as appropriate for the chosen algorithm.
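The MV15-t12 through MV15-t67 values above are simply the frame-to-frame steps of one tracked point. The sketch below builds such a step sequence and applies a deliberately crude "small, regular swing" test in place of the arithmetic recognition algorithm; the thresholds and the alternation rule are assumptions of this sketch, not criteria given in the patent.

```python
def point_trajectory(positions):
    """Given one tracked point's positions over successive frames, return the
    step vectors (MV-t12, MV-t23, ...) between consecutive frames."""
    return [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(positions, positions[1:])]

def looks_like_micro_motion(steps, max_step=10.0):
    """Very rough micro-motion test: every step is small and the vertical
    direction reverses at least once (a regular back-and-forth swing)."""
    small = all(abs(dx) <= max_step and abs(dy) <= max_step for dx, dy in steps)
    flips = any(dy1 * dy2 < 0 for (_, dy1), (_, dy2) in zip(steps, steps[1:]))
    return small and flips

# point 15 (e.g. a hand) tracked over seven frames: a small up-and-down swing
hand = [(50, 80), (50, 76), (51, 72), (50, 75), (50, 79), (51, 75), (50, 71)]
steps = point_trajectory(hand)        # plays the role of MV15-t12 .. MV15-t67
print(steps)
print(looks_like_micro_motion(steps))   # True
```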
The micro-dynamic algorithm can thus operate on the motion vectors over time: once the vector feature values MV15-t12, MV15-t23, MV15-t34, MV15-t45, MV15-t56, and MV15-t67 are available, any combination of a neural-network algorithm, a genetic algorithm, or a fuzzy algorithm matches a pre-built 2-D micro-motion model against the vector feature values of the separated object to find the best-fitting micro-motion change. In practice, the micro-motion identification algorithm for the human-body image sometimes has to be combined with the original feature values W1-1 to W1-27 to raise the recognition rate, so the present invention uses the vector feature values and those feature values together.

The invention can also use face recognition. Face recognition is a particular ability of human vision, and many different image-processing and face-recognition techniques have been proposed. In feature-based techniques, statistical measurements have traditionally been used to obtain a set of feature vectors from the face; this set is treated as the facial form and expresses the relevant features and their relationships. The present invention likewise applies the skeleton static identification algorithm of the human-body image to face recognition, with the following steps:

Step 502: separate the face image and find the face contour.
Step 504: find the face feature values, that is, locate the centroid of the face image object.
Step 506: run the face-recognition algorithm.

The detailed flow and explanation follow the skeleton static identification algorithm for the human-body image and are not repeated here; the invention does not restrict which method is used, and a suitable one can be chosen according to practical requirements. Gestures can be judged in the same way, using the dynamic identification method of the skeleton identification algorithm for the human-body image, and this is likewise not repeated here.

The invention can therefore realize a person-recognition energy-saving device with a high recognition rate. Its core is the human-body image recognition algorithm, which can be any combination of the skeleton recognition algorithm for human-body images, the gesture recognition algorithm, and the face recognition algorithm. To obtain an even higher recognition rate, these can be combined with further algorithms such as neural-network, genetic, or fuzzy algorithms, and any other suitable algorithm can be added. Once the recognition by the human-body image recognition algorithm is complete, the person-recognition energy-saving device can control the power device over a wired or wireless link; the power load device may be a lamp or any other power load, for example a desk lamp, an indoor light source, an outdoor light source, a street lamp, or an air-conditioning system. In practice, the invention lets the power load device be controlled according to the identified presence of a person, so that energy is saved and the concept of a sustainable green power source is achieved.

Although the invention has been disclosed by way of the embodiments above, they are not intended to limit the invention. Anyone skilled in the relevant art may make some changes and refinements without departing from the spirit and scope of the invention, so the scope of patent protection shall be defined by the claims attached to this specification.
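The concluding paragraph above says that, once recognition has confirmed a person's presence or departure, the device switches or adjusts the power load over a wired or wireless link. The sketch below shows one plausible shape for that decision logic, with the recognition pipeline reduced to a sequence of per-frame presence flags; the idle-frame timeout and the command names are assumptions of this sketch, not elements of the patent.

```python
def presence_controller(readings, idle_frames=3):
    """Given per-frame person-present flags from the recognition pipeline,
    emit 'on'/'off' power commands, turning the load off only after
    `idle_frames` consecutive empty frames."""
    commands = []
    powered, idle = False, 0
    for present in readings:
        if present:
            idle = 0
            if not powered:
                commands.append("on")    # person detected: enable the load
                powered = True
        else:
            idle += 1
            if powered and idle >= idle_frames:
                commands.append("off")   # absent long enough: cut the load
                powered = False
    return commands

print(presence_controller([0, 1, 1, 0, 0, 0, 0, 1]))   # ['on', 'off', 'on']
```

The same structure would also fit the dimming variant mentioned in the description, where the current or voltage of the load is lowered instead of switched off outright.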
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a diagram of a prior-art infrared sensor system;
FIG. 2 is a functional block diagram of the person-recognition energy-saving device of the invention;
FIG. 3 is a detailed functional block diagram of the image signal processing unit of the invention;
FIG. 4 is another functional block diagram of the person-recognition energy-saving device of the invention;
FIG. 5 is a flowchart of person-motion recognition according to the invention;
FIG. 6A is a flowchart of static recognition of the human-body image according to the invention;
FIG. 6B is a front view of a first embodiment of static human-body image recognition;
FIG. 6C is the image-segmentation view of the first-embodiment front view;
FIG. 6D shows the feature values of the first-embodiment front view;
FIG. 6E is a front view of a second embodiment of static human-body image recognition;
FIG. 6F is a front view of a third embodiment;
FIG. 6G is a front view of a fourth embodiment;
FIG. 6H is a front view of a fifth embodiment;
FIG. 6I is a front view of a sixth embodiment;
FIG. 6J is a front view of a seventh embodiment;
FIG. 6K is a front view of an eighth embodiment;
FIG. 7A is a side view of a first embodiment of static human-body image recognition;
FIG. 7B is a side view of a second embodiment;
FIG. 7C is a side view of a third embodiment;
FIG. 7D is a side view of a fourth embodiment;
FIG. 8A is a flowchart of dynamic recognition of the human-body image according to the invention;
FIG. 8B shows a first embodiment of dynamic human-body image recognition;
FIG. 8C shows the feature values of the first embodiment of dynamic recognition;
FIG. 8D defines the sixteen orientations used by the invention;
FIG. 8E is a moving-vector direction diagram of the first embodiment of dynamic recognition;
FIG. 8F shows the feature values of a second embodiment of dynamic recognition;
FIG. 8G is a moving-vector direction diagram of the second embodiment of dynamic recognition;
FIG. 9A is a flowchart of micro-dynamic recognition of the human-body image according to the invention;
FIG. 9B shows a first embodiment of micro-dynamic human-body image recognition;
FIG. 9C is an arm-movement diagram for micro-dynamic recognition;
FIG. 9D shows the arm-movement feature values of the first embodiment of micro-dynamic recognition;
FIG. 9E shows the moving vectors of a second embodiment of micro-dynamic recognition;
FIG. 9F shows the feature values of the second embodiment of micro-dynamic recognition; and
FIG. 9G is a moving-vector direction diagram of the second embodiment of micro-dynamic recognition.

DESCRIPTION OF REFERENCE NUMERALS

10 infrared sensor system
12 infrared sensor
14 signal detection amplifier
16 signal comparator
20 power control device
100 person-recognition energy-saving system
110 lens
120 image sensing unit
122 image sensing module
140 wide-angle geometric correction unit
150 person-recognition processing unit
160 memory
190 power control device
200 image signal processing unit
201 dark-spot compensator
202 dead-pixel compensator
203 lens brightness compensator
204 color or grayscale adjuster
205 sharpness adjuster
206 noise filter
207 exposure adjuster
208 white-balance adjuster
240 person-recognition energy-saving device
250 energy load

Claims (1)

1. A person-recognition energy-saving device, comprising: an image sensing module, comprising a combination of a lens and an image sensing unit, which generates a plurality of image sensing signals from at least one object; an image signal processing unit, connected to the image sensing unit, which receives the image sensing signals and performs image processing to produce image processing data; and a person-recognition processing unit, connected to the image signal processing unit, which receives the image processing data and performs identification with a human-body image recognition algorithm to confirm the presence or departure of a person, generates at least one control command, and transmits it to at least one power control device, the power control device controlling at least one power device according to the control command.

2. The device of claim 1, wherein the lens is selected from: an ultra-wide-angle lens, a fixed-focus lens, a telephoto lens, a reflective lens, a zoom lens, a micro lens, and a fisheye lens.

3. The device of claim 2, further comprising: a wide-angle geometry corrector, connected between the image signal processing unit and the person-recognition processing unit, which corrects the image processing data deformed by the ultra-wide-angle lens so as to correct image errors caused by an excessive viewing angle.

4. The device of claim 2, further comprising: a wide-angle geometry corrector, connected between the image sensing unit and the image signal processing unit, which corrects the image sensing signals deformed by the ultra-wide-angle lens so as to correct image errors caused by an excessive viewing angle.

5. The device of claim 1, further comprising: a memory, connected to the person-recognition processing unit, which stores the image processing data.

6. The device of claim 1, wherein the image signal processing unit is selected from: a dark-spot compensator, a dead-pixel compensator, a lens brightness compensator, a color or grayscale adjuster, a sharpness adjuster, a noise filter, an exposure adjuster, a white-balance adjuster, and any combination of the above.

7. The device of claim 1, wherein the human-body image recognition algorithm is selected from: a skeleton recognition algorithm for human-body images, a gesture recognition algorithm, a face recognition algorithm, and any combination of the above.

8. The device of claim 1, wherein the human-body image recognition algorithm outputs different control commands according to the following judgments: when a person is judged to be present, it outputs a power-on control command so that the power control device turns on the power device; when the person is judged to have left, it outputs a power-off signal so that the power control device turns off the power device.

9. A person-recognition energy-saving method, comprising the steps of: capturing at least one object to generate a plurality of image sensing signals; performing image processing on the image sensing signals to produce image processing data; and identifying the image processing data according to a human-body image recognition algorithm to confirm the presence or departure of a person, generating at least one control command, and transmitting the control command to at least one power control device to perform energy-saving control of at least one power device.

10. The method of claim 9, wherein the control command is selected from: a command to turn on the energy device, a command to turn off the energy device, a command to adjust the current of the energy device, and a command to adjust the voltage of the energy device.

11. The method of claim 9, wherein the human-body image recognition algorithm is selected from: a skeleton recognition algorithm for human-body images, a gesture recognition algorithm, a face recognition algorithm, and any combination of the above.

12. The method of claim 11, wherein the skeleton recognition algorithm for human-body images, the gesture recognition algorithm, or the face recognition algorithm is selected from: a static identification method, a dynamic identification method, a micro-dynamic identification method, and any combination of the above.

13. The method of claim 12, wherein the static identification method comprises the steps of: reading the image processing data; performing image separation on the image processing data to find at least one human-body image contour; calculating the centroid of each human-body image contour to obtain a plurality of feature values; and identifying the feature values with an arithmetic recognition algorithm.

14. The method of claim 13, wherein the arithmetic recognition algorithm is selected from any combination of: a neural-network algorithm, a genetic algorithm, or a fuzzy algorithm.

15. The method of claim 12, wherein the dynamic identification method and the micro-dynamic identification method comprise the steps of: reading the image processing data at different times; performing image separation on each set of image processing data at the different times to find at least one human-body image contour; calculating the centroid of each human-body image contour to obtain a plurality of feature values; performing a motion estimation on the at least one human-body image contour of the image processing data at the different times to find at least one vector feature value; and identifying the feature values and the at least one vector feature value or a weight vector feature value with an arithmetic recognition algorithm.

16. The method of claim 15, wherein the arithmetic recognition algorithm is selected from any combination of: a neural-network algorithm, a genetic algorithm, or a fuzzy algorithm.

17. The method of claim 15, wherein the weight vector feature value is the result of weighting at least one vector feature value.

18. The method of claim 15, wherein the feature value spatially corresponds to an X-axis coordinate and a Y-axis coordinate.

19. The method of claim 15, wherein the vector feature value is a trajectory formed, within a preset time, by the feature values corresponding to different time points while the human-body image contour moves, or a trajectory formed by the weight feature values corresponding to different time points while the human-body image contour moves.

20. The method of claim 19, wherein the weight feature value is the result of weighting the feature values of the human-body image contour.

21. A person-recognition energy-saving method, comprising the steps of: capturing at least one object to generate a plurality of image sensing signals; performing image processing on the image sensing signals to produce image processing data; and identifying the image processing data according to a human-body image recognition algorithm to confirm the presence or departure of a person, generating at least one control command, and using the control command to control the operation of at least one power device.

22. The method of claim 21, wherein the human-body image recognition algorithm is selected from any combination of: a skeleton recognition algorithm for human-body images, a gesture recognition algorithm, or a face recognition algorithm.

23. The method of claim 22, wherein the skeleton recognition algorithm for human-body images, the gesture recognition algorithm, or the face recognition algorithm comprises the steps of: reading the image processing data; performing image separation on the image processing data to find at least one human-body image contour; calculating the centroid of each human-body image contour to obtain a plurality of feature values; and identifying the feature values of the at least one human-body image contour with an arithmetic recognition algorithm.

24. The method of claim 23, wherein the arithmetic recognition algorithm is selected from: a neural-network algorithm, a genetic algorithm, a fuzzy algorithm, and any combination of the above.
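Method claim 13 above lists the static-identification steps: read the image data, separate the image to find a human-body contour, take centroid feature values, and identify them with an arithmetic recognition algorithm. The sketch below walks through those steps with simple stand-ins; the background-subtraction separation and the height-to-width rule used in place of the recognition algorithm are assumptions of this sketch, not steps named in the claim.

```python
import numpy as np

def static_identification(frame: np.ndarray, background: np.ndarray,
                          diff_thresh=25, min_area=200):
    """Toy pass through the claim-13 steps: separate the foreground, take a
    centroid-based feature value, and apply a simple rule as the recognizer."""
    mask = np.abs(frame.astype(int) - background.astype(int)) > diff_thresh  # image separation
    ys, xs = np.nonzero(mask)
    if xs.size < min_area:                      # nothing person-sized in the frame
        return False, None
    centroid = (xs.mean(), ys.mean())           # feature value of the contour
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    is_person = height > 1.5 * width            # stand-in for the recognition step
    return is_person, centroid

# toy frames: an empty background and a frame containing a tall bright blob
bg = np.zeros((120, 160), dtype=np.uint8)
frame = bg.copy()
frame[20:100, 70:90] = 200
print(static_identification(frame, bg))          # (True, (79.5, 59.5))
```

A real implementation of the claim would substitute the invention's arithmetic recognition algorithm (neural-network, genetic, or fuzzy, per claim 14) for the simple aspect-ratio rule.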
TW100104068A 2011-02-01 2011-02-01 Power-saving based on the human recognition device and method TW201234284A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW100104068A TW201234284A (en) 2011-02-01 2011-02-01 Power-saving based on the human recognition device and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW100104068A TW201234284A (en) 2011-02-01 2011-02-01 Power-saving based on the human recognition device and method

Publications (1)

Publication Number Publication Date
TW201234284A true TW201234284A (en) 2012-08-16

Family

ID=47070090

Family Applications (1)

Application Number Title Priority Date Filing Date
TW100104068A TW201234284A (en) 2011-02-01 2011-02-01 Power-saving based on the human recognition device and method

Country Status (1)

Country Link
TW (1) TW201234284A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI506461B (en) * 2013-07-16 2015-11-01 Univ Nat Taiwan Science Tech Method and system for human action recognition
CN105739673A (en) * 2014-12-10 2016-07-06 鸿富锦精密工业(深圳)有限公司 Gesture creating system and method
TWI551828B (en) * 2014-12-15 2016-10-01 國立臺中科技大學 Control system of air conditioner and method thereof
US9804680B2 (en) 2014-11-07 2017-10-31 Hong Fu Jin Precision Industry (Shenzhen) Co., Ltd. Computing device and method for generating gestures
TWI777689B (en) * 2021-07-26 2022-09-11 國立臺北科技大學 Method of object identification and temperature measurement

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI506461B (en) * 2013-07-16 2015-11-01 Univ Nat Taiwan Science Tech Method and system for human action recognition
US9218545B2 (en) 2013-07-16 2015-12-22 National Taiwan University Of Science And Technology Method and system for human action recognition
US9804680B2 (en) 2014-11-07 2017-10-31 Hong Fu Jin Precision Industry (Shenzhen) Co., Ltd. Computing device and method for generating gestures
CN105739673A (en) * 2014-12-10 2016-07-06 鸿富锦精密工业(深圳)有限公司 Gesture creating system and method
CN105739673B (en) * 2014-12-10 2018-06-05 南宁富桂精密工业有限公司 Gesture creates system and method
TWI551828B (en) * 2014-12-15 2016-10-01 國立臺中科技大學 Control system of air conditioner and method thereof
TWI777689B (en) * 2021-07-26 2022-09-11 國立臺北科技大學 Method of object identification and temperature measurement

Similar Documents

Publication Publication Date Title
US9965865B1 (en) Image data segmentation using depth data
Shih A robust occupancy detection and tracking algorithm for the automatic monitoring and commissioning of a building
US11106903B1 (en) Object detection in image data
US9622322B2 (en) Task light based system and gesture control
US20180081434A1 (en) Eye and Head Tracking
CN102081918B (en) Video image display control method and video image display device
Loh et al. Low-light image enhancement using Gaussian Process for features retrieval
KR100612858B1 (en) Method and apparatus for tracking human using robot
JP5287333B2 (en) Age estimation device
CN108205658A (en) Detection of obstacles early warning system based on the fusion of single binocular vision
CN104395856A (en) Computer implemented method and system for recognizing gestures
TW201234284A (en) Power-saving based on the human recognition device and method
US20120121133A1 (en) System for detecting variations in the face and intelligent system using the detection of variations in the face
CN104182721A (en) Image processing system and image processing method capable of improving face identification rate
CN109542233A (en) A kind of lamp control system based on dynamic gesture and recognition of face
CN114842397A (en) Real-time old man falling detection method based on anomaly detection
US10791607B1 (en) Configuring and controlling light emitters
CN104966300A (en) Bearing roller image detection system, method and image detection device
US20240169687A1 (en) Model training method, scene recognition method, and related device
CN114140745A (en) Method, system, device and medium for detecting personnel attributes of construction site
Wilhelm et al. Sensor fusion for vision and sonar based people tracking on a mobile service robot
US9684828B2 (en) Electronic device and eye region detection method in electronic device
US11423762B1 (en) Providing device power-level notifications
KR100543706B1 (en) Vision-based humanbeing detection method and apparatus
CN107563259A (en) Detect method, photosensitive array and the image sensor of action message