TWM586599U - System for analyzing skin texture and skin lesion using artificial intelligence cloud based platform - Google Patents


Info

Publication number
TWM586599U
TWM586599U
Authority
TW
Taiwan
Prior art keywords
skin
feature vector
captured image
artificial intelligence
parameter
Prior art date
Application number
TW108206572U
Other languages
Chinese (zh)
Inventor
李友專
靳嚴博
侯則瑜
Original Assignee
臺北醫學大學
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 臺北醫學大學 filed Critical 臺北醫學大學
Priority to TW108206572U priority Critical patent/TWM586599U/en
Publication of TWM586599U publication Critical patent/TWM586599U/en

Landscapes

  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

A system for analyzing skin texture and skin lesions using an artificial intelligence cloud-based platform is provided. The system includes an electronic device and a server. The server includes a storage device and a processor. The processor is coupled to the storage device and accesses and executes a plurality of modules stored in the storage device. The modules include: an information receiving module, which receives a captured image and a plurality of user parameters; a feature vector obtaining module, which obtains a first feature vector from the captured image and a second feature vector from the user parameters; a skin parameter acquisition module, which obtains an output result associated with skin parameters from the first feature vector and the second feature vector; and a skin identification module, which determines a skin identification result based on the output result.

Description

Artificial Intelligence Cloud-Based Skin Texture and Skin Lesion Identification System

The present utility model relates to a skin texture and skin lesion detection technology, and more particularly to an artificial intelligence cloud-based skin texture and skin lesion identification system.

Generally speaking, in addition to judging the skin condition from its appearance, a dermatologist also determines whether the skin is abnormal through patient consultation. Based on the appearance and the consultation results, the doctor can make a preliminary judgment on the state of the skin. For example, if a mole on the skin becomes noticeably larger or develops an abnormal protrusion over a period of time, it may be a precursor to a lesion. Once a lesion occurs, treatment takes time and burdens the body, so early detection and timely treatment are the best way to avoid suffering.

However, assessing skin changes currently requires the professional judgment of a doctor. General users easily overlook changes in their skin and find it difficult to make even a preliminary judgment on whether an abnormality has appeared. Therefore, how to learn the skin condition effectively and clearly is one of the problems that those skilled in the art seek to solve.

In view of this, the present utility model provides an artificial intelligence cloud-based skin texture and skin lesion identification system that considers both the skin image and the user's answers to questions, and determines the skin identification result from the skin image and the user parameters.

The present utility model provides an artificial intelligence cloud-based skin texture and skin lesion identification system that includes an electronic device and a server. The electronic device obtains a captured image and a plurality of user parameters. The server is connected to the electronic device and includes a storage device and a processor. The storage device stores a plurality of modules. The processor is coupled to the storage device and accesses and executes the modules stored therein, the modules including an information receiving module, a feature vector obtaining module, a skin parameter acquisition module, and a skin identification module. The information receiving module receives the captured image and the user parameters; the feature vector obtaining module obtains a first feature vector of the captured image and calculates a second feature vector of the user parameters; the skin parameter acquisition module obtains an output result associated with the skin parameters according to the first feature vector and the second feature vector; and the skin identification module determines a skin identification result corresponding to the captured image according to the output result.

In an embodiment of the present utility model, the operation of the feature vector obtaining module to obtain the first feature vector of the captured image includes: using a machine learning model to obtain the first feature vector of the captured image.

In an embodiment of the present utility model, the operation of the feature vector obtaining module to calculate the second feature vector of the user parameters includes: representing each user parameter as a vector; and concatenating the vectorized user parameters and inputting them to a fully connected layer of a machine learning model to obtain the second feature vector.

In an embodiment of the present utility model, the user parameters include a combination of a gender parameter, an age parameter, an affected-area size, a time parameter, or an affected-area change parameter.

In an embodiment of the present utility model, the operation of the skin parameter acquisition module to obtain the output result associated with the skin parameters according to the first feature vector and the second feature vector includes: merging the first feature vector and the second feature vector to obtain a merged vector; and inputting the merged vector to a fully connected layer of a machine learning model to obtain the output result, wherein the output result is associated with a target probability of each skin parameter.

In an embodiment of the present utility model, the operation of the skin identification module to determine the skin identification result corresponding to the captured image according to the skin parameters includes: determining the skin identification result corresponding to the captured image according to the output result.

In an embodiment of the present utility model, the machine learning model includes a convolutional neural network or a deep neural network.

To make the above features and advantages of the present utility model more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.

1‧‧‧Artificial intelligence cloud skin texture and skin lesion identification system
10‧‧‧Electronic device
11, 21‧‧‧Communication device
12, 22‧‧‧Processor
13, 23‧‧‧Storage device
20‧‧‧Server
231‧‧‧Information receiving module
232‧‧‧Feature vector obtaining module
233‧‧‧Skin parameter acquisition module
234‧‧‧Skin identification module
S301~S304, S401~S405‧‧‧Steps

FIG. 1 is a schematic diagram of an artificial intelligence cloud-based skin texture and skin lesion identification system according to an embodiment of the present utility model.

FIG. 2 is a block diagram of the components of the electronic device and the server according to an embodiment of the present utility model.

FIG. 3 is a flowchart of an artificial intelligence cloud-based skin texture and skin lesion identification method according to an embodiment of the present utility model.

FIG. 4 is a flowchart of an artificial intelligence cloud-based skin texture and skin lesion identification method according to an embodiment of the present utility model.

The present utility model considers both the skin image and the user's answers to questions: it uses a machine learning model to obtain a feature vector of the skin image and calculates a feature vector of the user parameters. It then obtains an output result associated with the skin parameters from the two feature vectors to determine the skin identification result. In this way, the skin image and the user's answers can be considered together when determining the identification result for a skin lesion or skin texture.

Some embodiments of the present utility model are described in detail below with reference to the accompanying drawings. Reference numerals cited in the following description denote the same or similar components when they appear in different drawings. These embodiments are only a part of the present utility model and do not disclose all of its possible implementations; rather, they are merely examples of the artificial intelligence cloud-based skin texture and skin lesion identification system within the scope of the patent application of the present utility model.

FIG. 1 is a schematic diagram of an artificial intelligence cloud-based skin texture and skin lesion identification system according to an embodiment of the present utility model. Referring to FIG. 1, the artificial intelligence cloud skin texture and skin lesion identification system 1 includes at least, but is not limited to, an electronic device 10 and a server 20, where the server 20 may be connected to a plurality of electronic devices 10.

FIG. 2 is a block diagram of the components of the electronic device and the server according to an embodiment of the present utility model. Referring to FIG. 2, the electronic device 10 may include, but is not limited to, a communication device 11, a processor 12, and a storage device 13. The electronic device 10 is, for example, a smartphone, tablet computer, notebook computer, personal computer, or other device with computing capability; the present utility model is not limited in this respect. The server 20 may include, but is not limited to, a communication device 21, a processor 22, and a storage device 23. The server 20 is, for example, a computer host, a remote server, a back-end host, or another device; the present utility model is not limited in this respect.

The communication device 11 and the communication device 21 may be communication transceivers supporting third-generation (3G), fourth-generation (4G), fifth-generation (5G), or later-generation mobile communication, Wi-Fi, Ethernet, optical fiber networks, and the like, for connecting to the Internet. The server 20 is communicatively connected to the communication device 11 of the electronic device 10 through the communication device 21 so that the server 20 and the electronic device 10 can exchange data.

The processor 12 is coupled to the communication device 11 and the storage device 13, and the processor 22 is coupled to the communication device 21 and the storage device 23; the processor 12 and the processor 22 can respectively access and execute the modules stored in the storage device 13 and the storage device 23. In different embodiments, the processor 12 and the processor 22 may each be, for example, a central processing unit (CPU), or another programmable general-purpose or special-purpose microprocessor, digital signal processor (DSP), programmable controller, application-specific integrated circuit (ASIC), programmable logic device (PLD), other similar device, or a combination of these devices; the present utility model is not limited in this respect.

The storage device 13 and the storage device 23 are, for example, any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, hard disk, or similar component, or a combination of the foregoing, for storing programs executable by the processor 12 and the processor 22, respectively. In this embodiment, the storage device 23 stores buffered or permanent data and software modules (for example, the information receiving module 231, the feature vector obtaining module 232, the skin parameter acquisition module 233, and the skin identification module 234), the details of which are described in the following embodiments.

FIG. 3 is a flowchart of an artificial intelligence cloud-based skin texture and skin lesion identification method according to an embodiment of the present utility model. Referring to FIG. 2 and FIG. 3 together, the method of this embodiment is applicable to the artificial intelligence cloud skin texture and skin lesion identification system 1 described above; its detailed steps are described below with reference to the devices and components of the electronic device 10 and the server 20. Those skilled in the art will understand that the software modules stored in the server 20 need not be executed on the server 20; they may also be downloaded and stored in the storage device 13 of the electronic device 10, and the electronic device 10 may execute the software modules to perform the artificial intelligence cloud skin texture and skin lesion identification method.

First, the processor 22 accesses and executes the information receiving module 231 to receive a captured image and a plurality of user parameters (step S301). The captured image and the user parameters may be received from the electronic device 10 by the communication device 21 of the server 20. In an embodiment, the captured image and the user parameters are first obtained by the electronic device 10. In detail, the electronic device 10 is coupled to an image source device (not shown) and obtains the captured image from it. The image source device may be a camera configured on the electronic device 10, or a device for storing images such as the storage device 13, an external memory card, or a remote server; the present utility model is not limited in this respect. That is, the user may, for example, operate the electronic device 10 to take an image with the camera, or retrieve a previously captured image from the device, and transmit the selected image to the server 20 as the captured image for subsequent operations.

In addition, the server 20 provides a number of questions for the user to answer. After the user answers these questions through the electronic device 10, the answers are transmitted to the server 20 as the user parameters for subsequent operations. The user answers the questions, for example, through a user interface displayed on the electronic device 10; the user interface may be a chat room of a messaging application, a web page, a voice assistant, or another interactive software interface, and the present utility model is not limited in this respect.

Next, the processor 22 accesses and executes the feature vector obtaining module 232 to obtain a first feature vector of the captured image and calculate a second feature vector of the user parameters (step S302).

In detail, to obtain the first feature vector of the captured image, the processor 22 first trains the parameter values of each layer of a machine learning model using skin lesion image samples and user parameter samples. In an embodiment, the machine learning model is, for example, constructed using neural network techniques. Taking a neural network as an example, its input layer and output layer are connected by numerous neurons and links, among which there may be multiple hidden layers; the number of nodes (neurons) in each layer is not fixed, and a larger number of nodes can be used to strengthen the robustness of the network. In this embodiment, the machine learning model is, for example, a convolutional neural network (CNN) or a deep neural network (DNN); the present utility model is not limited in this respect. Taking a convolutional neural network as an example, the parameter values corresponding to the skin lesion images can be input to the convolutional neural network, which is trained using backward propagation so that a final objective function (loss/cost function) drives the update of the parameters of each layer; the parameter values of each layer of the learning model can thus be trained, with, for example, the mean square error used as the objective function. Each skin lesion image sample may be trained using a known convolutional neural network model architecture such as ResNet50 or InceptionV3.
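The training loop described above (backward propagation driven by a mean-squared-error objective) can be sketched in miniature as follows. This is only an illustration with a single linear layer and randomly generated toy data, not the CNN the embodiment actually trains:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 32 samples with 8 features and a regression target
# (stand-ins for real image features and labels).
X = rng.normal(size=(32, 8))
true_w = rng.normal(size=(8, 1))
y = X @ true_w

# One linear layer trained by gradient descent on mean squared error,
# mirroring "backward propagation with MSE as the objective function".
w = np.zeros((8, 1))
lr = 0.1
losses = []
for _ in range(200):
    pred = X @ w
    err = pred - y
    losses.append(float(np.mean(err ** 2)))  # MSE objective
    grad = 2 * X.T @ err / len(X)            # backward pass (gradient)
    w -= lr * grad                           # parameter update
print(losses[0] > losses[-1])  # True: the objective decreases over training
```

In a real deployment the same principle applies, but the gradients flow through every convolutional and fully connected layer of the network rather than a single weight matrix.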

The image can then be input to the trained machine learning model to obtain image features. In an embodiment, the feature vector obtaining module 232 uses the machine learning model to obtain the first feature vector of the captured image. That is, after the machine learning model has been trained, the processor 22 inputs the captured image to the trained model and extracts the first feature vector of the captured image.
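As a sketch of this extraction step, the toy example below reduces an image to a one-dimensional feature vector with a convolution, a ReLU, and global average pooling. The filter values and image size here are arbitrary stand-ins; the embodiment itself would take activations from a trained CNN such as ResNet50:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive single-channel 2-D 'valid' convolution."""
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def extract_features(image, kernels):
    """Map an HxW image to a 1-D feature vector: conv -> ReLU -> global avg pool."""
    feats = []
    for k in kernels:
        fmap = conv2d_valid(image, k)
        fmap = np.maximum(fmap, 0.0)  # ReLU activation
        feats.append(fmap.mean())     # global average pooling
    return np.array(feats)

rng = np.random.default_rng(1)
image = rng.random((28, 28))  # small stand-in for the 224x224 captured image
kernels = [rng.normal(size=(3, 3)) for _ in range(4)]
first_feature_vector = extract_features(image, kernels)
print(first_feature_vector.shape)  # (4,)
```

A trained network would produce a much longer feature vector (hundreds of dimensions), but the shape of the computation is the same.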

On the other hand, the feature vector obtaining module 232 also calculates the second feature vector of the user parameters. For example, the feature vector obtaining module 232 represents each user parameter as a vector, then concatenates the vectorized user parameters and inputs them to a fully connected layer of the machine learning model to obtain the second feature vector. The dimensionality of the concatenated vectorized user parameters depends on the number of questions and the options within each question.

In detail, the feature vector obtaining module 232 encodes the user parameters received by the server 20 from the electronic device 10 using an indicator function. For example, if the question is the user's gender, the vector (1,0,0) is produced when the user answers male, the vector (0,1,0) when the user answers female, and the vector (0,0,1) when the user declines to answer. After all the user parameters have been encoded, the feature vector obtaining module 232 concatenates the encoded parameters into a merged vector and inputs it to the fully connected layer, which mixes the parameters and outputs an N-dimensional vector. The fully connected layer accounts for the interactions among the user parameters and produces a second feature vector with more dimensions than the original concatenated parameter vector; for example, a 16-dimensional input vector may yield a 256-dimensional output. In an embodiment, the user parameters include one of, or a combination of, a gender parameter, an age parameter, an affected-area size, a time parameter, and an affected-area change parameter.
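The indicator-function encoding described above can be sketched as follows; the English option labels are illustrative stand-ins for the questionnaire's actual choices:

```python
def one_hot(answer, options):
    """Indicator-function encoding: a 1 in the position of the chosen option."""
    vec = [0] * len(options)
    vec[options.index(answer)] = 1
    return vec

# Stand-in option list for the gender question.
GENDER = ["male", "female", "no answer"]

print(one_hot("male", GENDER))       # [1, 0, 0]
print(one_hot("female", GENDER))     # [0, 1, 0]
print(one_hot("no answer", GENDER))  # [0, 0, 1]
```

Each questionnaire item is encoded the same way with its own option list, and the resulting vectors are concatenated before entering the fully connected layer.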

Next, the processor 22 accesses and executes the skin parameter acquisition module 233 to obtain an output result associated with the skin parameters according to the first feature vector and the second feature vector (step S303). The skin parameter acquisition module 233 merges the first feature vector and the second feature vector into a merged vector and inputs the merged vector to a fully connected layer of the machine learning model to obtain the output result, where the output result is associated with the target probability of each skin parameter. In an embodiment, since the first feature vector obtained from the machine learning model may have the two-dimensional structure of an image, it may first be flattened into a one-dimensional vector before being merged with the second feature vector.

In detail, the skin parameter acquisition module 233 takes the first feature vector of the captured image obtained by the feature vector obtaining module 232 and the second feature vector calculated from the user parameters, and merges the two into a merged vector. The skin parameter acquisition module 233 then inputs the merged vector to the fully connected layer and produces an output result at the output layer. The number of outputs depends on the number of desired classification results; for example, if the output is to be divided into two categories (say, no skin condition and skin condition present), the output layer holds skin parameters for two output categories, and the present utility model does not limit the number of output categories. The merged vector input to the fully connected layer is ultimately converted into a probability (between 0 and 1) for each output category. In this embodiment, the skin parameters in different output categories such as "mole", "acne", or "skin condition" are divided into classes such as "mole with a lower risk of malignant change / mole with a higher risk of malignant change", "acne / non-acne", or "good skin condition / poor skin condition", and the output result is associated with the target probability of each skin parameter in each output category.
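A minimal sketch of this merge-and-classify step is shown below, with random weights standing in for the trained fully connected layer and the two feature vectors; the 256-dimensional sizes are illustrative:

```python
import numpy as np

def softmax(z):
    """Convert raw scores into probabilities between 0 and 1 that sum to 1."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(2)

# Stand-ins for the two feature vectors: the image features (already
# flattened from 2-D to 1-D) and the user-parameter features.
first_feature_vector = rng.normal(size=256)   # from the CNN
second_feature_vector = rng.normal(size=256)  # from the user-parameter FC layer

# Merge along the feature dimension.
merged = np.concatenate([first_feature_vector, second_feature_vector])  # 512-dim

# Fully connected output layer with two output categories
# (e.g. "no skin condition" vs. "skin condition present").
W = rng.normal(size=(2, merged.size))
b = np.zeros(2)
probabilities = softmax(W @ merged + b)

print(probabilities.shape)  # (2,): one probability per output category
```

In the trained system the weights W and b come from training, and the category with the highest probability becomes the identification result.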

Finally, the processor 22 accesses and executes the skin identification module 234 to determine the skin identification result corresponding to the captured image according to the output result (step S304). The skin identification module 234 determines the skin identification result corresponding to the captured image from the output result; in detail, the category with the highest probability in the output result is the most likely category.

Based on the above, the embodiments of the present utility model input the image to the machine learning model to obtain the feature vector of the image, use a fully connected layer to compute the vector of the user parameters, merge the two vectors, feed the merged vector as data into a fully connected layer of the machine learning model, and produce the output result through that layer. In other words, the present utility model considers not only the image information but also the non-image information; by building a machine learning model that can consider both at once, it more realistically simulates the clinical assessment of skin condition and improves the model's accuracy.

The following embodiment takes a "mole" as an example, in which the output category "mole" is divided into two skin parameters, "mole with a lower risk of malignant change" and "mole with a higher risk of malignant change", and a convolutional neural network is used as the machine learning model. FIG. 4 is a flowchart of an artificial intelligence cloud-based skin texture and skin lesion identification method according to an embodiment of the present utility model. Referring to FIG. 4, first, the processor 22 receives a captured image and a plurality of user parameters (step S401). In this embodiment, the user takes the captured image with the electronic device 10 or selects it from the electronic device 10; the image size is set, for example, to 224x224 in accordance with the input format and size of a known convolutional neural network, so the captured image can be represented as a (224, 224, 3) matrix, where 3 denotes the RGB color channels. The user also answers a number of questions provided by the server 20, for example a combination of "gender (male, female, prefer not to answer)", "age (under 20, 21-40, 41-65, 66 and over)", "affected-area size (0.6 cm or less, more than 0.6 cm)", "time present (1 year or less, more than 1 year and less than 2 years, more than 2 years, not sure)", or "change in the affected area (changed in the last month, unchanged in the last month, not sure)". The processor 22 receives the captured image and the user parameters transmitted by the electronic device 10.

Next, the processor 22 uses the convolutional neural network to obtain the first feature vector of the captured image (step S4021) and calculates the second feature vector of the user parameters (step S4022). The processor 22 inputs the captured image to the trained convolutional neural network to obtain the first feature vector, where the convolutional neural network has been trained on images of moles. After the server 20 receives the user's answers, the processor 22 encodes the answers as vectors. For example, in this embodiment, if the user answers male, under 20, 0.6 cm or less, 1 year or less, and changed in the last month, the vectorized answers are gender (1,0,0), age (1,0,0,0), affected-area size (1,0), time present (1,0,0,0), and affected-area change (1,0,0). The processor 22 then concatenates the vectorized user parameters along the feature dimension to obtain a merged vector and inputs the merged vector to the fully connected layer of the machine learning model to obtain the second feature vector.
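Concatenating the answer vectors of this example yields the 16-dimensional merged vector mentioned earlier (3 + 4 + 2 + 4 + 3 = 16). A sketch, with comments standing in for the questionnaire's option labels:

```python
# One-hot answer vectors from the example above.
gender   = [1, 0, 0]     # male / female / prefer not to answer
age      = [1, 0, 0, 0]  # under 20 / 21-40 / 41-65 / 66 and over
area     = [1, 0]        # 0.6 cm or less / more than 0.6 cm
duration = [1, 0, 0, 0]  # <=1 yr / 1-2 yr / >2 yr / not sure
change   = [1, 0, 0]     # changed this month / unchanged / not sure

# Concatenate along the feature dimension.
merged = gender + age + area + duration + change
print(len(merged))  # 16: this vector feeds the fully connected layer
```

The fully connected layer then expands this 16-dimensional vector into the higher-dimensional second feature vector (for example, 256 dimensions, as described earlier).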

Next, the processor 22 merges the first feature vector and the second feature vector to obtain a merged vector (step S403), and inputs the merged vector to the fully connected layer of the convolutional neural network to obtain an output result (step S404). In this embodiment, the processor 22 concatenates the first feature vector and the second feature vector along the feature dimension to obtain the merged vector and inputs it to the fully connected layer of the convolutional neural network to obtain the output result, where the output result is associated with the respective target probabilities of the two skin parameters "mole with a lower risk of malignant change" and "mole with a higher risk of malignant change" in the output category "mole".

Finally, the processor 22 determines a skin recognition result corresponding to the captured image according to the output result (step S405). In this embodiment, if the probability of the skin parameter "mole with a lower risk of malignancy" is larger in the output result, the processor determines that the captured image contains a mole with a lower risk of malignancy; if the probability of the skin parameter "mole with a higher risk of malignancy" is larger, it determines that the captured image contains a mole with a higher risk of malignancy.
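Steps S403 to S405 above can be sketched as follows; the feature values, weight shapes, and weight values here are illustrative assumptions standing in for the trained model, not parameters from the patent:

```python
import math

def fully_connected(x, weights, biases):
    # One dense layer: y_j = sum_i x_i * w[j][i] + b_j
    return [sum(xi * w for xi, w in zip(x, col)) + b
            for col, b in zip(weights, biases)]

def softmax(logits):
    # Turn logits into probabilities that sum to 1.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

# First feature vector (from the CNN) and second feature vector (from the
# user parameters), both illustrative, concatenated along the feature
# dimension (step S403).
first = [0.2, 0.7, 0.1]
second = [0.5, 0.3]
merged = first + second

# Illustrative weights for a two-class head over the skin parameters
# "lower-risk mole" vs "higher-risk mole" (step S404).
weights = [[0.1, 0.4, -0.2, 0.3, 0.0],    # weights for class 0
           [-0.3, 0.2, 0.5, -0.1, 0.6]]   # weights for class 1
biases = [0.05, -0.05]
probs = softmax(fully_connected(merged, weights, biases))

# Step S405: the class with the larger probability is the recognition result.
labels = ["mole with lower risk of malignancy",
          "mole with higher risk of malignancy"]
result = labels[probs.index(max(probs))]
```

With these illustrative weights the "lower risk" logit (0.48) exceeds the "higher risk" logit (0.21), so the sketch reports the lower-risk class.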

In another embodiment, if the convolutional neural network is trained on images of other lesions such as acne, or on skin-texture images such as skin condition, and different questions for judging the lesion or skin texture are posed as user parameters for "acne", "skin condition", and the like, then the model built by the system of this utility model can assist in determining whether an image of acne, skin condition, or another lesion or skin texture matches the state of a specific lesion or skin texture.

In another embodiment, the artificial-intelligence cloud-based skin texture and skin lesion recognition model built by the recognition method of this utility model can be trained with backpropagation, using the final objective function to update the parameters of each layer and thereby improve the recognition accuracy of the model.
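The backpropagation update described above can be illustrated on a single layer; this trains a tiny logistic classifier by gradient descent on a cross-entropy objective and is only a sketch of the update rule, with toy data and hyperparameters that are illustrative assumptions, not the patent's actual model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, b, xs, ys):
    # Binary cross-entropy objective averaged over the training set.
    total = 0.0
    for x, y in zip(xs, ys):
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        total -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return total / len(xs)

# Toy training data: 2-d feature vectors with binary labels.
xs = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.2], [0.8, 0.1]]
ys = [1, 1, 0, 0]

w, b, lr = [0.0, 0.0], 0.0, 0.5
before = loss(w, b, xs, ys)
for _ in range(200):
    # Backpropagated gradient of cross-entropy: d(loss)/dw_i = (p - y) * x_i.
    gw, gb = [0.0, 0.0], 0.0
    for x, y in zip(xs, ys):
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        for i in range(len(w)):
            gw[i] += (p - y) * x[i]
        gb += p - y
    # Gradient-descent parameter update for this layer.
    w = [wi - lr * gwi / len(xs) for wi, gwi in zip(w, gw)]
    b -= lr * gb / len(xs)
after = loss(w, b, xs, ys)
```

In a multi-layer model the same chain-rule gradients are propagated backward through every layer, which is what lets the final objective drive updates of all layer parameters at once.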

In summary, the artificial-intelligence cloud-based skin texture and skin lesion recognition system provided by this utility model considers both the skin image and the user's answers to the questions. After the image is input into the machine learning model to obtain its feature vector and the vector of user parameters is computed through a fully connected layer, the image feature vector and the user-parameter vector are merged and fed into a fully connected layer of the machine learning model, which produces the output result. In this way, the probability of each skin parameter can be obtained from the feature vectors of the skin image and the user parameters to determine the recognition result for the lesion or skin texture. That is, in addition to image information, this utility model also considers non-image information: by building a machine learning model that takes both into account, it more realistically simulates the clinical situation in which a lesion or skin texture is judged from the state of the affected area together with question-and-answer results, thereby improving the accuracy of the model.

Although this utility model has been disclosed above by way of embodiments, the embodiments are not intended to limit it. Anyone with ordinary knowledge in the relevant technical field may make some changes and modifications without departing from the spirit and scope of this utility model; the scope of protection of this utility model shall therefore be defined by the appended claims.

Claims (7)

1. An artificial-intelligence cloud-based skin texture and skin lesion recognition system, comprising:
an electronic device, obtaining a captured image and a plurality of user parameters; and
a server, connected to the electronic device, the server comprising:
a storage device, storing a plurality of modules; and
a processor, coupled to the storage device, accessing and executing the plurality of modules stored in the storage device, the plurality of modules comprising:
an information receiving module, receiving the captured image and the plurality of user parameters;
a feature vector obtaining module, obtaining a first feature vector of the captured image and computing a second feature vector of the plurality of user parameters;
a skin parameter obtaining module, obtaining an output result associated with skin parameters according to the first feature vector and the second feature vector; and
a skin recognition module, determining a skin recognition result corresponding to the captured image according to the output result.

2. The artificial-intelligence cloud-based skin texture and skin lesion recognition system according to claim 1, wherein the operation of the feature vector obtaining module obtaining the first feature vector of the captured image comprises:
using a machine learning model to obtain the first feature vector of the captured image.

3. The artificial-intelligence cloud-based skin texture and skin lesion recognition system according to claim 1, wherein the operation of the feature vector obtaining module computing the second feature vector of the plurality of user parameters comprises:
representing each of the plurality of user parameters as a vector; and
merging the vectorized user parameters and inputting them into a fully connected layer of a machine learning model to obtain the second feature vector.

4. The artificial-intelligence cloud-based skin texture and skin lesion recognition system according to claim 3, wherein the plurality of user parameters comprise a combination of a gender parameter, an age parameter, an affected-area size parameter, a time parameter, or an affected-area change parameter.

5. The artificial-intelligence cloud-based skin texture and skin lesion recognition system according to claim 1, wherein the operation of the skin parameter obtaining module obtaining the output result associated with the skin parameters according to the first feature vector and the second feature vector comprises:
merging the first feature vector and the second feature vector to obtain a merged vector; and
inputting the merged vector into a fully connected layer of a machine learning model to obtain the output result, wherein the output result is associated with target probabilities of the skin parameters.

6. The artificial-intelligence cloud-based skin texture and skin lesion recognition system according to claim 5, wherein the operation of the skin recognition module determining the skin recognition result corresponding to the captured image according to the skin parameters comprises:
determining the skin recognition result corresponding to the captured image according to the output result.

7. The artificial-intelligence cloud-based skin texture and skin lesion recognition system according to claim 2, wherein the machine learning model comprises a convolutional neural network or a deep neural network.
TW108206572U 2019-05-24 2019-05-24 System for analyzing skin texture and skin lesion using artificial intelligence cloud based platform TWM586599U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW108206572U TWM586599U (en) 2019-05-24 2019-05-24 System for analyzing skin texture and skin lesion using artificial intelligence cloud based platform

Publications (1)

Publication Number Publication Date
TWM586599U true TWM586599U (en) 2019-11-21

Family

ID=69190034

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI728369B (en) * 2019-05-24 2021-05-21 臺北醫學大學 Method and system for analyzing skin texture and skin lesion using artificial intelligence cloud based platform
US11986285B2 (en) 2020-10-29 2024-05-21 National Taiwan University Disease diagnosing method and disease diagnosing system
