TWI813096B - An indicating method and system for processing positions of a workpiece - Google Patents

An indicating method and system for processing positions of a workpiece Download PDF

Info

Publication number
TWI813096B
TWI813096B (application TW110146986A)
Authority
TW
Taiwan
Prior art keywords
processing
workpiece
indication
threshold value
image
Prior art date
Application number
TW110146986A
Other languages
Chinese (zh)
Other versions
TW202325467A (en)
Inventor
顏均泰
高志強
陳俊榮
吳智逸
Original Assignee
科智企業股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 科智企業股份有限公司 filed Critical 科智企業股份有限公司
Priority to TW110146986A priority Critical patent/TWI813096B/en
Publication of TW202325467A publication Critical patent/TW202325467A/en
Application granted granted Critical
Publication of TWI813096B publication Critical patent/TWI813096B/en

Landscapes

  • Electrical Discharge Machining, Electrochemical Machining, And Combined Machining (AREA)
  • Pharmaceuticals Containing Other Organic And Inorganic Compounds (AREA)
  • Multi-Process Working Machines And Systems (AREA)
  • General Factory Administration (AREA)
  • Machine Tool Sensing Apparatuses (AREA)

Abstract

The present invention provides an indicating method for processing positions of a workpiece. The method comprises steps a, b, and c. In step a, processing data of the workpiece is obtained by querying a workpiece database; the processing data includes the processing steps of the workpiece, the order of the processing steps, the human-action images corresponding to those steps, and the processing coordinates. In step b, an inference unit determines which step's human-action images the operator's actions in the captured images correspond to, and an instruction trigger unit calculates the coordinates of the next processing position to generate an indication coordinate. In step c, before the operator starts the next action, the instruction trigger unit transmits the indication coordinate to an indication device, and the indication device provides an indication signal pointing out the next processing position of the workpiece according to the indication coordinate. An indicating system for processing positions of a workpiece includes a workpiece database, an inference unit, and an instruction trigger unit.

Description

Method and system for indicating processing positions of a workpiece

The present invention relates to a method and system for indicating processing positions of a workpiece, and in particular to one usable on automated machining production lines, on production lines that require human operators, or on a combination of the two.

In the prior art, when operators perform routine tasks such as assembly, inspection, or testing, they usually complete each step according to their own habits. Taking a typical workpiece-assembly operation as an example, the operator first decides, from the production-line system, which points are to be measured, and then assembles, inspects, or tests those points of the workpiece based on his or her own judgment. A newcomer may make mistakes through unfamiliarity with the production-line system; an operator may misread a measurement point shown by the system; or the point actually measured may differ from the point prompted by the system. Even experienced operators cannot be guaranteed to avoid errors caused by various physiological and psychological conditions.

Automated assembly equipment is also used in the prior art to carry out routine assembly, inspection, or testing of workpieces. Although such equipment follows a preset workpiece plan, adjustment work in actual operation can still go wrong through human error, so the conventional automated assembly mechanism also needs improvement.

On the other hand, when manual assembly, inspection, or testing is required, it usually means that automated assembly equipment cannot be used, for example because identical or different workpieces may have identical or different processing positions and no regular pattern exists. Moreover, for small-volume, high-variety combinations of assembly, inspection, or testing tasks, automated equipment would have to be adjusted frequently, which raises the overall cost.

Furthermore, if image-recognition technology is used to identify the various objects and then prompt the next step, a very large amount of advance labeling work is required before the step prompting can proceed smoothly, because the objects involved often vary greatly in size.

In addition, the size and proportion of an object in the captured image change with the operator's actual movements: the workpiece appears larger when brought closer to the camera and smaller when moved away. Training images of many different sizes would be needed so that objects of any size can be framed correctly, but recording and labeling such images takes a great deal of time.

The present invention was therefore developed to overcome the foregoing problems.

To solve the technical problems described above — a new operator making mistakes through unfamiliarity with the production-line system, an operator misreading a measurement point, the measured point differing from the point prompted by the system (for example, a screw being left unfastened in manual screwing because it was overlooked), or automated assembly equipment still producing errors in practice even though it follows a preset workpiece plan — the method and system of the present invention automatically indicate the position of the next action. The operator can move quickly to the indicated position, reducing search time, and the system automatically indicates the measurement position of each task, so the operator only has to measure as instructed, which reduces measurements taken at the wrong position. Newcomers can get up to speed quickly, with fewer wrong actions and fewer chances of measuring the wrong position. The method and system of the present invention also provide a further check on the related processes, so that the error rate of existing automated assembly equipment can be greatly reduced.

To achieve the foregoing objects, the present invention provides a method for indicating processing positions of a workpiece, comprising steps A, B, and C. In step A, processing data of the workpiece is obtained by querying a workpiece database; the processing data includes the processing steps of the workpiece, the order of the processing steps, and the processing coordinate positions. In step B, an inference unit determines which processing step's coordinate position in the processing data the coordinate position of the workpiece in an image belongs to, and an instruction trigger unit calculates the coordinate position of the next processing point in real time to generate an indication coordinate. In step C, before the next step starts, the instruction trigger unit transmits the indication coordinate to an indication device, and the indication device provides an indication signal for the next processing position of the workpiece according to the indication coordinate.

The present invention further provides a method for indicating processing positions of a workpiece, comprising steps a, b, and c. In step a, processing data of the workpiece is obtained by querying a workpiece database; the processing data includes the processing steps of the workpiece, the order of the processing steps, the human-action images corresponding to those processing steps, and the processing coordinate positions. In step b, an inference unit determines which processing step's human-action image in the processing data the operator's actions in an image belong to, and an instruction trigger unit calculates the coordinate position of the next processing point to generate an indication coordinate. In step c, before the operator's next action starts, the instruction trigger unit transmits the indication coordinate to an indication device, and the indication device provides an indication signal for the next processing position of the workpiece according to the indication coordinate.
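
The patent describes the contents of the processing data only in prose. Purely as an illustration, the following Python sketch shows one way such a record could be laid out; all class, field, and identifier names (ProcessingStep, ProcessingData, "PCB-001", and so on) are assumptions made for this sketch, not structures disclosed by the patent.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class ProcessingStep:
    order: int                               # position in the processing sequence
    coordinate: Tuple[float, float]          # processing coordinate position on the workpiece
    action_clip: Optional[str] = None        # reference human-action images/video for this step
    final_threshold: Optional[float] = None  # per-object edge threshold from step X / X'

@dataclass
class ProcessingData:
    workpiece_id: str                        # e.g. a workpiece number decoded from a QR code
    steps: List[ProcessingStep] = field(default_factory=list)

# a trivial in-memory stand-in for the workpiece database
workpiece_db = {
    "PCB-001": ProcessingData(
        workpiece_id="PCB-001",
        steps=[
            ProcessingStep(order=1, coordinate=(120.0, 80.0)),
            ProcessingStep(order=2, coordinate=(240.0, 95.0)),
        ],
    ),
}

# step a: query the database by workpiece number
processing_data = workpiece_db["PCB-001"]
```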

In one implementation, a step d follows step c: steps b and c are repeated until all processing steps are completed.

In one implementation, step b further includes: the inference unit uses a CNN algorithm to determine which step's human actions in the processing data the operator's actions in the image belong to, and the coordinate position of the next processing point is calculated from the proportion of the image occupied by each of the objects in the image, thereby generating the indication coordinate.

In one implementation, step B further includes: the inference unit uses a CNN algorithm to determine which processing step in the processing data each object in the image belongs to, and the coordinate position of the next processing point is calculated from the proportion of the image occupied by each of the objects in the image, thereby generating the indication coordinate.
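
The claims state that the next coordinate is calculated from the proportion of the image occupied by the objects, without giving a formula. The sketch below is one hedged interpretation: the stored processing coordinates refer to a reference view, and they are rescaled by how large a known reference object appears in the live image. The bounding-box convention and function names are assumptions, not the patent's definition.

```python
def to_live_coordinate(ref_coord, ref_bbox, live_bbox):
    """Map a stored processing coordinate into the live image, assuming the
    workpiece only translates and scales between the reference view and the
    live view.  Bounding boxes are (x, y, w, h) of the same object in each view."""
    rx, ry, rw, rh = ref_bbox
    lx, ly, lw, lh = live_bbox
    sx, sy = lw / rw, lh / rh          # size proportion between the two views
    cx, cy = ref_coord
    return (lx + (cx - rx) * sx,
            ly + (cy - ry) * sy)

# e.g. the stored coordinate (240, 95) seen in a live frame where the board
# appears 1.25x larger than in the reference image
indication_coord = to_live_coordinate((240.0, 95.0), (50, 40, 400, 300), (80, 60, 500, 375))
```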

In one implementation, step C further includes: before the next step starts, the instruction trigger unit transmits the indication coordinate to the indication device, the indication device provides an indication signal for the next processing position of the workpiece according to the indication coordinate, and, after the processing is completed, a verification indication signal is provided according to the processing state of that next processing position.

In one implementation, a step D follows step C: steps B and C are repeated until all processing steps are completed.

In one implementation, the indication signal persists until the step for the next processing position ends.

In one implementation, in step A the processing data of the workpiece is obtained by workpiece number.

In one implementation, a step X precedes step A and includes steps X1, X2, and X3. In step X1, the edges of each object in the image are found with the derivative-based Marr-Hildreth edge-detection algorithm. In step X2, an average threshold value is calculated from the edges of each object and used to enhance those edges. In step X3, the average threshold value is multiplied by a ratio to determine the final threshold value of each object corresponding to the workpiece; the final threshold value is included in the processing data.
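
The patent names the Marr-Hildreth detector and the averaging/ratio steps but gives no implementation. The following is a minimal sketch, assuming a grayscale image as a NumPy array, an object mask marking the object's region, and an illustrative ratio of 0.8; it approximates Marr-Hildreth with a Laplacian-of-Gaussian filter followed by zero-crossing detection.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def marr_hildreth_edges(gray, sigma=2.0):
    """Laplacian-of-Gaussian response and zero-crossing mask (Marr-Hildreth)."""
    log = gaussian_laplace(gray.astype(float), sigma=sigma)
    zc = np.zeros(log.shape, dtype=bool)
    # mark a pixel where the LoG response changes sign against the pixel below or to the right
    zc[:-1, :] |= np.sign(log[:-1, :]) != np.sign(log[1:, :])
    zc[:, :-1] |= np.sign(log[:, :-1]) != np.sign(log[:, 1:])
    return log, zc

def final_threshold(gray, object_mask, ratio=0.8, sigma=2.0):
    """Steps X1-X3: average edge strength over one object's edges, scaled by a ratio.
    object_mask is a boolean array marking the object's region (assumed given)."""
    log, zc = marr_hildreth_edges(gray, sigma=sigma)
    edge_strength = np.abs(log)[zc & object_mask]   # X1: edge pixels belonging to this object
    avg_threshold = float(edge_strength.mean())     # X2: average threshold for this object
    return avg_threshold * ratio                    # X3: final threshold stored with the processing data
```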

實施時,於步驟A前更包括步驟X’,該步驟X’包括:X’1:以CNN演算法訓練各影像中同一物件中相異大小或相異光源者的平均門檻值,藉以找出同一物件中相異大小或相異光源者經訓練平均門檻值;X’2:藉由邊緣檢測導數Marr-Hildreth演算法找出該各物件的邊緣,藉由各物件的邊緣分別計算出平均門檻值;X’3:將該平均門檻值與該經訓練平均門檻值加以篩選,而得到經篩選門檻值;X’4:將該經篩選門檻值乘上一比例,而決定該工件所對應的各物件的最終門檻值,該最終門檻值是包括於該加工資料中。 During implementation, step X' is included before step A. This step X' includes: The trained average threshold value of objects with different sizes or different light sources in the same object; value; X'3: filter the average threshold value and the trained average threshold value to obtain the filtered threshold value; X'4: multiply the filtered threshold value by a ratio to determine the corresponding The final threshold value of each object, the final threshold value is included in the processing data.

The present invention further provides a system for indicating processing positions of a workpiece, comprising a workpiece database, an inference unit, and an instruction trigger unit. The workpiece database stores a plurality of processing data records, each including the processing steps of the workpiece, the order of the processing steps, the human-action images corresponding to those processing steps, and the processing coordinate positions. The inference unit determines which processing step's coordinate position in the processing data the coordinate position of the workpiece in an image belongs to, or which processing step's human actions in the processing data the operator's actions in the image belong to. The instruction trigger unit calculates the coordinate position of the next processing point from the recognized coordinate position to generate an indication coordinate and, before the next step starts, transmits the indication coordinate to an indication device, which provides an indication signal for the next processing position of the workpiece according to the indication coordinate.
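
To make the division of labour between the three components concrete, here is a hedged sketch of how they could drive one workpiece through its steps, reusing the ProcessingData sketch above. The camera, inference unit, trigger unit, and indicator are assumed interfaces with illustrative method names (capture, classify, to_indication_coordinate, point_at); the patent does not define these APIs.

```python
def run_indication(processing_data, camera, inference_unit, trigger_unit, indicator):
    """Drive one workpiece through its processing steps, pointing at the next
    processing position before each step starts (steps B/C repeated as in D)."""
    steps = sorted(processing_data.steps, key=lambda s: s.order)
    completed = set()
    while len(completed) < len(steps):
        frame = camera.capture()
        # inference unit: which processing step does the current frame belong to?
        current = inference_unit.classify(frame, processing_data)
        if current is None or current.order in completed:
            continue
        completed.add(current.order)
        # instruction trigger unit: compute and send the next indication coordinate
        nxt = next((s for s in steps if s.order == current.order + 1), None)
        if nxt is not None:
            coord = trigger_unit.to_indication_coordinate(nxt.coordinate, frame)
            indicator.point_at(coord)   # indication device highlights the next position
```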

In one implementation, the system further comprises a human-action database that provides a plurality of historical human-action images for the inference unit to determine which processing step's human actions in the processing data the operator's actions in the image belong to.

In one implementation, the system further comprises a training unit that uses a CNN algorithm to build, from a plurality of historical images, a first standard group for determining which processing step's coordinate position in the processing data the coordinate position of the workpiece belongs to, and a second standard group for determining which processing step's human actions in the processing data the operator's actions in the image belong to.

In one implementation, the system further comprises an object edge enhancement unit that finds the edges of each object in the image with the derivative-based Marr-Hildreth edge-detection algorithm, calculates an average threshold value from the edges of each object for enhancing those edges, and multiplies the average threshold value by a ratio to determine the final threshold value of each object corresponding to the workpiece, the final threshold value being included in the processing data.

In one implementation, the object edge enhancement unit further trains, with a CNN algorithm, the average threshold values of the same object at different sizes or under different light sources across the images to obtain a trained average threshold value; finds the edges of each object with the derivative-based Marr-Hildreth edge-detection algorithm and calculates an average threshold value from those edges; screens the average threshold value against the trained average threshold value to obtain a screened threshold value; and multiplies the screened threshold value by a ratio to determine the final threshold value of each object corresponding to the workpiece, the final threshold value being included in the processing data.

In one implementation, the inference unit further causes the instruction trigger unit, before the next step starts, to transmit the indication coordinate to an indication device; the indication device provides an indication signal for the next processing position of the workpiece according to the indication coordinate and, after the processing is completed, provides a verification indication signal according to the processing state of that next processing position.

In one implementation, the indication device is an infrared indicator light or a projection device.

For a further understanding of the present invention, preferred embodiments are described in detail below with reference to the drawings and reference numerals, explaining the specific construction of the invention and the effects it achieves.

Referring to FIG. 1, the present invention provides a system for indicating processing positions of a workpiece, comprising a workpiece database 1, an inference unit 2, an instruction trigger unit 3, a human-action database 4, and a training unit 5. The workpiece database 1 stores a plurality of processing data records, each including the processing steps of the workpiece, the order of the processing steps, the human-action images corresponding to those processing steps, and the processing coordinate positions. The inference unit 2 determines which processing step's coordinate position in the processing data the coordinate position of the workpiece in an image belongs to, or which processing step's human actions in the processing data the operator's actions in the image belong to. The instruction trigger unit 3 calculates the coordinate position of the next processing point from the recognized coordinate position to generate an indication coordinate and, before the next step starts, transmits the indication coordinate to an indication device 7, which provides an indication signal for the next processing position of the workpiece according to the indication coordinate. The human-action database 4 provides a plurality of historical human-action images for the inference unit 2 to determine which processing step's human actions in the processing data the operator's actions in the image belong to. The training unit 5 builds, from a plurality of historical images, a first standard group for determining which processing step's coordinate position in the processing data the coordinate position of the workpiece belongs to, and a second standard group for determining which processing step's human actions in the processing data the operator's actions in the image belong to. An object edge enhancement unit 8, connected to the workpiece database 1, the human-action database 4, and the training unit 5, finds the edges of each object in the image with the derivative-based Marr-Hildreth edge-detection algorithm, calculates an average threshold value from the edges of each object for enhancing those edges, and multiplies the average threshold value by a ratio to determine the final threshold value of each object corresponding to the workpiece, the final threshold value being included in the processing data.

Referring to FIG. 2, the present invention further provides a method for indicating processing positions of a workpiece, comprising: step A, obtaining processing data of the workpiece by querying a workpiece database, the processing data including the processing steps of the workpiece, the order of the processing steps, and the processing coordinate positions; step B, determining with an inference unit which processing step's coordinate position in the processing data the coordinate position of the workpiece in an image belongs to, and calculating with an instruction trigger unit, in real time, the coordinate position of the next processing point to generate an indication coordinate; and step C, before the next step starts, transmitting the indication coordinate from the instruction trigger unit to an indication device, which provides an indication signal for the next processing position of the workpiece according to the indication coordinate.

Referring to FIG. 3, the present invention further provides a method for indicating processing positions of a workpiece, comprising: step a, obtaining processing data of the workpiece by querying a workpiece database, the processing data including the processing steps of the workpiece, the order of the processing steps, the human-action images corresponding to those processing steps, and the processing coordinate positions; step b, determining with an inference unit which processing step's human-action image in the processing data the operator's actions in an image belong to, and calculating with an instruction trigger unit the coordinate position of the next processing point to generate an indication coordinate; and step c, before the operator's next action starts, transmitting the indication coordinate from the instruction trigger unit to an indication device, which provides an indication signal for the next processing position of the workpiece according to the indication coordinate.

The method and system of the present invention are described in detail below. Referring to the embodiment of FIGS. 5 to 9, which shows an automated circuit-board assembly process: in FIG. 5, an automated machine (not shown) first tightens the screw (object) in the dashed box of step 1 of the flow; in FIG. 6, the automated machine installs the flat cable (object) of step 2 at the position of the dashed box; in FIG. 7, it installs another flat cable (object) of step 3 at the position of the dashed box; in FIG. 8, it installs yet another flat cable of step 4 at the position of the dashed box; finally, in FIG. 9, it tightens the other screw in the dashed box of step 5.

In step A of the present invention, the processing data of the workpiece is obtained by querying the workpiece database 1; the processing data includes the processing steps of the workpiece, the order of the processing steps, and the processing coordinate positions. In this embodiment the workpiece is the above circuit board, and the automated machining data of this particular circuit board is obtained by querying the workpiece database 1. The automated machining data includes the processing steps of the workpiece, i.e., the steps of FIGS. 5 to 9; the order of the processing steps, i.e., steps 1 to 5 of FIGS. 5 to 9; and the processing coordinate positions, i.e., the coordinate positions of the dashed boxes of FIGS. 5 to 9 in the respective captured images. In another embodiment of the present invention, in step A the processing data of the workpiece is obtained by workpiece number, and the workpiece number may be implemented, for example, as a QR code.

In another embodiment, step B further includes: the inference unit 2 determines which processing step's coordinate position in the processing data the coordinate position of the workpiece belongs to in the images captured by the photographic device 6, and the instruction trigger unit 3 calculates the coordinate position of the next processing point in real time to generate the indication coordinate. That is, referring to FIG. 6, the inference unit 2 reads the images captured by the photographic device 6 during the step of FIG. 6 and determines, by convolution operations, which processing step's coordinate position in the processing data the images relating to the step of FIG. 6 belong to (i.e., the position of the dashed box in FIG. 6 for step 2, inserting the flat cable into the cable hole); the instruction trigger unit 3 then calculates in real time the coordinate position of the next processing point, i.e., the assembly position of the other flat cable in FIG. 7 (the dashed box of step 3), to generate the indication coordinate. Before the next step, i.e., the step of FIG. 7, starts, the instruction trigger unit 3 transmits the indication coordinate to the indication device 7, and the indication device 7 provides, according to the indication coordinate, an indication signal for the next processing position of the circuit board to the automated machine so that the automated machine performs the subsequent corresponding processing step. It should be noted that in this embodiment the indication signal of the present invention is an electrical indication signal used for electronic communication; in another embodiment, the indication signal may also be a physical infrared indication signal. Furthermore, in embodiments of the present invention, the system may also perform object recognition on the images through a sliding window over a deep neural network, obtaining a feature representation of any position in each image and its output classification. That is, the method and system of the present invention first generate feature maps of different sizes for objects of different sizes (i.e., the objects in the foregoing steps), then divide each feature map proportionally as required and compute the average value within each division, so that objects of different sizes in the image can be represented by features of a specific size, typically of the same size.
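
The paragraph above describes dividing feature maps of different sizes into proportional sections and averaging each section, so that objects of any apparent size yield a fixed-size representation. A minimal NumPy sketch of that pooling idea follows; the grid size and input shapes are illustrative, and this is not the patent's actual network.

```python
import numpy as np

def fixed_size_descriptor(feature_map, grid=(4, 4)):
    """Average-pool a feature map of arbitrary H x W (x C) over a proportional
    gy x gx grid, so that different input sizes give the same output size."""
    h, w = feature_map.shape[:2]
    gy, gx = grid
    ys = np.linspace(0, h, gy + 1, dtype=int)   # proportional row boundaries
    xs = np.linspace(0, w, gx + 1, dtype=int)   # proportional column boundaries
    cells = [feature_map[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean(axis=(0, 1))
             for i in range(gy) for j in range(gx)]
    return np.stack(cells)                      # shape depends only on the grid (and channels)

small = fixed_size_descriptor(np.random.rand(24, 32, 8))   # -> (16, 8)
large = fixed_size_descriptor(np.random.rand(96, 60, 8))   # -> (16, 8)
```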

In another embodiment, step B further includes: the inference unit 2 uses a CNN algorithm to determine the horizontal and vertical displacement of the objects in the images between two adjacent sampling time points, and thereby determines which processing step in the processing data each object in the images belongs to; this means that a single step containing several different or identical processing operations also falls within the scope of the present invention — in other words, the steps of FIGS. 5 to 9 merely illustrate embodiments with a single processing operation per step. The coordinate position of the next processing point is then calculated from the proportion of the image occupied by the recognized objects, generating the indication coordinate.
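
The displacement check between two adjacent sampling time points is described only in words. A small sketch of the idea, assuming the inference unit already yields a bounding box (x, y, w, h) for a tracked object in each frame:

```python
def displacement(prev_bbox, curr_bbox):
    """Horizontal and vertical displacement of one tracked object between two
    adjacent sampling time points, measured between bounding-box centres."""
    px, py, pw, ph = prev_bbox
    cx, cy, cw, ch = curr_bbox
    return ((cx + cw / 2.0) - (px + pw / 2.0),
            (cy + ch / 2.0) - (py + ph / 2.0))

dx, dy = displacement((100, 80, 40, 40), (118, 74, 40, 40))   # -> (18.0, -6.0)
```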

In another embodiment, a step D follows step C, in which steps B and C are repeated until all processing steps are completed. That is, for the embodiment of FIGS. 5 to 9, the method and system of the present invention continue until all steps are finished.

Referring to the embodiment of FIGS. 10 to 14, which shows an operator assembling a circuit board: in FIG. 10, the operator first tightens the screw in the dashed box; in FIG. 11, the operator installs a flat cable at the position of the dashed box; in FIG. 12, the operator installs another flat cable at the position of the dashed box; in FIG. 13, the operator installs yet another flat cable at the position of the dashed box; finally, in FIG. 14, the operator tightens the other screw in the dashed box. FIGS. 10 to 14 show only the operator's hands, but the method and system of the present invention are not limited to recognizing the operator's hands and may also recognize the operator's overall motion behavior.

Continuing with the embodiment of FIGS. 10 to 14, in step a of the present invention the processing data of the workpiece is obtained by querying the workpiece database 1; the processing data includes the processing steps of the workpiece, the order of the processing steps, the human-action images corresponding to those processing steps, and the processing coordinate positions. In this embodiment the workpiece is a circuit board, and the processing data of this circuit board is obtained by querying the workpiece database 1. The processing data includes the processing steps of the workpiece, i.e., steps 1' to 5' of FIGS. 10 to 14; the order of the processing steps, i.e., the order of FIGS. 10 to 14; the human-action images corresponding to those processing steps, i.e., the operator's hand movements in FIGS. 10 to 14; and the processing coordinate positions in each image, i.e., the coordinate positions of the dashed boxes of steps 1' to 5' of FIGS. 10 to 14. Note that, as mentioned above, the human actions of the present invention are not limited to hand movements and may also include, for example, various movements of the operator's body and facial expressions. In another embodiment of the present invention, in step a the processing data of the workpiece is obtained by workpiece number, and the workpiece number may likewise be implemented, for example, as a QR code.

In step b, the inference unit 2 determines which processing step's human-action image in the processing data the operator's actions in the image belong to, and the instruction trigger unit 3 calculates the coordinate position of the next processing point to generate the indication coordinate. That is, the inference unit 2 further analyzes the operator's actions in the image to decide which processing step's human-action image in the processing data they belong to; after evaluating the human actions in the image, the instruction trigger unit 3 calculates the coordinate position of the next processing point to generate the indication coordinate. Before the operator's next action starts, the instruction trigger unit 3 transmits the indication coordinate to the indication device, which provides an indication signal for the next processing position of the workpiece according to the indication coordinate. In this embodiment, the indication signal of the present invention is a physical infrared indication signal. Referring to FIG. 10, at step 1' the infrared indication signal points out the processing position of the next step, step 2' (circular dashed line); referring to FIG. 11, at step 2' it points out the processing position of step 3' (circular dashed line); referring to FIG. 12, at step 3' it points out the processing position of step 4' (circular dashed line); referring to FIG. 13, at step 4' it points out the processing position of step 5' (circular dashed line); and referring to FIG. 14, at step 5', since it is the last step, the infrared indication signal stops or returns to the first step. Steps a to c of this embodiment differ from steps A to C (or A to D) of the preceding embodiment in that they add comparison and analysis of the human actions corresponding to the workpiece. In another embodiment, besides calculating the relative position from the image frame size, the system of the present invention also adjusts and converts the coordinates according to the display range (FOV) of the indication device 7 so that the final result matches the display range of the actual workpiece.
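
The coordinate adjustment to the indication device's display range (FOV) is described without equations; the sketch below assumes the simplest case, in which the camera image and the indicator's display window cover the same working area and differ only by an axis-aligned scale and offset. The window convention is an assumption.

```python
def image_to_indicator(point, image_size, indicator_window):
    """Convert an indication coordinate from camera-image pixels into the
    coordinate frame of the indication device's display range (FOV)."""
    (x, y), (img_w, img_h) = point, image_size
    win_x, win_y, win_w, win_h = indicator_window   # indicator's display window
    return (win_x + x / img_w * win_w,
            win_y + y / img_h * win_h)

# a point at (320, 240) in a 640x480 image, projected by a device whose
# display window spans (0, 0) to (1280, 800)
target = image_to_indicator((320, 240), (640, 480), (0, 0, 1280, 800))   # -> (640.0, 400.0)
```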

In another embodiment, step b of the present invention further includes: the inference unit 2 performs motion estimation on the images captured by the photographic device 6 with a CNN algorithm to determine the horizontal and vertical displacement (i.e., the change of the motion vector) of pixels, blocks, or objects in the images between two adjacent sampling time points, and thereby determines which step's human actions in the processing data the operator's actions in the images belong to. The coordinate position of the next processing point is then calculated from the proportion of the image occupied by the recognized objects, combined with the recognized human actions, to generate the indication coordinate, after which the indication proceeds with the physical infrared indication signal as described above. The method and system of the present invention can use various sampling modes, including single-frame action recognition, multi-frame late-fusion action recognition, multi-frame early-fusion action recognition, and multi-frame slow-fusion action recognition.
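
The four sampling modes are only named in the text. Two of them are easy to illustrate under the assumption of a per-frame CNN that outputs class probabilities; slow fusion would interleave partial fusion across network layers and is not shown here.

```python
import numpy as np

def late_fusion(per_frame_probs):
    """Multi-frame delayed (late) fusion: classify every frame on its own,
    then average the per-frame class-probability vectors."""
    return np.mean(np.stack(per_frame_probs), axis=0)

def early_fusion_input(frames):
    """Multi-frame early fusion: stack consecutive frames along the channel
    axis so a single CNN sees the whole short clip as one input."""
    return np.concatenate(frames, axis=-1)      # (H, W, C * n_frames)

probs = late_fusion([np.array([0.7, 0.3]), np.array([0.5, 0.5]), np.array([0.9, 0.1])])
clip = early_fusion_input([np.zeros((480, 640, 3)) for _ in range(4)])   # -> (480, 640, 12)
```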

In another embodiment, the present invention may also use the inference unit 2 to determine which processing step's human-action image in the processing data the operator's actions in the image belong to, and use the inference unit 2 with a CNN algorithm to determine the horizontal and vertical displacement of the objects in the images between two adjacent sampling time points; by analyzing the human actions in the images, it determines which processing step in the processing data the objects in the images belong to, and the instruction trigger unit 3 calculates the coordinate position of the next processing point to generate the indication coordinate. Then, as described above, if the embodiment involves an operator, the instruction trigger unit 3 transmits the indication coordinate to the indication device before the operator's next action starts; the indication device provides, according to the indication coordinate, an indication signal for the next processing position of the workpiece to an infrared indication device, which then performs the advance-prompting step described above. If the embodiment involves no operator, the instruction trigger unit 3 transmits the indication coordinate to the indication device 7, which provides, according to the indication coordinate, an indication signal for the next processing position of the circuit board to the processor of the automated machine so that the automated machine performs the subsequent corresponding processing step according to that indication signal.

In another embodiment, a step X precedes step A and includes steps X1, X2, and X3. In step X1, the edges of each object in the image are found with the derivative-based Marr-Hildreth edge-detection algorithm. In step X2, an average threshold value is calculated from the edges of each object and used to enhance those edges. In step X3, the average threshold value is multiplied by a specific ratio to determine the final threshold value of each object corresponding to the workpiece; the final threshold value is included in the processing data. In another embodiment, steps X1, X2, and X3 may also precede step a.

在另一實施例中,於步驟A前更包括步驟X’,該步驟X’包括:步驟X’1、X’2、X’3與X’4。於該步驟X’1中,以CNN演算法訓練各影像中同一物件中相異大小或相異光源者的平均門檻值,藉以找出同一物件中相異大小或相異光源者的經訓練平均門檻值,意思是該物件邊緣強化單元8將相異大小或相異光源的同一物件以CNN演算法進行訓練。於該步驟X’2中,該物件邊緣強化單元8藉由邊緣檢測導數 Marr–Hildreth 演算法找出該各物件的邊緣,藉由各物件的邊緣分別計算出對應各物件的平均門檻值;於該步驟X’3中,將該平均門檻值與該經訓練平均門檻值加以篩選,而得到經篩選門檻值,篩選的規則可以現有演算法或以人工加以輔助。於該步驟X’4中,將該經篩選門檻值乘上特定比例,同樣的,此比例可以現有演算法或以人工加以輔助而估算,而決定該工件所對應的各物件的最終門檻值,該最終門檻值是包括於該加工資料中。在另一實施例中,於該步驟a前亦可包括前述步驟X’1、X’2、X’3與X’4。In another embodiment, step X' is further included before step A, and step X' includes: steps X'1, X'2, X'3 and X'4. In this step The threshold value means that the object edge enhancement unit 8 trains the same object with different sizes or different light sources using the CNN algorithm. In the step In step X'3, the average threshold value and the trained average threshold value are filtered to obtain the filtered threshold value. The filtering rules can be assisted by existing algorithms or manually. In step The final threshold is included in the processing data. In another embodiment, the aforementioned steps X’1, X’2, X’3 and X’4 may also be included before step a.

The Marr-Hildreth edge-detection algorithm used in the foregoing embodiments selects a specific threshold value for each image before framing the objects. The present invention further uses a CNN algorithm, trained on multiple images, to learn the threshold values of different objects in the images, so that the threshold value of each distinct object can be found quickly; the Marr-Hildreth algorithm is then used for framing, reducing framing mistakes or errors caused by ambient lighting. Using Marr-Hildreth alone is affected by changing light and shadow, so the framed range can differ between frames and range errors may cause failures. The present invention therefore uses the CNN algorithm to assist in finding the range threshold values of objects in the image. As mentioned above, the size and proportion of an object in the image change with the operator's actual movements; to frame objects of different sizes correctly, training images of many different sizes would be needed, which costs much time for recording and labeling. To solve this problem, the present invention exploits the camera function and portability of today's mobile devices, such as smartphones: objects are framed in real time while the video is being recorded, which greatly reduces the labeling work for objects of different sizes, speeds up the CNN's search for object edge thresholds, and accelerates the collection of training data.

In another embodiment, the training unit 5 builds the first standard group and the second standard group from a plurality of historical images from the human-action database 4. The historical images of the present invention may be obtained with a general photographic device such as a commonly used smartphone (photographic device 6). The first standard group serves to determine, from the historical images from the human-action database 4, which processing step's coordinate position in the processing data the coordinate position of the workpiece belongs to. The second standard group is built from the historical images to determine which processing step's human actions in the processing data the operator's actions in the image belong to. That is, the training unit 5 of the system of the present invention can automatically complete object labeling and recognition as well as action labeling and recognition from the images captured from historical videos, build various models from those labeling and recognition results, and supply the recognition results and models to the inference unit 2. In another embodiment, the system of the present invention further includes a communication module, for example a PLC communication module, that allows the system to accept external commands. In addition, the system of the present invention uses action recognition of relative positions and then converts the coordinates to the actual display position, adjusting for the display ranges of different indication objects to suit the actual processing operation.

In another embodiment, referring to FIGS. 15 and 16, which show the assembly of a notebook computer (FIG. 15 before assembly and FIG. 16 after the component on the right has been assembled), step C further includes: before the next assembly step starts, the instruction trigger unit 3 transmits the indication coordinate to an indication device, for example a short-throw projector, which provides an indication signal for the next processing position of the workpiece according to the indication coordinate and, after the processing is completed, provides a verification indication signal according to the processing state of that next processing position (i.e., the red and green lights in FIGS. 15 and 16). The inference unit 2 determines which processing step's coordinate position in the processing data the coordinate position of the workpiece in the captured images (FIGS. 15 and 16) belongs to. If the workpiece is installed in the correct position, a first light signal — a green light — is projected onto the installation area of the workpiece to remind the operator that the installation position is correct; if the workpiece is installed in the wrong position or has not yet been installed, a second light signal — a red light — is projected onto the installation area to remind the operator that the installation position is incorrect or the part has not been installed. As described above, the instruction trigger unit then calculates in real time the coordinate position of the next processing point to generate the indication coordinate. In another embodiment, the indication device of the present invention may be any indication device capable of receiving display commands, and the method and system of the present invention, for example the inference unit 2, convert the coordinates to suit different indication devices. The method and system of the present invention therefore not only indicate the position of the next action, allowing the automated assembly equipment or the operator to move quickly to the indicated position and greatly reducing the time otherwise spent searching, but also provide, after processing is completed, a verification indication signal based on the processing state of the next processing position, confirming whether the assembly is correct and whether anything is missing or wrong.
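
The green/red verification behaviour of FIGS. 15 and 16 can be summarised as a comparison between the expected and the detected assembly state of each region; the dictionary-based interface below is an illustration only, not an API defined by the patent.

```python
def verification_signals(expected_states, detected_states):
    """For each installation region, project green when the detected assembly
    state matches the expected one, and red when the part is missing or misplaced."""
    return {region: ("green" if detected_states.get(region) == wanted else "red")
            for region, wanted in expected_states.items()}

signals = verification_signals(
    expected_states={"right_module": "installed", "left_module": "installed"},
    detected_states={"right_module": "installed", "left_module": "missing"},
)
# -> {"right_module": "green", "left_module": "red"}
```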

Therefore, the present invention has the following advantages:

1. The method and system of the present invention automatically indicate the position of the next action, allowing the operator to move quickly to the indicated position and greatly reducing the time the operator spends searching.

2. The method and system of the present invention automatically indicate the measurement position of each task; the operator only has to measure as instructed, which greatly reduces measurements taken at the wrong position.

3. The method and system of the present invention further check the related automated processes, so that the error rate of existing automated assembly equipment can be greatly reduced.

4. The method and system of the present invention need only images captured by a general photographic device such as a smartphone; the captured images can be used for training immediately, and models can be built automatically as required, which is highly convenient.

5. The present invention uses a CNN algorithm to assist in finding the range threshold value of the same object in the images, solving the problem that the size and proportion of the same object in an image change with, for example, the operator's actual movements.

6. The images used by the method and system of the present invention can come from images captured by today's various mobile devices, so users can conveniently and quickly capture images of various production operations and then carry out the foregoing embodiments with the method and system of the present invention.

The foregoing describes specific embodiments of the present invention and the technical means employed. Many changes and modifications can be derived from the disclosure or teaching herein; they are still to be regarded as equivalent changes within the concept of the present invention and, as long as their effects do not go beyond the substantive spirit covered by the specification and drawings, they should be considered within the technical scope of the present invention.

In summary, in view of the above disclosure, the present invention indeed achieves its intended purpose by providing a method and system for indicating processing positions of a workpiece that is of great value for industrial use, and an invention patent application is therefore filed in accordance with the law.

Workpiece database 1; Inference unit 2; Instruction trigger unit 3; Human-action database 4; Training unit 5; Photographic device 6; Steps A, B, C, D, E, F; Steps a, b, c, d; Steps 1, 2, 3, 4, 5; Steps 1', 2', 3', 4', 5'

FIG. 1 is a structural block diagram of an embodiment of the system for indicating processing positions of a workpiece according to the present invention.

FIG. 2 is a flowchart of an embodiment of the method for indicating processing positions of a workpiece according to the present invention.

FIG. 3 is a flowchart of an embodiment of the method for indicating processing positions of a workpiece according to the present invention.

FIG. 4 is a flowchart of an embodiment of the method for indicating processing positions of a workpiece according to the present invention.

FIGS. 5 to 9 are schematic views of an embodiment of the method and system for indicating processing positions of a workpiece according to the present invention.

FIGS. 10 to 14 are schematic views of an embodiment of the method and system for indicating processing positions of a workpiece according to the present invention.

FIGS. 15 and 16 are schematic views of another embodiment of the method and system for indicating processing positions of a workpiece according to the present invention.

a, b, c, d: steps

Claims (16)

1. An indicating method for processing positions of a workpiece, comprising: A. querying a workpiece database to obtain processing data of a workpiece, the processing data including processing steps of the workpiece, the sequence of the processing steps, and processing coordinates; B. using an inference unit to determine to which processing step's processing coordinates in the processing data the coordinates of the workpiece in an image belong, and using an instruction trigger unit to calculate in real time the coordinates of the next processing point to generate an indicating coordinate; C. before the next step begins, using the instruction trigger unit to transmit the indicating coordinate to an instruction device, the instruction device providing, according to the indicating coordinate, an instruction signal indicating the next processing position of the workpiece; wherein a step X is further included before step A, step X comprising: X1: finding the edges of the objects in the image by means of the Marr-Hildreth edge-detection derivative algorithm; X2: calculating an average threshold value from the edges of each object, for respectively enhancing the edges of the objects; X3: multiplying the average threshold value by a ratio to determine a final threshold value for each object corresponding to the workpiece, the final threshold value being included in the processing data.

2. An indicating method for processing positions of a workpiece, comprising: A. querying a workpiece database to obtain processing data of a workpiece, the processing data including processing steps of the workpiece, the sequence of the processing steps, and processing coordinates; B. using an inference unit to determine to which processing step's processing coordinates in the processing data the coordinates of the workpiece in an image belong, and using an instruction trigger unit to calculate in real time the coordinates of the next processing point to generate an indicating coordinate; C. before the next step begins, using the instruction trigger unit to transmit the indicating coordinate to an instruction device, the instruction device providing, according to the indicating coordinate, an instruction signal indicating the next processing position of the workpiece; wherein a step X' is further included before step A, step X' comprising: X'1: using a CNN algorithm to train the average threshold values of a same object of different sizes or under different light sources in the images, so as to find trained average threshold values for the same object of different sizes or under different light sources; X'2: finding the edges of the objects by means of the Marr-Hildreth edge-detection derivative algorithm, and calculating an average threshold value from the edges of each object; X'3: screening the average threshold value against the trained average threshold values to obtain a screened threshold value; X'4: multiplying the screened threshold value by a ratio to determine a final threshold value for each object corresponding to the workpiece, the final threshold value being included in the processing data.

3. An indicating method for processing positions of a workpiece, comprising: a. querying a workpiece database to obtain processing data of a workpiece, the processing data including processing steps of the workpiece, the sequence of the processing steps, human-factors motion images corresponding to the processing steps, and processing coordinates; b. using an inference unit to determine to which processing step's human-factors motion image in the processing data the actions of an operator in an image belong, and using an instruction trigger unit to calculate the coordinates of the next processing point to generate an indicating coordinate; c. before the operator's next action begins, using the instruction trigger unit to transmit the indicating coordinate to an instruction device, the instruction device providing, according to the indicating coordinate, an instruction signal indicating the next processing position of the workpiece; wherein a step X is further included before step a, step X comprising: X1: finding the edges of the objects in the image by means of the Marr-Hildreth edge-detection derivative algorithm; X2: calculating an average threshold value from the edges of each object, for respectively enhancing the edges of the objects; X3: multiplying the average threshold value by a ratio to determine a final threshold value for each object corresponding to the workpiece, the final threshold value being included in the processing data.

4. An indicating method for processing positions of a workpiece, comprising: a. querying a workpiece database to obtain processing data of a workpiece, the processing data including processing steps of the workpiece, the sequence of the processing steps, human-factors motion images corresponding to the processing steps, and processing coordinates; b. using an inference unit to determine to which processing step's human-factors motion image in the processing data the actions of an operator in an image belong, and using an instruction trigger unit to calculate the coordinates of the next processing point to generate an indicating coordinate; c. before the operator's next action begins, using the instruction trigger unit to transmit the indicating coordinate to an instruction device, the instruction device providing, according to the indicating coordinate, an instruction signal indicating the next processing position of the workpiece; wherein a step X' is further included before step a, step X' comprising: X'1: using a CNN algorithm to train the average threshold values of a same object of different sizes or under different light sources in the images, so as to find trained average threshold values for the same object of different sizes or under different light sources; X'2: finding the edges of the objects by means of the Marr-Hildreth edge-detection derivative algorithm, and calculating an average threshold value from the edges of each object; X'3: screening the average threshold value against the trained average threshold values to obtain a screened threshold value; X'4: multiplying the screened threshold value by a ratio to determine a final threshold value for each object corresponding to the workpiece, the final threshold value being included in the processing data.

5. The indicating method for processing positions of a workpiece according to claim 1 or 2, wherein step C further includes: before the next step begins, using the instruction trigger unit to transmit the indicating coordinate to an instruction device, the instruction device providing, according to the indicating coordinate, an instruction signal indicating the next processing position of the workpiece and, after the processing is completed, providing a verification instruction signal according to the processing state of said next processing position.

6. The indicating method for processing positions of a workpiece according to claim 5, wherein step B further includes: using an inference unit with a CNN algorithm to respectively determine to which processing step in the processing data the objects in the image belong, and calculating the coordinates of the next processing point according to the proportions of the sizes of the objects in the image relative to the image, to generate the indicating coordinate.

7. The indicating method for processing positions of a workpiece according to claim 1 or 2, wherein step B further includes: using an inference unit with a CNN algorithm to respectively determine to which processing step in the processing data the objects in the image belong, and calculating the coordinates of the next processing point according to the proportions of the sizes of the objects in the image relative to the image, to generate the indicating coordinate.

8. The indicating method for processing positions of a workpiece according to claim 1 or 2, further comprising, after step C, a step D: repeating step B and step C until all processing steps are completed.

9. The indicating method for processing positions of a workpiece according to claim 1, 2, 3 or 4, wherein the instruction signal lasts until the step at the next processing position ends.

10. The indicating method for processing positions of a workpiece according to claim 1 or 2, wherein in step A the processing data of the workpiece is obtained by means of a workpiece number.

11. An indicating system for processing positions of a workpiece, comprising: a workpiece database storing a plurality of processing data records, the processing data including processing steps of the workpiece, the sequence of the processing steps, human-factors motion images corresponding to the processing steps, and processing coordinates; an inference unit for determining to which processing step's coordinates in the processing data the coordinates of the workpiece in an image belong, or for determining to which processing step's human-factors motion in the processing data the actions of an operator in the image belong; an instruction trigger unit for calculating, according to the recognized coordinates, the coordinates of the next processing point to generate an indicating coordinate, and, before the next step begins, transmitting the indicating coordinate to an instruction device, the instruction device providing, according to the indicating coordinate, an instruction signal indicating the next processing position of the workpiece; and a training unit for using a CNN algorithm to build, from a plurality of historical images, a first standard group for determining to which processing step's coordinates in the processing data the coordinates of the workpiece belong, and for using the CNN algorithm to build, from the plurality of historical images, a second standard group for determining to which processing step's human-factors motion in the processing data the actions of the operator in the image belong.

12. An indicating system for processing positions of a workpiece, comprising: a workpiece database storing a plurality of processing data records, the processing data including processing steps of the workpiece, the sequence of the processing steps, human-factors motion images corresponding to the processing steps, and processing coordinates; an inference unit for determining to which processing step's coordinates in the processing data the coordinates of the workpiece in an image belong, or for determining to which processing step's human-factors motion in the processing data the actions of an operator in the image belong; an instruction trigger unit for calculating, according to the recognized coordinates, the coordinates of the next processing point to generate an indicating coordinate, and, before the next step begins, transmitting the indicating coordinate to an instruction device, the instruction device providing, according to the indicating coordinate, an instruction signal indicating the next processing position of the workpiece; and an object edge enhancement unit for finding the edges of the objects in the image by means of the Marr-Hildreth edge-detection derivative algorithm, calculating an average threshold value from the edges of each object for respectively enhancing the edges of the objects, and multiplying the average threshold value by a ratio to determine a final threshold value for each object corresponding to the workpiece, the final threshold value being included in the processing data.

13. The indicating system for processing positions of a workpiece according to claim 11 or 12, further comprising a human-factors motion database for providing a plurality of historical human-factors motion images, for the inference unit to determine, with a CNN algorithm, to which processing step's human-factors motion in the processing data the actions of the operator in the image belong.

14. The indicating system for processing positions of a workpiece according to claim 12, wherein the object edge enhancement unit is further for: using a CNN algorithm to train the average threshold values of a same object of different sizes or under different light sources in the images, so as to find trained average threshold values for the same object of different sizes or under different light sources; finding the edges of the objects by means of the Marr-Hildreth edge-detection derivative algorithm and calculating an average threshold value from the edges of each object; screening the average threshold value against the trained average threshold values to obtain a screened threshold value; and multiplying the screened threshold value by a ratio to determine a final threshold value for each object corresponding to the workpiece, the final threshold value being included in the processing data.

15. The indicating system for processing positions of a workpiece according to claim 11 or 12, wherein the inference unit is further for, before the next step begins, having the instruction trigger unit transmit the indicating coordinate to an instruction device, the instruction device providing, according to the indicating coordinate, an instruction signal indicating the next processing position of the workpiece and, after the processing is completed, providing a verification instruction signal according to the processing state of said next processing position.

16. The indicating system for processing positions of a workpiece according to claim 11 or 12, wherein the instruction device is an infrared indicator light or a projection device.
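The method claims above describe an A-B-C flow: query the workpiece database for the processing data, let the inference unit recognize the current step from an image, have the instruction trigger unit compute the coordinates of the next processing point, and send that indicating coordinate to the instruction device before the next step begins. The Python sketch below only illustrates this control flow; the database layout, the infer_current_step placeholder and the InstructionDevice class are hypothetical stand-ins, not the implementation disclosed in the patent.

    # Minimal sketch (Python) of the claimed A/B/C flow. Data layout and helper
    # names are assumptions for illustration, not the patented implementation.
    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class ProcessingStep:
        name: str
        coordinate: Tuple[float, float]  # processing coordinates for this step

    # Step A: processing data obtained by querying a workpiece database (a dict here).
    WORKPIECE_DB = {
        "WP-001": [
            ProcessingStep("drill_hole_1", (12.0, 34.0)),
            ProcessingStep("drill_hole_2", (56.0, 34.0)),
            ProcessingStep("inspect_edge", (56.0, 78.0)),
        ]
    }

    def infer_current_step(image, steps: List[ProcessingStep]) -> Optional[int]:
        """Step B placeholder: decide which processing step the image shows.
        The patent assigns this to an inference unit (e.g. CNN-based); here we
        simply pretend the first step was recognized."""
        return 0

    class InstructionDevice:
        """Stand-in for the instruction device (e.g. an infrared pointer or projector)."""
        def indicate(self, coordinate: Tuple[float, float]) -> None:
            print(f"indicating next processing position at {coordinate}")

    def indicate_next_position(workpiece_id: str, image, device: InstructionDevice) -> None:
        steps = WORKPIECE_DB[workpiece_id]            # Step A: query the processing data
        current = infer_current_step(image, steps)    # Step B: recognize the current step
        if current is None or current + 1 >= len(steps):
            return                                    # no further processing position to indicate
        indicating_coordinate = steps[current + 1].coordinate
        device.indicate(indicating_coordinate)        # Step C: signal before the next step begins

    indicate_next_position("WP-001", image=None, device=InstructionDevice())

A real system would replace infer_current_step with the CNN-based inference unit of claims 6, 7 and 11, and InstructionDevice.indicate would drive an actual infrared pointer or projector as in claim 16.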
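Step X of claims 1 and 3 (X1: Marr-Hildreth edge detection, X2: a per-object average threshold, X3: scaling by a ratio to obtain the final threshold stored with the processing data) can be sketched in the same spirit. The sigma and ratio values below, and the choice of averaging gradient magnitudes along the detected edges, are assumptions made for illustration only; the patent does not disclose these particulars.

    # Rough sketch (Python) of step X: Marr-Hildreth (LoG) edges (X1), per-object
    # average threshold (X2), final threshold = average threshold * ratio (X3).
    # sigma and ratio are assumed values for illustration, not taken from the patent.
    import numpy as np
    from scipy import ndimage

    def marr_hildreth_edges(gray: np.ndarray, sigma: float = 2.0) -> np.ndarray:
        """Boolean edge map from zero crossings of the Laplacian of Gaussian (X1)."""
        log = ndimage.gaussian_laplace(gray.astype(float), sigma=sigma)
        edges = np.zeros(log.shape, dtype=bool)
        # mark a pixel as an edge when the LoG response changes sign toward a neighbour
        edges[:, :-1] |= np.sign(log[:, :-1]) != np.sign(log[:, 1:])
        edges[:-1, :] |= np.sign(log[:-1, :]) != np.sign(log[1:, :])
        return edges

    def final_threshold(gray: np.ndarray, object_mask: np.ndarray, ratio: float = 0.8) -> float:
        """Average gradient magnitude along one object's edges (X2), scaled by a ratio (X3)."""
        gy, gx = np.gradient(gray.astype(float))
        magnitude = np.hypot(gx, gy)
        object_edges = marr_hildreth_edges(gray) & object_mask
        if not object_edges.any():
            return 0.0
        average_threshold = float(magnitude[object_edges].mean())
        return average_threshold * ratio  # final threshold value stored with the processing data

    # tiny smoke test with a synthetic image containing one bright square "object"
    img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0
    mask = np.zeros((64, 64), dtype=bool); mask[15:45, 15:45] = True
    print(final_threshold(img, mask))

Step X' of claims 2 and 4 would additionally screen this edge-derived value against a CNN-trained threshold before the ratio is applied.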
TW110146986A 2021-12-15 2021-12-15 An indicating method and system for processing positions of a workpiece TWI813096B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW110146986A TWI813096B (en) 2021-12-15 2021-12-15 An indicating method and system for processing positions of a workpiece

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW110146986A TWI813096B (en) 2021-12-15 2021-12-15 An indicating method and system for processing positions of a workpiece

Publications (2)

Publication Number Publication Date
TW202325467A TW202325467A (en) 2023-07-01
TWI813096B true TWI813096B (en) 2023-08-21

Family

ID=88147834

Family Applications (1)

Application Number Title Priority Date Filing Date
TW110146986A TWI813096B (en) 2021-12-15 2021-12-15 An indicating method and system for processing positions of a workpiece

Country Status (1)

Country Link
TW (1) TWI813096B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200279388A1 (en) * 2019-02-28 2020-09-03 Canon Kabushiki Kaisha Information processing apparatus, information processing method and storage medium
TW202134808A (en) * 2019-11-06 2021-09-16 美商奈米創尼克影像公司 Systems, methods, and media for manufacturing processes
CN113560942A (en) * 2021-07-30 2021-10-29 新代科技(苏州)有限公司 Workpiece pick-and-place control device of machine tool and control method thereof

Also Published As

Publication number Publication date
TW202325467A (en) 2023-07-01

Similar Documents

Publication Publication Date Title
CN110659660B (en) Automatic optical detection classification equipment using deep learning system and training equipment thereof
CN111289538B (en) PCB element detection system and detection method based on machine vision
JP6922168B2 (en) Surface mount line quality control system and its control method
CN106097361B (en) Defect area detection method and device
WO2015120734A1 (en) Special testing device and method for correcting welding track based on machine vision
CN103776841B (en) Synthetic leather automatic defect detecting device and detection method
KR20180106856A (en) Automatic optical inspection system and operating method thereof
CN102938077A (en) Online AOI (Automatic Optical Inspection) image retrieval method based on double-threshold binaryzation
CN105139384B (en) The method and apparatus of defect capsule detection
CN115131268A (en) Automatic welding system based on image feature extraction and three-dimensional model matching
CN105809674A (en) Machine vision based die protection apparatus and its functioning method
KR20210038211A (en) Method of inspection using image masking operation
CN115311618A (en) Assembly quality inspection method based on deep learning and object matching
CN111626995B (en) Intelligent insert detection method and device for workpiece
CN110954555A (en) WDT 3D vision detection system
CN113822810A (en) Method for positioning workpiece in three-dimensional space based on machine vision
CN115524347A (en) Defect detection method, defect detection apparatus, and computer-readable storage medium
CN110263608B (en) Automatic electronic component identification method based on image feature space variable threshold measurement
TWI813096B (en) An indicating method and system for processing positions of a workpiece
CN113109364B (en) Method and device for detecting chip defects
Huang et al. Deep learning object detection applied to defect recognition of memory modules
CN104034259A (en) Method for correcting image measurement instrument
CN116563391B (en) Automatic laser structure calibration method based on machine vision
CN211604116U (en) Projection type augmented reality fastener assembly guiding and detecting device
Wang et al. Assembly defect detection of atomizers based on machine vision