CN112926760B - Device and method for establishing prediction model and product quality monitoring system - Google Patents


Info

Publication number
CN112926760B
CN112926760B (application CN202010058246.XA)
Authority
CN
China
Prior art keywords
classifier
candidate
strong
product
classifiers
Prior art date
Legal status
Active
Application number
CN202010058246.XA
Other languages
Chinese (zh)
Other versions
CN112926760A (en)
Inventor
谢得威
王孝裕
陈承辉
Current Assignee
Institute for Information Industry
Original Assignee
Institute for Information Industry
Priority date
Filing date
Publication date
Application filed by Institute for Information Industry filed Critical Institute for Information Industry
Publication of CN112926760A publication Critical patent/CN112926760A/en
Application granted granted Critical
Publication of CN112926760B publication Critical patent/CN112926760B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06395Quality analysis or management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/04Manufacturing
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Marketing (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Development Economics (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Educational Administration (AREA)
  • Game Theory and Decision Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Manufacturing & Machinery (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • General Factory Administration (AREA)

Abstract

The invention relates to a device and a method for establishing a prediction model, and to a product quality monitoring system. The establishing device analyzes a quality detection data set generated by inspecting a plurality of products, together with a processing feature data set relating to the production of those products. The establishing device comprises a strong classifier generating module, which in turn comprises a plurality of generators and a primary selection module. The generators produce a plurality of candidate strong classifier groups according to different classifier strategies, the processing feature data set, and the quality detection data set. The primary selection module judges, according to the quality detection data set, whether the candidate strong classifier groups satisfy a primary selection condition.

Description

Device and method for establishing prediction model and product quality monitoring system
[ technical field ]
The present invention relates to an apparatus and a method for building a prediction model, and to a product quality monitoring system; more particularly, to an apparatus and a method for building a prediction model that can predict product quality without actually performing quality inspection, and to a product quality monitoring system using the same.
[ background of the invention ]
The manufacturing industry is an indispensable part of an industrialized society. Although different types of manufacturing produce different products, all of them essentially process manufacturing materials into products. Naturally, manufacturers expect product quality to be acceptable. However, many factors in the production flow can make product quality unstable. Because any product may turn out defective, the manufacturer must consider questions such as: how to determine whether a product is defective, whether every product requires quality inspection, and where in the production process a problem arose that caused the defect. Quality inspection is therefore essential in manufacturing to ensure that product quality meets specifications.
Please refer to fig. 1, which is a schematic diagram of product inspection performed by a quality detection device according to the prior art. After the production material 11 is processed by the production equipment 13, the product 19 results. The product 19 undergoes quality detection by the quality detection device 17, which generates the quality detection data 15. The quality detection data 15 indicates whether the product 19 is of acceptable quality or defective.
However, the quality detection device 17 is not only expensive but also unable to help the manufacturer identify where in the production process the abnormality that caused the quality problem of the product 19 occurred. In other words, the production process involves many steps, and determining which of them is the actual reason a product 19 is defective is not an easy problem for the manufacturer.
[ summary of the invention ]
The invention relates to a device and a method for establishing a prediction model, and to a product quality monitoring system. With the established prediction model, the manufacturer of a product needs only the quality prediction result output by the model, together with information for tracing the source of defects. Using the prediction model lets the manufacturer quickly grasp product quality, saving the considerable time and cost required for quality detection while also providing analysis information about abnormalities in the production flow.
According to a first aspect of the present invention, an apparatus for building a prediction model is provided. The device for creating the prediction model analyzes the quality detection data set generated by detecting a plurality of products and the processing characteristic data set related to the production of the plurality of products. The establishing device comprises: and a strong classifier generating module. The strong classifier generating module comprises: the device comprises a first generator, a second generator, a third generator and a primary selection module. A first generator generates a first candidate strong classifier group including K first candidate strong classifiers according to a first classifier strategy, the processing feature data set and a quality detection data set, wherein K is a positive integer. The second generator generates a second candidate strong classifier group including K second candidate strong classifiers according to a second classifier strategy, the processed feature data set and the quality detection data set. The third generator generates a third candidate strong classifier group including K third candidate strong classifiers according to a third classifier strategy, the processed feature data set and the quality detection data set. The primary selection module is electrically connected to the first generator, the second generator and the third generator. The initial selection module judges whether the first candidate strong classifier group, the second candidate strong classifier group and the third candidate strong classifier group meet the initial selection condition according to the quality detection data set.
According to a second aspect of the present invention, a method for building a prediction model is provided. The method for establishing the prediction model analyzes the quality detection data set generated by detecting a plurality of products and the processing characteristic data set related to the production of the products. The establishing method comprises the following steps. First, a first candidate strong classifier group including K first candidate strong classifiers is generated according to a first classifier strategy, a processing feature data set and a quality detection data set, wherein K is a positive integer. Next, a second candidate strong classifier group including K second candidate strong classifiers is generated according to the second classifier strategy, the processing feature data set and the quality detection data set. Then, a third candidate strong classifier group including K third candidate strong classifiers is generated according to the third classifier strategy, the processing feature data set and the quality detection data set. In addition, whether the first candidate strong classifier group, the second candidate strong classifier group and the third candidate strong classifier group meet the initial selection condition is judged according to the quality detection data set.
According to a third aspect of the present invention, a product quality monitoring system is provided. The product quality monitoring system comprises: a quality detection device, a data preprocessing device and a model establishing device. The quality detection device detects a plurality of products and generates a quality detection data set. The data preprocessing device receives a plurality of production parameters related to the production of the plurality of products and generates a processing characteristic data set according to the production parameters. The model building device includes: and a strong classifier generating module. The strong classifier generating module comprises: the device comprises a first generator, a second generator, a third generator and a primary selection module. The first generator generates a first candidate strong classifier group including K first candidate strong classifiers according to a first classifier strategy, a processing feature data set and a quality detection data set, wherein K is a positive integer. The second generator generates a second candidate strong classifier group including K second candidate strong classifiers according to a second classifier strategy, the processed feature data set and the quality detection data set. The third generator generates a third candidate strong classifier group including K third candidate strong classifiers according to a third classifier strategy, the processed feature data set and the quality detection data set. A primary selection module is electrically connected to the first generator, the second generator and the third generator. The initial selection module judges whether the first candidate strong classifier group, the second candidate strong classifier group and the third candidate strong classifier group meet the initial selection condition according to the quality detection data set.
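The mechanism shared by the three aspects above can be pictured with a short sketch: each of three generators applies a different classifier strategy to produce a group of K candidate strong classifiers, and the primary selection module keeps only the groups whose accuracy on the quality detection data set meets a primary selection condition. The toy threshold classifiers, the dummy quality detection data, and the 0.8 accuracy threshold below are assumptions for illustration only, not the claimed implementation:

```python
K = 3

def make_classifier(threshold):
    """Toy 'strong classifier': predicts defective (1) above the threshold."""
    return lambda feature: 1 if feature > threshold else 0

def generate_group(thresholds):
    """Stand-in for one generator / classifier strategy producing K candidates."""
    return [make_classifier(t) for t in thresholds[:K]]

def accuracy(clf, features, labels):
    return sum(clf(f) == y for f, y in zip(features, labels)) / len(labels)

# Quality detection data set: one feature value and pass(0)/fail(1) label
# per product (dummy values).
features = [0.2, 0.4, 0.6, 0.8, 1.0]
labels   = [0,   0,   1,   1,   1]

groups = {
    "first":  generate_group([0.50, 0.45, 0.55]),  # e.g. a random-forest-like strategy
    "second": generate_group([0.10, 0.10, 0.10]),  # e.g. a boosting-like strategy
    "third":  generate_group([0.90, 0.90, 0.90]),
}

# Primary selection condition (assumed): mean group accuracy >= 0.8.
selected = {name: grp for name, grp in groups.items()
            if sum(accuracy(c, features, labels) for c in grp) / K >= 0.8}
print(sorted(selected))  # only the "first" group passes
```

On this toy data, only the first candidate strong classifier group satisfies the condition; in the patent the real generators and condition are as described in the aspects above.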
In order to better understand the above and other aspects of the present invention, the following detailed description of the embodiments is made with reference to the accompanying drawings:
[ description of the drawings ]
FIG. 1 is a schematic diagram illustrating a quality inspection of a product by a quality inspection apparatus according to a conventional technique.
Fig. 2 is a schematic diagram of an embodiment of a product quality monitoring system according to the present invention in a process of manufacturing a front fork shoulder cap of a bicycle.
Fig. 3 is a block diagram of the data preprocessing apparatus.
FIG. 4 is a schematic illustration of the preprocessing modules generating the processing features input to the prediction model.
Fig. 5 is a schematic diagram illustrating a processed feature data set PMFGall corresponding to a product.
Fig. 6A is a schematic diagram of the data preprocessing unit providing the processing feature data set bmmfgall of the product bMP for modeling purposes when the product quality monitoring system is in the model building mode bM.
Fig. 6B is a schematic diagram of the data preprocessing device providing the processing feature data set of the product uMP for model use when the product quality monitoring system is in the model use mode uM.
Fig. 6C is a schematic diagram of the data preprocessing device providing the processing feature data set of the product eMP for model evaluation when the product quality monitoring system is in the model evaluation mode eM.
FIG. 7 is a schematic diagram of a prediction model using a binary tree structure to classify process features and quality inspection data.
FIG. 8A is a schematic diagram of using a random forest as a classifier strategy.
FIG. 8B is a schematic diagram of using adaptive boosting as a classifier strategy.
FIG. 9 is a schematic diagram of a process feature used to build a predictive model when the product quality monitoring system is in a model build mode (bM).
Fig. 10 is a block diagram of a model building apparatus.
Fig. 11 is a flowchart of the strong classifier generator generating the candidate strong classifier group canSClfGp.
Fig. 12 is a schematic diagram illustrating how the primary selection module determines to select the classifier strategy as the primary selection strategy, taking the classification result generated by the candidate strong classifier group canSClfGp _ a as an example.
Fig. 13 is a flowchart illustrating the primary selection module determining whether the classifier strategy can be used as the primary selection strategy according to the quality prediction result generated by the candidate strong classifier group canSClfGp.
Fig. 14 is a schematic diagram showing that after the strong classifier generator establishes the initial strong classifier preSClf, the verification sequence calculation module matches the initial strong classifier preSClf to generate the verification sequence, and then the correlation calculation module calculates the correlation coefficient of the verification sequence.
FIG. 15 is a flow chart of the strong classifier generation module.
FIG. 16 is a diagram illustrating the prediction of the quality of a product uMP based on a predictive model and processing characteristics of the product when the product quality monitoring system is in a model use mode (uM).
FIG. 17 is a block diagram of a model using apparatus.
FIG. 18 is a diagram illustrating a product quality monitoring system in a model evaluation mode (eM) for checking whether a prediction model needs to be updated.
Fig. 19 is a flowchart of the product quality monitoring system in model evaluation mode (eM).
[ notation ]
Production material 11, 21, 31 production equipment 13, 23
Products 19, 29, P1, P100, bMP, uMP, eMP
Quality detection device 17, 248 quality detection data 15
Blank heating furnace 331 semi-finished products 31a, 31b
Warm forging stamping machine 333 standing cooling zone 335
Sensor 231, 232, 241 plant 20
Data preprocessing device 243 of product quality monitoring system 24
Model building device 245 model using device 247
Model evaluation device 249 production parameter PP
Receiving module 2430 preprocessing module 2431
Key data selection module 2435 database 2433
Feature conversion module 2437 production condition selection module 2439
Key production parameters corePP1, corePP2, corePP3, corePP4, corePP5, corePP6, corePP7, corePP8, corePP9, corePP10
Key production factors corePF1, corePF2 and corePF3
Processing features PMF1, PMF2, PMF3, PMF4, PMF5, P1MF1, P1MF2, P1MF3, P1MF4, P1MF5, P100MF1, P100MF2, P100MF3, P100MF4, P100MF5
Machining feature data P1MFG and P100MFG
Processing feature data set PMFGall model building mode bM
Checking strong classifiers reSClf1 and reSClf2
Model usage mode uM model evaluation mode eM
Node path C1
Nodes 1, 2-2, 3-1, 3-2, 3-3, 3-4
Processing characteristics of product bMP 411a, 411b, 411c, 431a, 431b, 431c N1, N2, N3 classification conditions 413a, 413b, 413c
Weak classifiers 415a, 415b, 415c, 433a, 433b, 433c
Adjusting weights 435a, 435b
Original production parameters bMP _ origPP, uMP _ origPP, eMP _ origPP
Strong classifier generating module 245a
Strong classifier generators 2451a, 2451b, 2451c, 2451d, 2451e
Primary selection module 2453 accuracy calculation module 2453a
Strategy selection module 2453b checking module 2455
Verification sequence calculation Module 2455a correlation calculation Module 2455c
Strong classifier selection module 2455e strong classifier parsing module 245b
Rule comparison Module 2457 support calculation Module 2457a
Coverage calculation module 2457c weight calculation module 2457b
Node analysis module 2458 rule item set conversion module 2458a
Rule reading Module 2458b
Steps S41, S43, S45, S43a, S43c, S43e, S43g, S43h, S45, S47, S501, S503, S506, S505, S507, S509, S511, S513, S5031, S5033, S5035, S5037, S31, S33, S35, S37, S331, S333, S335, S337, S801, S802, S803, S804, S805, S811, S813, S815, S817
Primary selection test data preTtDAT _1 and preTtDAT _10
Candidate strong classifier group canSClfGp _ A
Candidate strong classifiers canSClf _ A1, canSClf _ A10
Individual accuracy rates pdtA1, pdtA10
Checking training data retrydat checking test data rettstdat
Quality detection data reTst _ QC of checking verification product
Checking and verifying the quality prediction results QpdtA, QpdtB and QpdtD of the products
Primary strong classifier preSClf _ A, preSClf _ B, preSClf _ D
Verification of the sequences seqA, seqB, seqD
Product characteristic receiving module 2471 classification rule receiving module 2473
Product feature and classification rule comparison module 2475
Similarity calculation module 2477 quality prediction module 2479
Defect source tracking module 2470
[ detailed description of the embodiments ]
As described above, the manufacturer of a product needs to know product quality, but performing quality inspection with a quality detection device is too costly, and the source of defects in the production flow cannot be effectively analyzed. Therefore, the present disclosure provides a product quality monitoring system that works together with production equipment. To illustrate its usage, this document takes the production process of a bicycle front fork shoulder cap as an example of how the product quality monitoring system of the present application may be applied.
Please refer to fig. 2, which is a schematic diagram of an embodiment of a product quality monitoring system according to the present invention configured in a production process of a front fork shoulder cap of a bicycle. In fig. 2, it is assumed that the product quality monitoring system 24 and the manufacturing equipment 23 for manufacturing the bicycle front fork shoulder cap are both disposed in the factory building 20. In practical applications, the product quality monitoring system 24 may also be disposed outside the plant 20 and connected to the sensors 231 and 232 and the production equipment 23 via a network.
In short, the production process of the bicycle front fork shoulder cap is roughly divided into three steps. First, the blank heating furnace 331 heats the production material (an aluminum block) 31 to produce a semi-finished product 31a. Next, the warm forging press 333 presses the heated semi-finished product 31a into shape, producing a semi-finished product 31b in the shape of a front fork shoulder cap. Finally, the semi-finished product 31b is placed in the standing cooling zone 335 for a certain period, after which production of the product 29 is complete. A quality defect of the bicycle front fork shoulder cap may take the form of dimensional deformation, damage, or cracking of the product.
During the process in which the production equipment 23 converts the production material 31 into the product 29, some machine setting parameters may need to be set on the production equipment 23 itself. In addition, a plurality of sensors 232 may be installed inside or around the production equipment 23. The sensors 232 may sense the machine state of the production equipment 23 (e.g., its temperature or pressure), or sense characteristics such as the temperature of the workpiece (the production material 31 or the semi-finished products 31a, 31b). Furthermore, sensors 231 such as a thermometer or a hygrometer may be provided in the plant 20. For convenience of description, the sensed and set parameters from these different sources are collectively referred to as the production parameters PP.
In fig. 2, it is assumed that the blank heating furnace 331 heats the production material 31 in four stages: a first stage at 470°C, a second stage at 480°C, a third stage at 490°C, and a fourth stage at 500°C. After the semi-finished product 31a is taken out of the blank heating furnace 331, its feeding temperature (for example, between 440°C and 460°C) may be sensed by an infrared sensor. Furthermore, a temperature sensor and a pressure sensor may be disposed in the warm forging press 333. Typically, the temperature of the upper die of the warm forging press 333 is between 120°C and 150°C; the temperature of the lower die is between 150°C and 190°C; and the maximum forging pressure is 6 tons. Accordingly, the production parameters PP can incorporate the sensing results of the sensors 232 associated with the production equipment 23.
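The operating windows above lend themselves to a simple range check over the sensed production parameters PP. The sketch below is illustrative only; the field names and the idea of flagging out-of-window readings are assumptions, not part of the patent:

```python
# Stated operating windows from the example: feed temperature 440-460 °C,
# upper die 120-150 °C, lower die 150-190 °C, forging pressure at most
# 6 tons. Field names are hypothetical.
SPEC = {
    "feed_temp_c":      (440, 460),
    "upper_die_temp_c": (120, 150),
    "lower_die_temp_c": (150, 190),
    "forge_pressure_t": (0, 6),
}

def out_of_spec(pp: dict) -> list:
    """Return the names of sensed parameters outside their stated window."""
    return [k for k, (lo, hi) in SPEC.items()
            if k in pp and not (lo <= pp[k] <= hi)]

# One sensed production parameter record (dummy values).
pp = {"feed_temp_c": 452, "upper_die_temp_c": 131,
      "lower_die_temp_c": 188, "forge_pressure_t": 6.4}
print(out_of_spec(pp))  # the forging pressure exceeds 6 tons
```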
The product quality monitoring system 24 includes a data preprocessing device 243, a model building device 245, a model using device 247, a model evaluating device 249, and a quality detecting device 248. Wherein, the data preprocessing device 243 is electrically connected to the sensors 231, 232, the production facility 23, the model establishing device 245 and the model using device 247; the model using device 247 is electrically connected to the model establishing device 245 and the model evaluating device 249; the quality detection device 248 is electrically connected to the model creation device 245 and the model evaluation device 249.
At the heart of the product quality monitoring system 24 is a prediction model, built to provide the product quality information and analysis the manufacturer requires. Depending on whether the prediction model is being built, used, or maintained, the product quality monitoring system 24 may in turn be in a model building mode bM, a model using mode uM, or a model evaluation mode eM.
For purposes of illustration, the product produced by the production equipment 23 when the product quality monitoring system 24 is in the model build mode bM is shown herein as symbol bMP; with the product quality monitoring system 24 in the model use mode uM, the product produced by the production equipment 23 is represented by uMP; and, when the product quality monitoring system 24 is in the model evaluation mode eM, the product produced by the production facility 23 is represented by eMP.
In practical applications, the number of products bMP, uMP, and eMP produced by the production equipment 23 in each mode of the product quality monitoring system 24 is not limited and may be chosen by the manufacturer according to the production process or product characteristics; alternatively, the products bMP, uMP, and eMP may be chosen through sampling. The details of how the products bMP, uMP, and eMP are selected, and how the product quality monitoring system 24 switches operation modes in actual applications, may be adjusted according to the manufacturer's needs and are not described further here.
Since the product quality monitoring system 24 uses the data preprocessing device 243 in every mode, how the data preprocessing device 243 processes the production parameters PP is described first, with reference to figs. 3, 4 and 5. Figs. 6A, 6B and 6C then describe how the prediction model is built or used in the model building mode (bM), the model using mode (uM), and the model evaluation mode (eM); figs. 7, 8A and 8B illustrate the basic architecture of the prediction model; the model building mode (bM) is illustrated with figs. 9-15; the model using mode (uM) with figs. 16 and 17; and the model evaluation mode (eM) with figs. 18 and 19.
Please refer to fig. 3, which is a block diagram of a data preprocessing apparatus. The data preprocessing device 243 includes a receiving module 2430, a preprocessing module 2431, a key data selecting module 2435, a database 2433, a feature transforming module 2437, and a production condition selecting module 2439. The preprocessing module 2431 is electrically connected to the receiving module 2430 and the database 2433; the key data selection module 2435 is electrically connected to the database 2433 and the feature conversion module 2437; and the production condition selection module 2439 is electrically connected to the feature conversion module 2437. In addition, the receiving module 2430 can receive the production parameters PP from the sensors 231 and 232 by means of electrical connection or signal connection; and the production condition selecting module 2439 may transmit the processing feature data sets of the bMP, uMP, or eMP products to the model using device 247 through electrical connection or signal connection.
First, to reduce the amount of data, the production parameters PP sensed by the sensors 231, 232 are sampled; for example, only one production parameter per hour is taken as the sampled production parameter smpPP. In addition, the number of sensors 231, 232 used in the manufacturing process may be quite large. The manufacturer can therefore optionally provide a selection strategy based on its grasp and understanding of the production line. According to the selection strategy, the key data selecting module 2435 selects which of the production parameters PP sensed by the sensors 231, 232 are relatively important, and defines the selected production parameters as the key production parameters corePP. Alternatively, the key data selecting module 2435 may be omitted, and the entire sampled production parameter smpPP treated directly as the key production parameters corePP. In FIG. 4, it is assumed that the key data selecting module 2435 selects ten key production parameters corePP1-corePP10.
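The hourly sampling step can be sketched as follows. The timestamps and values are made up; only the rule of keeping one reading per hour as the sampled production parameter smpPP comes from the text:

```python
from datetime import datetime, timedelta

# Three hours of dummy sensor readings at 10-minute intervals.
readings = [(datetime(2020, 1, 1, 0, 0) + timedelta(minutes=10 * i),
             470 + i * 0.5) for i in range(18)]

def sample_hourly(readings):
    """Keep the first reading of each hour as the sampled parameter smpPP."""
    seen_hours, smpPP = set(), []
    for ts, value in readings:
        hour = ts.replace(minute=0, second=0, microsecond=0)
        if hour not in seen_hours:
            seen_hours.add(hour)
            smpPP.append((ts, value))
    return smpPP

smpPP = sample_hourly(readings)
print(len(smpPP))  # 18 raw readings reduced to one per hour -> 3
```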
Some of the key production parameters corePP1-corePP10 may be interrelated; for example, they may be temperatures sensed at different locations within the same machine. The manufacturer can therefore provide a feature transformation formula, based on its understanding of the production process, that jointly integrates a plurality of interrelated key production parameters corePP into one key production factor corePF by property induction or similar means. For example, assume the feature transformation module 2437 integrates the key production parameters corePP3, corePP4 and corePP5 into the key production factor corePF1; integrates corePP6 and corePP7 into corePF2; and integrates corePP9 and corePP10 into corePF3. In practical applications, this step of generating the key production factors corePF by the feature transformation module 2437 can also be omitted.
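The feature transformation step might look like the sketch below. The patent leaves the transformation formula to the manufacturer; simple averaging and the dummy parameter values here are assumptions used purely for illustration:

```python
# Dummy values for the ten key production parameters corePP1..corePP10.
corePP = {f"corePP{i}": float(i) for i in range(1, 11)}

# Merge rules from the example in the text: which key production
# parameters feed each key production factor.
MERGE_RULES = {
    "corePF1": ["corePP3", "corePP4", "corePP5"],
    "corePF2": ["corePP6", "corePP7"],
    "corePF3": ["corePP9", "corePP10"],
}

def transform(params, rules):
    """Integrate correlated parameters into factors (averaging is assumed)."""
    return {pf: sum(params[p] for p in pps) / len(pps)
            for pf, pps in rules.items()}

corePF = transform(corePP, MERGE_RULES)
print(corePF["corePF1"])  # mean of corePP3..corePP5 -> 4.0
```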
Then, according to the purpose of the prediction model, the production condition selection module 2439 selects, from the key production parameters corePP and/or the key production factors corePF, those best suited to that purpose, and uses them as the processing features. In FIG. 4, it is assumed that the production condition selection module 2439 selects the key production parameters corePP1, corePP5 and corePP8 and the key production factors corePF1 and corePF3 as the processing features PMF1-PMF5.
It should be noted that as the objective of creating the prediction model differs (e.g., analyzing yield or analyzing root causes of defects), the processing features selected by the production condition selection module 2439 may differ as well. Thus, although the key production parameters corePP2-corePP4, corePP6, corePP7, corePP9, and corePP10 and the key production factor corePF2 were not selected as processing features in FIG. 4, they may be selected when building prediction models for other applications. FIG. 4 illustrates the conversion of the production parameters PP by the data preprocessing device 243 for a single product only; in practical applications, the same conversion is applied to the other products produced by the production line. For ease of illustration, FIG. 5 shows the correspondence between a product and its processing features.
Please refer to FIG. 5, which is a diagram illustrating the processing feature data corresponding to each product. In FIG. 5, assume the products are numbered 1 to 100, and each of the products P1-P100 corresponds to five processing features PMF1-PMF5. For example, product P1 corresponds to the processing features P1MF1-P1MF5, and product P100 corresponds to the processing features P100MF1-P100MF5.
To distinguish the processing features corresponding to the different products P1-P100, the processing features of each product are grouped. For example, the processing features P1MF1-P1MF5 corresponding to product P1 are collectively defined as the processing feature data P1MFG of product P1, and the processing features P100MF1-P100MF5 corresponding to product P100 are collectively defined as the processing feature data P100MFG of product P100. The collection of the processing feature data of the products P1-P100 is defined as the processing feature data set PMFGall.
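The grouping above can be sketched as a small data structure. The following Python snippet is purely illustrative: the feature values and the helper name build_feature_dataset are hypothetical, not from the patent.

```python
# Hypothetical sketch: grouping per-product processing features (PMF1-PMF5)
# into a processing feature data set, as FIG. 5 describes. Values are made up.

def build_feature_dataset(products):
    """Map each product ID to its five processing features."""
    return {pid: features for pid, features in products.items()}

products = {
    "P1":   {"PMF1": 452.0, "PMF2": 471.3, "PMF3": 0.82, "PMF4": 148.9, "PMF5": 0.64},
    "P100": {"PMF1": 448.5, "PMF2": 469.0, "PMF3": 0.79, "PMF4": 151.2, "PMF5": 0.61},
}

PMFGall = build_feature_dataset(products)    # the processing feature data set
print(sorted(PMFGall["P1"].keys()))          # -> ['PMF1', 'PMF2', 'PMF3', 'PMF4', 'PMF5']
```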
The number of products, the product numbers, the number of parameters, and so on in FIGS. 4 and 5 are merely examples. FIGS. 6A, 6B, and 6C describe how the data preprocessing device 243 generates the processing feature data sets corresponding to the products bMP, uMP, and eMP in the model building mode bM, the model using mode uM, and the model evaluating mode eM, and provides them to the model building device 245, the model using device 247, and the model evaluating device 249, respectively.
Please refer to FIG. 6A, which is a schematic diagram of the data preprocessing device providing the processing feature data set of the products bMP for model building when the product quality monitoring system is in the model building mode bM. In the model building mode bM, the model building device 245 builds a prediction model according to the processing feature data set of the products bMP and the quality inspection data set of the products bMP. The contents of the prediction model depicted here (the checked strong classifiers rescclf1, rescclf2, the rule item sets and rule weights, etc.) will be described later.
FIG. 6B is a schematic diagram of the data preprocessing device providing the processing feature data set of the products uMP for model use when the product quality monitoring system is in the model using mode uM. In the model using mode uM, the model using device 247 takes the processing feature data set of the products uMP as input, classifies the products uMP using the prediction model built from the products bMP, and uses the classification results as the quality prediction results of the products uMP. In addition, based on these quality prediction results, the model using device 247 may generate defect source analysis information for the products uMP classified as defective.
Fig. 6C is a schematic diagram of the data preprocessing device providing a processing feature data set of the product eMP for model evaluation when the product quality monitoring system is in the model evaluation mode eM. In the model evaluation mode eM, the model using device 247 classifies the processing feature data set of the product eMP using the prediction model established based on the product bMP, and uses the classification result as the quality prediction result of the product eMP. On the other hand, quality detection device 248 will perform quality detection on product eMP and generate a quality detection dataset for product eMP. Then, the model evaluation device 249 compares the quality prediction result of the product eMP with the quality detection data set, and determines whether or not to continue using the previously established prediction model.
In FIGS. 6A, 6B, and 6C, the prediction model built from the products bMP contains the checked strong classifiers rescclf1 and rescclf2 and a list of the relationships between the rule item sets and the rule weights. Here, the checked strong classifier rescclf1 is assumed to contain T1 weak classifiers, and the checked strong classifier rescclf2 is assumed to contain T2 weak classifiers. In practical applications, the number of checked strong classifiers included in the prediction model is not limited. In addition, for simplicity, T1 = T2 may be assumed.
The basic architecture of the prediction model of the present disclosure is described next. In short, embodiments of the present invention use machine learning to establish the checked strong classifiers rescclf1 and rescclf2, each generated by aggregating a plurality of weak classifiers that adopt a binary tree architecture. How the checked strong classifiers rescclf1 and rescclf2 of the prediction model are generated will be described with reference to FIGS. 11 to 15; the components of the weak classifiers are described first.
Please refer to FIG. 7, which is a schematic diagram of the prediction model using a binary tree structure to classify the processing features and quality detection data of products. By default, it is assumed that a strong classifier is composed of 100 weak classifiers (T = 100), and that each weak classifier has a depth of three layers (D = 3).
According to the concept of the invention, no matter which operation mode the product quality monitoring system is in, the weak classifiers in the checked strong classifiers rescclf adopt a binary tree structure to classify the processing feature data sets of the products bMP, uMP, and eMP. In the model building mode bM, the input of the checked strong classifier rescclf is the processing feature data set of the products bMP; in the model using mode uM, the input of the checked strong classifier rescclf is the processing feature data set of the products uMP; and in the model evaluating mode eM, the input of the checked strong classifier rescclf is the processing feature data set of the products eMP. Each node of a weak classifier corresponds to a classification condition clfcond (a classification condition of the prediction model) used to classify the processing features.
In the model building mode bM, the nodes of the weak classifiers are generated according to the selected classifier strategy, and the classification conditions represented by the nodes are repeatedly modified until the checked strong classifiers rescclf are generated. On the other hand, once the checked strong classifiers rescclf have been generated, the nodes of the weak classifiers they contain are used to classify the products uMP and eMP in the model using mode uM and the model evaluating mode eM. Therefore, the settings of the weak classifiers in the checked strong classifiers rescclf determined in the model building mode bM (the number T, the depth D, the classification conditions represented by the nodes, etc.) are not modified in the model using mode uM or the model evaluating mode eM.
Taking FIG. 7 as an example, node 1 is the first-level node; nodes 2-1 and 2-2 are the second-level nodes; and nodes 3-1, 3-2, 3-3, and 3-4 are the third-level nodes (terminal nodes). The circles extending below nodes 3-1, 3-2, 3-3, and 3-4 represent the numbers of products that match the respective node paths. In FIG. 7, the boxes at the bottom show, for the products matching each node path, how many were determined after quality inspection to be qualified (white dots) and how many defective (dotted dots).
For convenience of illustration, the weak classifier architecture and how it classifies the processing feature data set of the products bMP are described as an example. It is assumed here that in the model building mode bM there are a total of 80 products bMP to be classified. The classification conditions assumed for the respective nodes are summarized in Table 1.
TABLE 1
(Table 1 appears only as images in the source. From the example that follows, node 1 corresponds to the condition "feed temperature > 450 °C", node 2-1 to "first-stage heating temperature > 470 °C", and node 3-1 to "lower mold temperature > 150 °C"; the conditions of the remaining nodes are not recoverable from the text.)
It is further assumed here that the processing features of the products bMP include: the feed temperature of the semi-finished product 31a, the first-stage to fourth-stage heating temperatures of the blank heating furnace 331, the maximum pressure value of the warm forging press 333, and the temperatures of the upper die and the lower die of the warm forging press 333. In the weak classifier, the products bMP are classified according to their processing features and the classification conditions listed in Table 1. For simplicity, only node path C1 is used as an example to illustrate how the weak classifier classifies the products bMP.
First, node 1 checks whether the feed temperature of each of the 80 products bMP is greater than 450 °C. Assume that 35 products bMP have a feed temperature greater than 450 °C, and 45 products bMP have a feed temperature less than or equal to 450 °C. Next, node 2-1 checks, for the 35 products bMP with a feed temperature greater than 450 °C, whether the first-stage heating temperature is greater than 470 °C. It is assumed here that, of these 35 products bMP, 25 have a first-stage heating temperature greater than 470 °C, and the remaining 10 have a first-stage heating temperature less than or equal to 470 °C. Then, node 3-1 checks, for the 25 products with a feed temperature greater than 450 °C and a first-stage heating temperature greater than 470 °C, whether the corresponding lower mold temperature is greater than 150 °C. It is assumed here that, of these 25 products bMP, 23 have a lower mold temperature greater than 150 °C, and the remaining two have a lower mold temperature less than or equal to 150 °C. Furthermore, of the 23 products bMP with a feed temperature greater than 450 °C, a first-stage heating temperature greater than 470 °C, and a lower mold temperature greater than 150 °C, a total of 15 were judged qualified and 8 were judged defective by the quality inspection device 248.
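The walk along node path C1 can be sketched as three chained threshold checks. The snippet below is a hedged illustration only; the field names feed_temp, first_stage_heat, and lower_mold_temp are invented stand-ins for the processing features named in the text.

```python
# Illustrative sketch of node path C1 of the depth-3 weak classifier:
# node 1, node 2-1, and node 3-1 applied in sequence. Thresholds come
# from the worked example in the text; field names are hypothetical.

def classify_path_c1(product):
    """Return True if the product follows node path C1 (all three conditions hold)."""
    return (product["feed_temp"] > 450.0             # node 1
            and product["first_stage_heat"] > 470.0  # node 2-1
            and product["lower_mold_temp"] > 150.0)  # node 3-1

sample = {"feed_temp": 455.0, "first_stage_heat": 472.0, "lower_mold_temp": 152.0}
print(classify_path_c1(sample))  # -> True
```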
Incidentally, assume there are 10 products bMP that meet node 1 but not node 2-1 (i.e., feed temperature greater than 450 °C but first-stage heating temperature less than or equal to 470 °C). If all 10 of these products bMP have good inspection results, there is no need to perform further determination at node 3-2 under other classification conditions.
Next, taking FIGS. 8A and 8B as examples, how the weak classifiers shown in FIG. 7 are generated will be described. In FIGS. 7, 8A, and 8B, the first-level nodes are represented by vertical hatching, the second-level nodes by horizontal hatching, and the third-level nodes by diagonal hatching. In addition, a terminal node is marked with a thicker border.
For convenience of illustration, it is assumed here that the model building device 245 provides five classifier strategies for use in building the strong classifiers. In practice, the number and types of classifier strategies are not limited to the examples herein. FIGS. 8A and 8B illustrate two examples of classifier strategies.
Please refer to FIG. 8A, which is a schematic diagram of using random forest as the classifier strategy. Only the weak classifiers 415a, 415b, and 415c are illustrated in this figure. When the random forest strategy is adopted, each of the weak classifiers 415a, 415b, and 415c randomly selects a subset of the products bMP and uses the processing features of the selected products bMP as its input. In addition, the weak classifiers 415a, 415b, and 415c each independently select their classification conditions from the same N classification conditions.
For example, assume there are a total of M products bMP, and that the random forest strategy provides a total of N classification conditions. The input of the weak classifier 415a is then the processing features 411a of M1 products bMP randomly selected from the M products bMP, and its nodes are the N1 classification conditions 413a randomly selected from the N classification conditions. The weak classifier 415a classifies the input processing features 411a of the M1 products bMP according to the selected N1 classification conditions 413a, as shown in FIG. 7. Similarly, the input of the weak classifier 415b is the processing features 411b of M2 products bMP randomly selected from the M products bMP, and its nodes are the N2 classification conditions 413b randomly selected from the N classification conditions; the weak classifier 415b classifies the processing features 411b according to the selected N2 classification conditions 413b. Likewise, the input of the weak classifier 415c is the processing features 411c of M3 products bMP randomly selected from the M products bMP, and its nodes are the N3 classification conditions 413c randomly selected from the N classification conditions; the weak classifier 415c classifies the processing features 411c according to the selected N3 classification conditions 413c.
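A minimal sketch of the random sampling step of the random forest strategy described above. The product IDs and the (feature, threshold) pairs used as classification conditions are hypothetical; this is not the patent's exact implementation.

```python
import random

# Sketch: each weak classifier independently draws a random subset of the M
# products and a random subset of the N classification conditions, per FIG. 8A.
# A "condition" here is modeled as a hypothetical (feature, threshold) pair.

def draw_weak_classifier_inputs(features, conditions, m_sub, n_sub, seed=None):
    rng = random.Random(seed)
    sampled_products = rng.sample(list(features), m_sub)   # the Mi products for this tree
    sampled_conditions = rng.sample(conditions, n_sub)     # the Ni conditions for its nodes
    return sampled_products, sampled_conditions

M_features = {f"bMP{i}": {} for i in range(1, 101)}        # M = 100 products (empty features)
N_conditions = [("feed_temp", 450.0), ("first_stage_heat", 470.0),
                ("lower_mold_temp", 150.0), ("upper_mold_temp", 160.0)]

prods, conds = draw_weak_classifier_inputs(M_features, N_conditions, m_sub=80, n_sub=3, seed=1)
print(len(prods), len(conds))  # -> 80 3
```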
See fig. 8B, which is a schematic diagram of using Adaptive Boosting as a classifier strategy. Only three weak classifiers 433a, 433b, 433c are shown in this figure. When the adaptive boosting strategy is adopted, the inputs of the weak classifiers 433a, 433b, 433c are not identical.
Also assume that there are M products bMP and that the adaptive boosting strategy provides a total of N classification conditions. The weak classifier 433a selects the processing features 431a of M1 products bMP from the M products bMP as its input. The weak classifier 433b then generates adjustment weights 435a for the products bMP according to the classification result of the weak classifier 433a, and generates the first weight-adjusted processing features 431b of the products bMP according to the adjustment weights 435a and the processing features 431a. The first weight-adjusted processing features 431b serve as the input of the weak classifier 433b. The reason the input of the weak classifier 433b is weight-adjusted in advance is to prevent the weak classifier 433b from again misclassifying the products bMP that the weak classifier 433a misclassified. A classification error here refers to a mismatch between the classification result obtained by classifying the processing features of a product bMP and the quality detection result obtained by actually inspecting that product bMP.
Similarly, the weak classifier 433c uses the adjustment weights 435b generated from the classification result of the weak classifier 433b, and generates the second weight-adjusted processing features 431c of the products bMP according to the adjustment weights 435b and the first weight-adjusted processing features 431b. The second weight-adjusted processing features 431c serve as the input of the weak classifier 433c. Likewise, the input of the weak classifier 433c is weight-adjusted to prevent it from again misclassifying the products bMP that the weak classifier 433b misclassified. That is, each weak classifier generated according to the adaptive boosting strategy can correct the misjudgments of the previous weak classifier, achieving an effect of iterative correction.
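The weight adjustment described above can be illustrated with the standard AdaBoost update rule. The patent does not give its exact formula, so the snippet below is only a hedged sketch of the general idea: products misclassified by the previous weak classifier are up-weighted so the next weak classifier focuses on them.

```python
import math

# Standard AdaBoost-style weight update (a stand-in for the patent's
# unspecified adjustment-weight computation, e.g. 435a/435b in FIG. 8B).

def update_weights(weights, predictions, labels):
    err = sum(w for w, p, y in zip(weights, predictions, labels) if p != y)
    err = min(max(err, 1e-10), 1 - 1e-10)            # guard against 0 or 1 error
    alpha = 0.5 * math.log((1 - err) / err)          # confidence of this weak classifier
    new_w = [w * math.exp(alpha if p != y else -alpha)
             for w, p, y in zip(weights, predictions, labels)]
    total = sum(new_w)
    return [w / total for w in new_w], alpha

weights = [0.25, 0.25, 0.25, 0.25]                   # four products, equal weights
preds, labels = [1, 0, 1, 1], [1, 1, 1, 1]           # product 2 misclassified
new_weights, alpha = update_weights(weights, preds, labels)
print(new_weights[1] > new_weights[0])  # -> True (misclassified product up-weighted)
```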
As can be seen from the descriptions of FIGS. 8A and 8B, different classifier strategies can produce different classification results for the same processing features and classification conditions. According to the invention, the classification conditions represented by the nodes of the weak classifiers are determined in the model building mode bM. In the model using mode uM and the model evaluating mode eM, these classification conditions are used directly to classify the processing features of the products uMP and eMP. The weak classifiers established using the various classifier strategies can then be compared to determine which classifies the processing features in a manner that better matches the actual conditions.
Next, a model building mode bM of the product quality monitoring system 24 will be described with reference to fig. 9 to 15; a model usage pattern uM of the product quality monitoring system 24 is illustrated in fig. 16 and 17; and, a model evaluation mode eM of the product quality monitoring system 24 is illustrated in fig. 18 and 19.
Please refer to FIG. 9, which is a schematic diagram illustrating the process of building the prediction model according to the original production parameters bMP_origPP of the products bMP when the product quality monitoring system is in the model building mode bM. Please also refer to FIG. 6A. In the model building mode bM, the production apparatus 23 processes the production material 21 to produce the products (bMP) 29a. While the products bMP are being produced, the sensor 241 (which may be the sensors 231, 232 of FIG. 2) provides the original production parameters bMP_origPP to the data preprocessing device 243. The data preprocessing device 243 then transmits the converted processing feature data set of the products bMP to the model building device 245. Meanwhile, the quality detection device 248 detects the quality of the products bMP, and the resulting quality detection data are transmitted to the model building device 245 as a reference. Thus, the model building device 245 receives the processing feature data set of the products bMP from the data preprocessing device 243 and the quality detection data set of the products bMP from the quality detection device 248.
Please refer to fig. 10, which is a block diagram of a model building apparatus. The model building device 245 includes a strong classifier generating module 245a and a strong classifier parsing module 245 b. The internal structure and the connection relationship between the modules are only briefly described here, and details about the operation of the modules will be described later.
The strong classifier generating module 245a includes: the strong classifier generators 2451a-2451e, an initial selection module 2453, and a check module 2455. The initial selection module 2453 includes an accuracy calculation module 2453a and a policy selection module 2453b. The accuracy calculation module 2453a is electrically connected to the strong classifier generators 2451a-2451e, the policy selection module 2453b, and the check module 2455. The check module 2455 further includes a verification sequence calculation module 2455a, a correlation calculation module 2455c, and a strong classifier selection module 2455e. The correlation calculation module 2455c is electrically connected to the verification sequence calculation module 2455a and the strong classifier selection module 2455e. The check module 2455 is electrically connected to the strong classifier generators 2451a-2451e and to the strong classifier parsing module 245b.
The strong classifier parsing module 245b includes: a rule comparison module 2457 and a node analysis module 2458. The rule comparison module 2457 comprises: a support calculation module 2457a, a coverage calculation module 2457c, and a weight calculation module 2457 b. The support calculation module 2457a and the coverage calculation module 2457c are electrically connected to the node analysis module 2458 and the weight calculation module 2457 b.
For convenience of illustration, the process by which the model building device 245 builds the prediction model is divided into three stages: the initial selection stage STG1, the check stage STG2, and the parsing stage STG3. The strong classifier generating module 245a is involved in the initial selection stage STG1 and the check stage STG2, while the strong classifier parsing module 245b is involved in the parsing stage STG3.
In the initial selection stage STG1, the strong classifier generators 2451a-2451e respectively establish the candidate strong classifier groups canSClfGp_A-canSClfGp_E according to the classifier strategies A-E, and the initial selection module 2453 determines the initial selection strategies according to the candidate strong classifier groups canSClfGp_A-canSClfGp_E. In the check stage STG2, the strong classifier generators 2451a-2451e establish the initial strong classifiers preSClf1-preSClf3 according to the initial selection strategies, and the check module 2455 selects the checked strong classifiers rescclf1 and rescclf2 from the initial strong classifiers preSClf1-preSClf3. In the parsing stage STG3, the strong classifier parsing module 245b reads the production conditions represented by the nodes of the weak classifiers included in the checked strong classifiers rescclf1 and rescclf2, compares the rule item sets of these weak classifiers, and then calculates the rule weights corresponding to the rule item sets.
The operation of the initial selection stage STG1 is described next. In an embodiment of the present invention, it is assumed that five different classifier strategies A-E are provided. First, five candidate strong classifier groups canSClfGp _ A to canSClfGp _ E each including K strong classifiers are generated using different strong classifier generators 2451a to 2451E. According to the present invention, each of the candidate strong classifier groups canSClfGp _ A-canSClfGp _ E generated by the strong classifier generators 2451 a-2451E includes K candidate strong classifiers (as shown in Table 2).
TABLE 2
(Table 2 appears only as images in the source; per the surrounding description, its contents are as follows.)

| Classifier strategy | Candidate strong classifier group | Candidate strong classifiers |
| A | canSClfGp_A | canSClf_A1 - canSClf_AK |
| B | canSClfGp_B | canSClf_B1 - canSClf_BK |
| C | canSClfGp_C | canSClf_C1 - canSClf_CK |
| D | canSClfGp_D | canSClf_D1 - canSClf_DK |
| E | canSClfGp_E | canSClf_E1 - canSClf_EK |
For ease of explanation, assume there are a total of 100 products bMP, divided into K parts (where K is assumed to be 10), so that each part contains 10 products bMP. Each product bMP has its corresponding processing features and quality detection data. The combination of the processing features of all the products bMP is defined as the processing feature data set bmmfgall of the products bMP. The processing feature data set bmmfgall is divided into K equal parts, which are defined as the K processing feature sub-data sets DATdiv(1)-DATdiv(K) corresponding to the K candidate strong classifiers. Then, for each value of k, (K-1) of the processing feature sub-data sets DATdiv(1)-DATdiv(K) are selected as the initial selection training data preTrnDAT_k, and the remaining one is selected as the initial selection test data preTstDAT_k.
Taking the K (here K = 10) candidate strong classifiers canSClf_A1-canSClf_A10 of the candidate strong classifier group canSClfGp_A as an example, Table 3 lists the initial selection training data preTrnDAT_k and the initial selection test data preTstDAT_k corresponding to the kth candidate strong classifier (k = 1 to K). Based on the assumption that there are 100 products bMP and K = 10, the processing feature sub-data sets included in each set of initial selection training data preTrnDAT_k correspond to the processing features of 90 products bMP, and the processing feature sub-data set used as each set of initial selection test data preTstDAT_k corresponds to the processing features of 10 products bMP.
TABLE 3
(Table 3 appears only as images in the source. Based on the surrounding description, each candidate strong classifier holds out one processing feature sub-data set as test data and trains on the other K-1; the round-robin assignment shown below is inferred.)

| k | Candidate strong classifier | Initial selection training data preTrnDAT_k | Initial selection test data preTstDAT_k |
| 1 | canSClf_A1 | DATdiv(2) - DATdiv(10) | DATdiv(1) |
| 2 | canSClf_A2 | DATdiv(1), DATdiv(3) - DATdiv(10) | DATdiv(2) |
| ... | ... | ... | ... |
| 10 | canSClf_A10 | DATdiv(1) - DATdiv(9) | DATdiv(10) |
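The K-fold arrangement described above can be sketched as follows. The round-robin assignment of DATdiv(k) as the test data for the kth candidate strong classifier is an assumption consistent with the description; the product IDs are illustrative.

```python
# Sketch: split the processing feature data set into K sub-data sets
# DATdiv(1)..DATdiv(K); for each fold k, hold one sub-data set out as
# initial selection test data and train on the remaining K-1.

def kfold_splits(sub_datasets):
    """Yield (training, test) pairs, holding out one sub-data set per fold."""
    K = len(sub_datasets)
    for k in range(K):
        test = sub_datasets[k]
        training = [d for i, d in enumerate(sub_datasets) if i != k]
        yield training, test

# 100 hypothetical products bMP1..bMP100 divided into K = 10 parts of 10 each.
DATdiv = [[f"bMP{10 * i + j + 1}" for j in range(10)] for i in range(10)]
splits = list(kfold_splits(DATdiv))
print(len(splits), len(splits[0][0]), len(splits[0][1]))  # -> 10 9 10
```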
Then, the strong classifier generators 2451b-2451e generate the candidate strong classifier groups canSClfGp_B-canSClfGp_E by matching the classifier strategies B-E with the initial selection training data, in a manner similar to that shown in Table 3. As can be seen from the foregoing description, the initial selection training data preTrnDAT_1 corresponding to k = 1, matched with the classifier strategies A-E, yields the candidate strong classifiers canSClf_A1, canSClf_B1, canSClf_C1, canSClf_D1, and canSClf_E1. Similarly, the remaining initial selection training data preTrnDAT_2-preTrnDAT_10, matched with the classifier strategies A-E, yield their corresponding candidate strong classifiers.
Please refer to FIG. 11, which is a flowchart of a strong classifier generator generating a candidate strong classifier group. Each of the strong classifier generators 2451a-2451e performs this process. First, a strong classifier counter is initialized (k = 1) (step S41). Next, the kth candidate strong classifier in the candidate strong classifier group is formed (step S43). Then, it is determined whether all K candidate strong classifiers in the candidate strong classifier group have been formed (step S45). If so, the process ends. If the determination result in step S45 is negative, the strong classifier counter is incremented (k++) (step S47), and step S43 is executed again.
Step S43 further includes the following steps. First, a weak classifier counter is initialized (t = 1) (step S43a). Next, using the initial selection training data preTrnDAT and the classifier strategy corresponding to the strong classifier generator, the tth weak classifier of the kth candidate strong classifier corresponding to that classifier strategy is formed (step S43c). Thereafter, it is determined whether all the weak classifiers of the kth candidate strong classifier have been formed (assuming each strong classifier includes T weak classifiers) (step S43g).
If the determination result of step S43g is negative, the weak classifier counter t is incremented (step S43e) and step S43c is executed again. If the determination result of step S43g is positive, the T weak classifiers that have been formed are collected together as the kth candidate strong classifier in the candidate strong classifier group (step S43h), and step S43 ends.
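The nested loops of FIG. 11 (steps S41-S47 and S43a-S43h) can be sketched as follows. Here, train_weak_classifier is a hypothetical placeholder for the strategy-specific training of step S43c, not the patent's actual routine.

```python
# Sketch of FIG. 11: an outer loop over the K candidate strong classifiers
# (S41/S45/S47) and an inner loop forming the T weak classifiers of each
# candidate strong classifier (S43a/S43g/S43e), collected in step S43h.

def train_weak_classifier(strategy, training_data, t):
    return (strategy, t)          # placeholder for a real trained weak classifier

def build_candidate_group(strategy, training_data, K, T):
    group = []
    for k in range(1, K + 1):                                 # outer loop: k = 1..K
        weak = [train_weak_classifier(strategy, training_data, t)
                for t in range(1, T + 1)]                     # inner loop: t = 1..T
        group.append(weak)                                    # S43h: kth candidate strong classifier
    return group

group_A = build_candidate_group("A", None, K=10, T=100)
print(len(group_A), len(group_A[0]))  # -> 10 100
```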
Please refer to FIG. 12, which illustrates how the initial selection module determines whether a classifier strategy is selected as an initial selection strategy, taking the classification results generated by the candidate strong classifier group canSClfGp_A as an example. As mentioned above, the candidate strong classifier group canSClfGp_A includes the candidate strong classifiers canSClf_A1-canSClf_AK. Each of the candidate strong classifiers canSClf_A1-canSClf_AK is tested with its respective initial selection test data preTstDAT_1-preTstDAT_K, yielding the individual accuracies pdtA1-pdtAK corresponding to the candidate strong classifiers canSClf_A1-canSClf_AK.
Taking the candidate strong classifier canSClf_A1 as an example, its individual accuracy pdtA1 is calculated as follows. Continuing the assumptions of Table 3, assume that a total of 100 products bMP1-bMP100 are used to build the prediction model. The processing features of the products bMP1-bMP90 serve as the initial selection training data preTrnDAT_1 used to train the candidate strong classifier canSClf_A1, and the processing features of the products bMP91-bMP100 serve as the initial selection test data preTstDAT_1 used to test the individual accuracy pdtA1 of the candidate strong classifier canSClf_A1.
After the processing features of the products bMP91-bMP100 are input into the candidate strong classifier canSClf_A1, classification results are generated according to the node paths of the candidate strong classifier canSClf_A1. These classification results correspond to the quality predictions for the products bMP91-bMP100. The classification results are then compared with the quality detection results of the products bMP91-bMP100. After the comparison, the number of products bMP whose quality was correctly predicted by the candidate strong classifier canSClf_A1 (for example, 5 products) is known. Dividing the number of correctly predicted products bMP by the number of products bMP91-bMP100 (10) yields the individual accuracy corresponding to the candidate strong classifier canSClf_A1 (e.g., 5/10 × 100% = 50%). The individual accuracies pdtA2-pdtAK corresponding to the candidate strong classifiers canSClf_A2-canSClf_AK can be calculated in a similar manner.
The group average accuracy pdtGA corresponding to the candidate strong classifier group canSClfGp_A is obtained by summing the individual accuracies pdtA1-pdtAK of the candidate strong classifiers canSClf_A1-canSClf_AK and dividing the sum by the number (K) of candidate strong classifiers in the group. The group average accuracies pdtGB-pdtGE of the candidate strong classifier groups canSClfGp_B-canSClfGp_E can be calculated in the same way. Thereafter, each of the group average accuracies pdtGA-pdtGE is compared with the average accuracy threshold.
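The two accuracy computations above reduce to simple ratios; the following sketch reproduces the worked example (5 of 10 test products predicted correctly gives 50%). The prediction and label vectors are invented for illustration.

```python
# Individual accuracy: correct predictions / number of test products.
# Group average accuracy: mean of the K individual accuracies.

def individual_accuracy(predictions, actual):
    correct = sum(1 for p, a in zip(predictions, actual) if p == a)
    return correct / len(actual)

def group_average_accuracy(individual_accuracies):
    return sum(individual_accuracies) / len(individual_accuracies)

# Hypothetical results for test products bMP91-bMP100: exactly 5 of 10 match.
preds  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
actual = [1, 1, 1, 1, 1, 0, 0, 1, 0, 1]
print(individual_accuracy(preds, actual))                      # -> 0.5
print(round(group_average_accuracy([0.5, 0.9, 0.8]), 4))       # -> 0.7333
```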
Please refer to FIG. 13, which is a flowchart of the initial selection module determining whether a classifier strategy can serve as an initial selection strategy according to the quality prediction results generated by a candidate strong classifier group. First, a strong classifier counter is initialized (k = 1) (step S501). Then, the individual accuracy pdtk corresponding to the kth candidate strong classifier is calculated (step S503).
Next, it is determined whether all the strong classifiers in the candidate strong classifier group canSClfGp have been processed (step S505). If not, the strong classifier counter k is incremented (step S506) and step S503 is executed again.
If the determination result in step S505 is positive, the policy selection module 2453b sums and averages the individual accuracies pdt1-pdtK of the K strong classifiers in the candidate strong classifier group canSClfGp to obtain the group average accuracy pdtG corresponding to the candidate strong classifier group canSClfGp (step S507). Whether the candidate strong classifier group canSClfGp satisfies the initial selection condition is then determined by comparing the group average accuracy pdtG with the average accuracy threshold.
That is, the initial selection condition is equivalent to determining whether the group average accuracy pdtG of the candidate strong classifier group canSClfGp is higher than or equal to the average accuracy threshold (step S509). If so, the strategy selection module 2453b determines that the classifier strategy can be listed as an initial selection strategy (step S511). At this time, it is confirmed that the candidate strong classifier group canSClfGp satisfies the initial selection condition.
Otherwise, if the group average accuracy pdtG of the candidate strong classifier group canSClfGp is lower than the average accuracy threshold, the strategy selection module 2453b determines that the classifier strategy should not be listed as an initial selection strategy (step S513). At this time, it is confirmed that the candidate strong classifier group canSClfGp does not satisfy the initial selection condition.
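Steps S509 to S513, applied per classifier strategy, can be sketched as a threshold test. The function name is illustrative; the group averages below are the values of the running example.

```python
def list_initial_strategies(group_avgs, threshold=0.8):
    """Steps S509-S513 for each classifier strategy: listed as an initial
    selection strategy (Y) when its group average accuracy pdtG is higher
    than or equal to the average accuracy threshold, otherwise N."""
    return {s: ("Y" if pdtG >= threshold else "N")
            for s, pdtG in group_avgs.items()}

# Group average accuracies pdtGA..pdtGE from the running example.
group_avgs = {"A": 0.8, "B": 0.9, "C": 0.7, "D": 0.8, "E": 0.6}
decision = list_initial_strategies(group_avgs)
```

With an 80% threshold, strategies A, B, and D pass (equality counts as passing), matching the Y/N pattern of table 4.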
Please refer to table 4, which lists the comparison between the group average accuracy pdtG of each candidate strong classifier group and the average accuracy threshold for the foregoing example. In table 4, the average accuracy threshold is assumed to be 80%; Y denotes a classifier strategy selected as an initial selection strategy, and N denotes a classifier strategy not selected as an initial selection strategy.
TABLE 4
Classifier strategy | Candidate strong classifier group | Group average accuracy pdtG | Initial selection strategy
A | canSClfGp_A | 80% | Y
B | canSClfGp_B | 90% | Y
C | canSClfGp_C | 70% | N
D | canSClfGp_D | 80% | Y
E | canSClfGp_E | 60% | N
The group average accuracy pdtGA of the candidate strong classifier group canSClfGp_A is 80%, which is equal to the average accuracy threshold, so classifier strategy A is selected as an initial selection strategy. The group average accuracy pdtGB of canSClfGp_B is 90%, which is higher than the threshold, so classifier strategy B is selected. The group average accuracy pdtGC of canSClfGp_C is 70%, which is lower than the threshold, so classifier strategy C is not selected. The group average accuracy pdtGD of canSClfGp_D is 80%, which is equal to the threshold, so classifier strategy D is selected. The group average accuracy pdtGE of canSClfGp_E is 60%, which is lower than the threshold, so classifier strategy E is not selected.
As a result, classifier strategies A, B, and D are listed as initial selection strategies, while classifier strategies C and E are not. According to the concept of the present disclosure, each initial selection strategy is then used by the corresponding strong classifier generator to generate an initially selected strong classifier. In this example, for the classifier strategies A, B, and D listed as initial selection strategies, the strong classifier generators 2451a, 2451b, and 2451d generate the corresponding initially selected strong classifiers preSClf_A, preSClf_B, and preSClf_D.
In accordance with the inventive concept, the same processing feature data set bmmfgall of the products bMP is used in both the initial selection stage STG1 and the check stage STG2. However, the way in which the training data and the test data are defined from the processing feature data set bmmfgall differs between the initial selection stage STG1 and the check stage STG2.
In the initial selection stage STG1, the strong classifier generators 2451a to 2451e generate the candidate strong classifier groups canSClfGp using the initial selection training data preTrnDAT and the initial selection test data preTstDAT. In this stage, the initial selection training data preTrnDAT and initial selection test data preTstDAT used to generate each candidate strong classifier canSClf change with the strong classifier counter (k) that indexes the candidate strong classifiers canSClf in the candidate strong classifier group canSClfGp.
In the check stage STG2, on the other hand, the strong classifier generators 2451a to 2451e generate the initially selected strong classifiers preSClf according to the initial selection strategies, using randomly selected check training data reTrnDAT as input. The checking module 2455 then performs a test using the check test data reTstDAT and the quality detection data reTst_QC of the check verification products. Therefore, in the check stage STG2, both the check training data reTrnDAT used to generate the initially selected strong classifiers preSClf and the check test data reTstDAT used to evaluate them are held fixed.
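The contrast between the two stages can be sketched as follows. The exact k-dependent split rule in STG1 is not specified in the text, so the rotation below is an assumption (a cross-validation-style shift); the fixed random split in STG2 follows the description above. All names are illustrative.

```python
import random

def split_stage1(samples, k, test_size=10):
    """Initial selection stage STG1: the train/test split changes with the
    strong classifier counter k, so each candidate canSClf sees different
    preTrnDAT / preTstDAT (assumed rotation scheme)."""
    n = len(samples)
    start = (k * test_size) % n
    test = samples[start:start + test_size]
    train = samples[:start] + samples[start + test_size:]
    return train, test

def split_stage2(samples, test_size=10, seed=42):
    """Check stage STG2: one randomly selected split (reTrnDAT / reTstDAT),
    then held fixed for every initially selected strong classifier."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    return shuffled[test_size:], shuffled[:test_size]
```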
Please refer to fig. 14, which is a schematic diagram of the strong classifier generators establishing the initially selected strong classifiers preSClf, the verification sequence calculation module using the initially selected strong classifiers preSClf to generate the verification sequences, and the correlation calculation module calculating the correlation coefficients of the verification sequences. The strong classifier generator 2451a generates the initially selected strong classifier preSClf_A according to classifier strategy A and the check training data reTrnDAT. The strong classifier generator 2451b generates preSClf_B according to classifier strategy B and the check training data reTrnDAT. The strong classifier generator 2451d generates preSClf_D according to classifier strategy D and the check training data reTrnDAT.
In the check stage STG2, the check test data reTstDAT and the corresponding quality detection data reTst_QC of the check verification products (for example, 10 of the 100 products bMP are sampled) are used as input. After the initially selected strong classifiers preSClf_A, preSClf_B, and preSClf_D predict the quality of the check verification products from the check test data reTstDAT, the quality prediction results QpdtA, QpdtB, and QpdtD of the check verification products, corresponding to preSClf_A, preSClf_B, and preSClf_D respectively, are generated. The verification sequence calculation module 2455a then compares each of the quality prediction results QpdtA, QpdtB, and QpdtD with the quality detection data reTst_QC of the check verification products to generate a plurality of verification results. These verification results are listed one by one to form the verification sequences seqA, seqB, and seqD shown in table 5.
In table 5, "1" and "0" are the verification results with which the verification sequence calculation module 2455a records whether the quality detection data reTst_QC of each check verification product matches the quality prediction results QpdtA, QpdtB, and QpdtD generated by the initially selected strong classifiers preSClf: "1" represents a match, and "0" represents a mismatch. The verification results are combined into the verification sequence seq corresponding to each initially selected strong classifier preSClf. In table 5, the verification sequence seqA corresponding to preSClf_A is {1,0,1,1,1,0,1,1,1,1}; the verification sequence seqB corresponding to preSClf_B is {1,0,1,1,1,1,1,1,1,1}; and the verification sequence seqD corresponding to preSClf_D is {1,1,0,1,1,1,0,1,1,1}.
TABLE 5
Check verification product | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
seqA (preSClf_A) | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1
seqB (preSClf_B) | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
seqD (preSClf_D) | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1
A verification result of "1" means that the quality prediction result (e.g., QpdtA, QpdtB, or QpdtD) that the initially selected strong classifier preSClf derives from the check test data reTstDAT coincides with the quality detection data reTst_QC of the check verification product. That is, preSClf_A, preSClf_B, or preSClf_D predicts that the check verification product is good and the quality detection data reTst_QC also shows that it is good; alternatively, the classifier predicts that the check verification product is defective and the quality detection data reTst_QC also shows that it is defective.
A verification result of "0" (mismatch) means that the quality prediction result (e.g., QpdtA, QpdtB, or QpdtD) derived by the initially selected strong classifier preSClf from the check test data reTstDAT is inconsistent with the quality detection data reTst_QC of the check verification product. That is, preSClf_A, preSClf_B, or preSClf_D predicts that the check verification product is good, but the quality detection data reTst_QC shows that it is defective; alternatively, the classifier predicts that the check verification product is defective, but the quality detection data reTst_QC shows that its quality is good.
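Building a verification sequence is a per-product equality test between prediction and detection. A minimal sketch (names and data invented; 1 = good, 0 = defective; the example values are chosen so the output reproduces seqA of table 5):

```python
def verification_sequence(predictions, qc_results):
    """1 where the preSClf quality prediction matches the quality detection
    data reTst_QC of the check verification product, 0 otherwise."""
    return [1 if p == q else 0 for p, q in zip(predictions, qc_results)]

reTst_QC = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]   # detection results (assumed)
predA    = [1, 0, 1, 0, 0, 0, 0, 0, 0, 0]   # preSClf_A predictions (assumed)
seqA = verification_sequence(predA, reTst_QC)
```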
After the verification sequence calculation module 2455a generates the verification sequences seqA, seqB, and seqD corresponding to the initially selected strong classifiers preSClf_A, preSClf_B, and preSClf_D, the correlation calculation module 2455c applies the correlation calculation formula to each pair of verification sequences to obtain the verification sequence correlation coefficients A-B/B-A, A-D/D-A, and B-D/D-B. Table 6 lists the verification sequences and their correlation coefficients.
TABLE 6
Verification sequence pair | Verification sequence correlation coefficient
seqA, seqB (A-B/B-A) | 0.666667
seqA, seqD (A-D/D-A) | 2.5
seqB, seqD (B-D/D-B) | 0.16667
Please refer to fig. 14 and table 6. The verification sequence correlation coefficient A-B/B-A represents the correlation between the verification sequences seqA and seqB; A-D/D-A represents the correlation between seqA and seqD; and B-D/D-B represents the correlation between seqB and seqD. Applying the correlation calculation formula to the verification sequences seqA, seqB, and seqD shown in table 5 yields a verification sequence correlation coefficient A-B/B-A of 0.666667, a coefficient A-D/D-A of 2.5, and a coefficient B-D/D-B of 0.16667.
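The text does not spell out the correlation calculation formula. The standard Pearson correlation coefficient reproduces the reported A-B/B-A value (0.666667) and the B-D/D-B magnitude (0.16667), so the sketch below uses Pearson as a plausible assumption; note that it yields -0.25 for seqA/seqD rather than the reported 2.5, so treat this as an illustration, not the patent's exact formula.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

# Verification sequences from table 5.
seqA = [1, 0, 1, 1, 1, 0, 1, 1, 1, 1]
seqB = [1, 0, 1, 1, 1, 1, 1, 1, 1, 1]
seqD = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
```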
The strong classifier selection module 2455e further compares the verification sequence correlation coefficients A-B/B-A, A-D/D-A, and B-D/D-B and confirms that the correlation between the initially selected strong classifiers generated according to initial selection strategy preStg1 (classifier strategy A) and initial selection strategy preStg3 (classifier strategy D) is the lowest (the coefficient A-D/D-A deviates most from 1). Therefore, from the initially selected strong classifiers preSClf_A, preSClf_B, and preSClf_D, the strong classifier selection module 2455e selects preSClf_A and preSClf_D as the checked strong classifiers reSClf1 and reSClf2. The process of selecting the initial selection strategies preSelStg from the classifier strategies and generating the initially selected strong classifiers preSClf is summarized in fig. 15.
Please refer to fig. 15, which is a flowchart of the strong classifier generating module. The generation of the initially selected strong classifiers mainly comprises four steps. First, the strong classifier generators 2451a to 2451e generate a plurality of candidate strong classifier groups (e.g., canSClfGp_A to canSClfGp_E) according to the classifier strategies (step S31). Next, the primary selection module 2453 determines the initial selection strategies according to the candidate strong classifier groups (step S33). Then, the strong classifier generators 2451a to 2451e establish the initially selected strong classifiers (e.g., preSClf_A, preSClf_B, preSClf_D) according to the initial selection strategies (step S35); and the checking module 2455 selects two of the initially selected strong classifiers as the checked strong classifiers reSClf1 and reSClf2.
Step S33 further includes the following steps. First, the accuracy calculation module 2453a calculates the group average accuracy pdtG of each candidate strong classifier group canSClfGp and determines, according to the group average accuracy pdtG, whether the corresponding classifier strategy should be selected as an initial selection strategy (step S331). Next, it is determined whether the number of initial selection strategies is sufficient (e.g., whether there are 3 or more) (step S333).
If the determination result in step S333 is positive, step S35 is executed. Otherwise, the strategy selection module 2453b controls the corresponding strong classifier generators 2451a to 2451e to modify their model structure parameters and generate the candidate strong classifier groups canSClfGp again (step S335). The model structure parameters include, for example, the number (T) of weak classifiers included in a strong classifier and the depth (D) of the nodes included in a weak classifier. Step S335 differs from step S331 in that the model structure parameters used by the strong classifier generators 2451a to 2451e to generate the strong classifiers have been changed.
Next, after the model structure parameters are modified and the candidate strong classifier groups canSClfGp are generated again, the accuracy calculation module 2453a recalculates the group average accuracy pdtG for each regenerated candidate strong classifier group canSClfGp, and it is then determined whether a classifier strategy that was not originally selected can now be selected as an initial selection strategy (step S337).
After the strong classifier selection module 2455e selects the checked strong classifiers reSClf, the node analysis module 2458 further reads the node paths from the root node to the terminal (end) nodes of the weak classifiers included in each checked strong classifier reSClf. Continuing the above example, the rule reading module 2458b reads the node paths between the root node and the terminal nodes of each of the T (for example, T = 100) weak classifiers included in the candidate strong classifiers canSClf_A and canSClf_D.
The node path from the root node of a weak classifier to each terminal node is composed of a plurality of nodes, and, as illustrated in fig. 7, each node represents a classification condition. A node path from the root node to a terminal node therefore represents that the classification conditions of all the nodes on that path must be satisfied. For convenience of explanation, the node path from the root node of a weak classifier to a terminal node is defined as a classification rule.
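Reading classification rules from a weak classifier amounts to collecting the conditions along every root-to-leaf path. The sketch below models a weak classifier as a nested dict (an assumed encoding, not the patent's data structure); each rule is the list of (condition, satisfied?) pairs along one path.

```python
def read_rules(node, path=()):
    """Depth-first walk from the root node; each terminal node (leaf)
    closes one classification rule."""
    if "condition" not in node:          # terminal node reached
        return [list(path)]
    cond = node["condition"]
    rules = []
    rules += read_rules(node["left"],  path + ((cond, True),))
    rules += read_rules(node["right"], path + ((cond, False),))
    return rules

# A tiny illustrative weak classifier with two classification conditions.
weak_clf = {
    "condition": "feed_temp < pdtTmpTh_a",
    "left":  {"label": "good"},
    "right": {
        "condition": "pressure_max < Pth_a",
        "left":  {"label": "good"},
        "right": {"label": "defect"},
    },
}
rules = read_rules(weak_clf)   # three root-to-leaf classification rules
```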
As shown in fig. 10, the strong classifier parsing module 245b includes a node analysis module 2458 and a rule comparison module 2457. The node analysis module 2458 further comprises a rule reading module 2458b and a rule item set conversion module 2458a; the rule comparison module 2457 further comprises a support calculation module 2457a, a coverage calculation module 2457c, and a weight calculation module 2457b.
The rule reading module 2458b reads the classification rules represented by the node paths, from root node to terminal nodes, of the weak classifiers in the checked strong classifiers reSClf1 and reSClf2. Because each node of a weak classifier is a classification condition and each node path includes a different number of nodes, the classification conditions represented by the nodes on a node path are combined to form a classification rule. The rule item set conversion module 2458a then converts the production conditions in the classification rules into frequent item sets (in the Apriori manner), yielding the relationships shown in table 7.
Since the candidate strong classifiers canSClf_A and canSClf_D each include a large number of weak classifiers (e.g., 100), analyzing the node paths of every weak classifier yields many rule item sets, which are not all detailed herein. Table 7 lists, as examples, the rule item sets included in weak classifiers 1 to 3 and the production conditions included in each rule item set.
TABLE 7
Weak classifier | Rule item set | Production conditions (item set classification range)
1 | 1-1 | feed temperature < pdtTmpTh_a
2 | 2-1 | feed temperature between pdtTmpTh_a and pdtTmpTh_b; pressure maximum < Pth_a
2 | 2-2 | feed temperature < pdtTmpTh_a; upper mold temperature < mldUsTmpTh_a
3 | 3-1 | feed temperature < pdtTmpTh_a; lower mold temperature < mldLTmpTh_a
3 | 3-2 | pressure maximum between Pth_a and Pth_b
Please refer to table 8, which is an example of the support calculation module 2457a calculating the support according to the rule item set.
TABLE 8
Rule item set | Item set classification range | Support degree
1-1 | feed temperature < pdtTmpTh_a | 3/5
2-1 | feed temperature between pdtTmpTh_a and pdtTmpTh_b; pressure maximum < Pth_a | 1/5
2-2 | feed temperature < pdtTmpTh_a; upper mold temperature < mldUsTmpTh_a | 1/5
3-1 | feed temperature < pdtTmpTh_a; lower mold temperature < mldLTmpTh_a | 1/5
3-2 | pressure maximum between Pth_a and Pth_b | 1/5
It is assumed here that there are five rule item sets. The item set classification range corresponding to rule item set 1-1 is that the feed temperature is less than the critical temperature pdtTmpTh_a. The range corresponding to rule item set 2-1 is that the feed temperature is between the critical temperatures pdtTmpTh_a and pdtTmpTh_b and the pressure maximum is less than the critical pressure Pth_a. The range corresponding to rule item set 2-2 is that the feed temperature is less than the critical temperature pdtTmpTh_a and the upper mold temperature is less than the critical temperature mldUsTmpTh_a. The range corresponding to rule item set 3-1 is that the feed temperature is less than the critical temperature pdtTmpTh_a and the lower mold temperature is less than the critical temperature mldLTmpTh_a. The range corresponding to rule item set 3-2 is that the pressure maximum is between the critical pressures Pth_a and Pth_b.
The support degree in table 8 is the ratio of the number of occurrences of an item set classification range among all the rule item sets to the total number of rule item sets. For example, in table 8, the item set classification range corresponding to rule item set 1-1 (i.e., the feed temperature is less than the critical temperature pdtTmpTh_a) also appears in rule item set 2-2 and rule item set 3-1. That range thus occurs three times among the total number (5) of rule item sets, so the support degree of the item set classification range corresponding to rule item set 1-1 is 3/5.
On the other hand, the item set classification ranges corresponding to rule item sets 2-1, 2-2, 3-1, and 3-2 are not duplicated in the other rule item sets. That is, the range corresponding to rule item set 2-1 (feed temperature between pdtTmpTh_a and pdtTmpTh_b, pressure maximum less than Pth_a) appears only in rule item set 2-1 and not in the other four rule item sets; the range corresponding to rule item set 2-2 (feed temperature less than pdtTmpTh_a, upper mold temperature less than mldUsTmpTh_a) appears only in rule item set 2-2; the range corresponding to rule item set 3-1 (feed temperature less than pdtTmpTh_a, lower mold temperature less than mldLTmpTh_a) appears only in rule item set 3-1; and the range corresponding to rule item set 3-2 (pressure maximum between Pth_a and Pth_b) appears only in rule item set 3-2. Therefore, the support degrees of rule item sets 2-1, 2-2, 3-1, and 3-2 are each 1/5.
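The support computation can be sketched as subset counting over the rule item sets, matching the example above (the full range of 1-1 reappears inside 2-2 and 3-1, so its support is 3/5). The condition strings and the frozenset encoding are illustrative.

```python
def support(item_set, all_item_sets):
    """Occurrences of item_set (as a subset) among all rule item sets,
    divided by the total number of rule item sets."""
    hits = sum(1 for s in all_item_sets if item_set <= s)
    return hits / len(all_item_sets)

rule_1_1 = frozenset({"feed_temp < pdtTmpTh_a"})
rule_2_1 = frozenset({"pdtTmpTh_a < feed_temp < pdtTmpTh_b",
                      "pressure_max < Pth_a"})
rule_2_2 = frozenset({"feed_temp < pdtTmpTh_a",
                      "upper_mold_temp < mldUsTmpTh_a"})
rule_3_1 = frozenset({"feed_temp < pdtTmpTh_a",
                      "lower_mold_temp < mldLTmpTh_a"})
rule_3_2 = frozenset({"Pth_a < pressure_max < Pth_b"})
all_sets = [rule_1_1, rule_2_1, rule_2_2, rule_3_1, rule_3_2]
```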
See table 9, which is an example of the coverage calculation module 2457c calculating the coverage according to the rule item sets. The coverage of a rule item set is the ratio of the number of products bMP judged defective whose processing features fall within the item set classification range of that rule item set to the total number of defective products bMP.
TABLE 9
Rule item set | Coverage
1-1 | 0.1
2-1 | 0.32
2-2 | 0.98
3-1 | 0.36
3-2 | 0.08
The coverage calculation module 2457c receives the rule item sets from the rule item set conversion module 2458a and receives quality detection data about the products bMP (e.g., how many products bMP are determined to be defective, and the processing features corresponding to those products) from the quality detection module. The coverage calculation module 2457c compares the quality detection data of the products bMP with the item set classification ranges corresponding to the rule item sets, determines whether the processing features of each product bMP judged defective meet the item set classification range of each rule item set (e.g., rule item sets 1-1, 2-1, 2-2, 3-1, 3-2), and counts, for each rule item set, the number of defective products bMP whose processing features meet its item set classification range.
For each of the rule item sets 1-1, 2-1, 2-2, 3-1, and 3-2, the coverage calculation module 2457c divides the number of defective products bMP whose processing features meet the item set classification range of that rule item set by the total number of products bMP determined to be defective by the quality detection data, thereby obtaining the corresponding coverage. As shown in table 9, the coverage corresponding to rule item set 1-1 is 0.1; rule item set 2-1, 0.32; rule item set 2-2, 0.98; rule item set 3-1, 0.36; and rule item set 3-2, 0.08.
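The coverage computation can be sketched as follows; the synthetic defective-product features are chosen only so that rule item set 2-2 reproduces the 0.98 coverage of table 9 (49 of 50 defective products meet its range), and the encoding mirrors the frozenset convention used for the rule item sets.

```python
def coverage(rule_items, defective_products):
    """Among products bMP judged defective, the fraction whose processing
    features contain every condition of the rule item set."""
    hits = sum(1 for feats in defective_products if rule_items <= feats)
    return hits / len(defective_products)

rule_2_2 = frozenset({"feed_temp < pdtTmpTh_a",
                      "upper_mold_temp < mldUsTmpTh_a"})

# 50 defective products: 49 satisfy both conditions of rule 2-2.
defective = [frozenset({"feed_temp < pdtTmpTh_a",
                        "upper_mold_temp < mldUsTmpTh_a"})] * 49
defective += [frozenset({"Pth_a < pressure_max < Pth_b"})]
cov_2_2 = coverage(rule_2_2, defective)   # 49/50 = 0.98
```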
The weight calculation module 2457b receives the support degrees corresponding to the rule item sets 1-1, 2-1, 2-2, 3-1, and 3-2 from the support calculation module 2457a and the coverages corresponding to the rule item sets 1-1, 2-1, 2-2, 3-1, and 3-2 from the coverage calculation module 2457c, and then calculates the rule weight corresponding to each rule item set.
Please refer to table 10, which is an example of the weight calculating module 2457b calculating the rule weight corresponding to the rule item set according to the support degree and the coverage rate.
TABLE 10
Rule item set | Support degree | Coverage | Original rule weight | Weight ratio | Translated rule weight (shift 0.5) | Translated weight ratio
1-1 | 0.6 | 0.10 | 0.060 | 0.147 | 0.660 | 0.201
2-1 | 0.2 | 0.32 | 0.064 | 0.157 | 0.574 | 0.175
2-2 | 0.2 | 0.98 | 0.196 | 0.480 | 1.036 | 0.316
3-1 | 0.2 | 0.36 | 0.072 | 0.176 | 0.602 | 0.184
3-2 | 0.2 | 0.08 | 0.016 | 0.039 | 0.406 | 0.124
The weight calculation module 2457b multiplies the support degree (0.6) corresponding to rule item set 1-1 by its coverage (0.1) to obtain the original rule weight 0.6 × 0.1 = 0.06; multiplies the support degree (0.2) of rule item set 2-1 by its coverage (0.32) to obtain 0.2 × 0.32 = 0.064; multiplies the support degree (0.2) of rule item set 2-2 by its coverage (0.98) to obtain 0.2 × 0.98 = 0.196; multiplies the support degree (0.2) of rule item set 3-1 by its coverage (0.36) to obtain 0.2 × 0.36 = 0.072; and multiplies the support degree (0.2) of rule item set 3-2 by its coverage (0.08) to obtain 0.2 × 0.08 = 0.016. The weight calculation module 2457b then sums the original rule weights corresponding to rule item sets 1-1, 2-1, 2-2, 3-1, and 3-2 to obtain the total of the original rule weights, and divides each original rule weight by this total to obtain the weight ratio of the original rule weight corresponding to each rule item set.
In table 10, the support degree ranges between 0 and 1 and the coverage ranges between 0 and 1, so the original rule weight obtained by multiplying them also ranges between 0 and 1. To avoid the situation in which one of the support degree or coverage is large but the other is exactly 0, making the original rule weight 0, a support translation amount (e.g., 0.5) and a coverage translation amount (e.g., 0.5) can be used in combination. That is, the support degree and the support translation amount are summed to obtain the translated support degree, and the coverage and the coverage translation amount are summed to obtain the translated coverage. The translated support degree is then multiplied by the translated coverage to obtain the translated rule weight. Thereafter, the sum of the translated rule weights and the translated rule weight ratios corresponding to the rule item sets can be calculated, as described in table 10.
After the translation, the translated support degree ranges between 0.5 and 1.5, and the translated coverage also ranges between 0.5 and 1.5. Accordingly, the translated rule weight, being their product, ranges between 0.5 × 0.5 = 0.25 and 1.5 × 1.5 = 2.25, so the case where the rule weight is 0 is avoided.
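The weight computation, with and without the translation, can be sketched as below; the support/coverage pairs are the values from tables 8 and 9, and the function name is illustrative.

```python
def rule_weights(sup_cov, shift=0.0):
    """Multiply (support + shift) by (coverage + shift) for each rule item
    set, then normalize the products into weight ratios."""
    raw = {k: (s + shift) * (c + shift) for k, (s, c) in sup_cov.items()}
    total = sum(raw.values())
    return {k: w / total for k, w in raw.items()}

sup_cov = {          # (support degree, coverage) from tables 8-9
    "1-1": (0.6, 0.10),
    "2-1": (0.2, 0.32),
    "2-2": (0.2, 0.98),
    "3-1": (0.2, 0.36),
    "3-2": (0.2, 0.08),
}
ratios  = rule_weights(sup_cov)             # original rule weight ratios
shifted = rule_weights(sup_cov, shift=0.5)  # translated (shifted) ratios
```

With the 0.5 shift, even a rule whose coverage is exactly 0 keeps a nonzero weight.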
According to the idea of the invention, the higher the weight ratio of the original rule weight corresponding to a rule item set (1-1, 2-1, 2-2, 3-1, or 3-2), the higher the relevance of that rule item set's classification range to the defects. For example, in table 10, the weight ratio (0.48) of the original rule weight corresponding to rule item set 2-2 is the highest. Therefore, as can be seen from table 8, if the processing features of a certain product uMP meet the conditions that the feed temperature is less than or equal to 430 ℃ and the upper mold temperature is less than or equal to 155 ℃, that product uMP has a higher chance of being defective.
The above illustrates how the prediction model is built. The following describes how the established prediction model is used to predict whether a product uMP requires quality detection; and, if a product uMP is identified as requiring quality detection, how the prediction model can also be used to analyze the defect sources in the manufacturing equipment 23, providing defect source analysis that assists the manufacturer in determining which processing step is more likely to cause a quality defect in the product uMP.
Please refer to fig. 16, which is a schematic diagram illustrating the prediction of the quality of a product according to a prediction model and the processing characteristics of the product when the product quality monitoring system is in a model using mode (uM). Please refer to fig. 6B and fig. 16.
The production material 21 is processed by the production equipment 23 into the product uMP. In the production flow of the product uMP, the sensor 241 provides the sensed raw production parameters uMP_origPP to the data preprocessing device 243. The data preprocessing device 243 converts the data to generate the processing feature data set of the product uMP. In the model using mode uM, the model using device 247 analyzes and compares the processing feature data set of the product uMP against the prediction model established from the products bMP. Thereafter, the model using device 247 generates a quality prediction result for the product uMP and generates defect source analysis information for the product uMP.
When the quality prediction result of the product uMP shows that the product uMP does not need to be sampled for quality detection, the product uMP can be directly shipped. When the quality prediction result of the product uMP shows that the product uMP needs quality detection sampling, the quality detection device 248 performs quality detection on the product uMP. In addition, the defect source analysis information may be used as a reference for a user to inspect the manufacturing equipment 23.
Please refer to fig. 17, which is a block diagram of a model using apparatus. The model using device 247 includes a product feature receiving module 2471, a classification rule receiving module 2473, a product feature and classification rule comparing module 2475, a similarity calculating module 2477, a quality predicting module 2479 and a defect source tracking module 2470. The product feature and classification rule comparing module 2475 is electrically connected to the similarity calculating module 2477, the product feature receiving module 2471 and the classification rule receiving module 2473. The quality prediction module 2479 is electrically connected to the similarity calculation module 2477 and the defect source tracking module 2470.
The product characteristic receiving module 2471 is electrically connected to the data preprocessing unit 243 and receives the processed characteristic data set of the product uMP from the data preprocessing unit 243. On the other hand, the classification rule receiving module 2473 is electrically connected to the model establishing device 245, and receives the item set classification range corresponding to the rule item set and the weight proportion of the original rule weight corresponding to the rule item set from the model establishing device 245. The processing characteristics of the product uMP received by the product characteristics receiving module 2471 are shown in table 11.
In practice, the number of products uMP that the model-using device 247 needs to analyze is quite large, and for convenience of illustration, only products uMP 1-uMP 6 are used as examples, and it is assumed that the processing characteristics include the feeding temperature, the pressure maximum, the upper mold temperature, and the lower mold temperature.
TABLE 11
[Table 11: the feed temperature, pressure maximum, upper mold temperature, and lower mold temperature measured for each of the products uMP1 to uMP6]
Upon receiving the processing characteristics (e.g., feed temperature, pressure maximum, upper mold temperature, and lower mold temperature) of each of the products uMP1 to uMP6 listed in table 11, the product feature receiving module 2471 first converts the processing characteristics of the products uMP1 to uMP6 into frequent item set form. Next, the item set classification ranges of rule item sets 1-1, 2-1, 2-2, 3-1, and 3-2 are compared with the processing characteristics of the products uMP1 to uMP6. That is, it is confirmed whether the feed temperature, pressure maximum, upper mold temperature, and lower mold temperature of each of the products uMP1 to uMP6 meet the item set classification ranges of rule item sets 1-1, 2-1, 2-2, 3-1, and 3-2 listed in table 8.
Table 12 is a confirmation table showing whether each product conforms to the processing rules of rule item sets 1-1, 2-1, 2-2, 3-1, and 3-2, where a Y indicates that the processing features of the product uMP fall within the item set classification range of the corresponding rule item set. The product feature and classification rule comparison module 2475 performs the comparisons shown in Table 12.
TABLE 12
(Table 12, reconstructed from the description below; the original is rendered as an image.)

Product   1-1   2-1   2-2   3-1   3-2
uMP1             Y            Y
uMP2                                Y
uMP3       Y     Y     Y
uMP4                          Y     Y
uMP5       Y                        Y
uMP6       Y     Y
In Table 12, the processing features of product uMP1 conform to the item set classification ranges of rule item sets 2-1 and 3-1; the processing features of product uMP2 conform to the item set classification range of rule item set 3-2; the processing features of product uMP3 conform to the item set classification ranges of rule item sets 1-1, 2-1 and 2-2; the processing features of product uMP4 conform to the item set classification ranges of rule item sets 3-1 and 3-2; the processing features of product uMP5 conform to the item set classification ranges of rule item sets 1-1 and 3-2; and the processing features of product uMP6 conform to the item set classification ranges of rule item sets 1-1 and 2-1.
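The comparison performed by the product feature and classification rule comparison module 2475 can be sketched as follows. Because Tables 7, 8 and 11 are rendered as images in the original document, the classification ranges and feature values below are invented placeholders, not the patent's actual numbers; only the matching logic follows the description above.

```python
# Hypothetical sketch of the feature-vs-rule comparison of module 2475.
# Range bounds and feature values are invented placeholders.

def matches(features, item_set_ranges):
    """Return True when every feature named in the rule item set
    falls inside its classification range (inclusive)."""
    return all(lo <= features[name] <= hi
               for name, (lo, hi) in item_set_ranges.items())

# Invented example ranges for two rule item sets.
rule_item_sets = {
    "1-1": {"feed_temperature": (950, 1000)},
    "2-1": {"feed_temperature": (950, 1000), "pressure_max": (210, 240)},
}

# One invented product's processing features.
product = {"feed_temperature": 970, "pressure_max": 250,
           "upper_mold_temperature": 180, "lower_mold_temperature": 175}

matched = [rule_id for rule_id, ranges in rule_item_sets.items()
           if matches(product, ranges)]
print(matched)  # ['1-1'] — pressure_max 250 falls outside rule 2-1's range
```

Collecting one such match list per product yields exactly the Y entries of Table 12.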
After the product feature and classification rule comparison module 2475 generates the comparison results shown in Table 12, it transmits them to the similarity calculation module 2477. Next, the similarity calculation module 2477 receives from the weight calculation module 2457b the weight ratios of the original rule weights corresponding to the rule item sets, as shown in the rightmost column of Table 10, and receives the comparison results from the product feature and classification rule comparison module 2475; it uses these to calculate, for each product uMP, a defect similarity based on the weight ratios of the rule item sets the product matches.
Please refer to Table 13, which lists the defect similarities calculated by the similarity calculation module 2477 from the per-product comparison results and the weight ratios of the original rule weights corresponding to the rule item sets. Table 13 is generated from Tables 10 and 12 by filling the cells marked Y in Table 12 with the weight ratios of the original rule weights calculated in Table 10, and then calculating the defect similarity corresponding to each of products uMP1–uMP6. The defect similarity can be regarded as the degree to which the processing feature data set generated while the production equipment 23 produces the product uMP falls within the item set classification ranges corresponding to the rule item sets.
TABLE 13
(Table 13, reconstructed from the calculations below; the original is rendered as an image.)

Product   1-1    2-1    2-2    3-1    3-2    Defect similarity
uMP1             0.16          0.18          34%
uMP2                                  0.04   4%
uMP3      0.15   0.16   0.48                 79%
uMP4                           0.18   0.04   22%
uMP5      0.15                        0.04   19%
uMP6      0.15   0.16                        31%
A higher defect similarity indicates that more of the processing features generated during the production of product uMP are features that tend to make the product defective, so the product can be predicted to have a higher chance of being defective. Conversely, a lower defect similarity indicates that few of the processing features accompanying the production of product uMP tend to cause defects, so the product is predicted to have a lower chance of being defective.
As shown in Table 12, the processing features of product uMP1 conform to the item set classification ranges of rule item sets 2-1 and 3-1. Therefore, in Table 13, the defect similarity corresponding to product uMP1 is obtained from the sum of the weight ratio of the original rule weight corresponding to rule item set 2-1 (0.16) and that corresponding to rule item set 3-1 (0.18) in Table 10: (0.16+0.18) × 100% = 34%.

As shown in Table 12, the processing features of product uMP2 conform to the item set classification range of rule item set 3-2. Therefore, in Table 13, the defect similarity corresponding to product uMP2 is obtained from the weight ratio of the original rule weight corresponding to rule item set 3-2 in Table 10: 0.04 × 100% = 4%.

As shown in Table 12, the processing features of product uMP3 conform to the item set classification ranges of rule item sets 1-1, 2-1 and 2-2. Therefore, in Table 13, the defect similarity corresponding to product uMP3 is obtained by summing the weight ratios of the original rule weights corresponding to rule item set 1-1 (0.15), rule item set 2-1 (0.16), and rule item set 2-2 (0.48) in Table 10: (0.15+0.16+0.48) × 100% = 79%.

As shown in Table 12, the processing features of product uMP4 conform to the item set classification ranges of rule item sets 3-1 and 3-2. Therefore, in Table 13, the defect similarity corresponding to product uMP4 is obtained from the sum of the weight ratio corresponding to rule item set 3-1 (0.18) and that corresponding to rule item set 3-2 (0.04) in Table 10: (0.18+0.04) × 100% = 22%.

As shown in Table 12, the processing features of product uMP5 conform to the item set classification ranges of rule item sets 1-1 and 3-2. Therefore, in Table 13, the defect similarity corresponding to product uMP5 is obtained from the sum of the weight ratio corresponding to rule item set 1-1 (0.15) and that corresponding to rule item set 3-2 (0.04) in Table 10: (0.15+0.04) × 100% = 19%.

As shown in Table 12, the processing features of product uMP6 conform to the item set classification ranges of rule item sets 1-1 and 2-1. Therefore, in Table 13, the defect similarity corresponding to product uMP6 is obtained from the sum of the weight ratio corresponding to rule item set 1-1 (0.15) and that corresponding to rule item set 2-1 (0.16) in Table 10: (0.15+0.16) × 100% = 31%.
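The six calculations above all follow the same pattern: the defect similarity of a product is the sum of the weight ratios of the rule item sets it matches. A minimal sketch, using the weight ratios from Table 10 and the match results from Table 12:

```python
# Sketch of the similarity calculation module 2477.
# Weight ratios follow the rightmost column of Table 10;
# the per-product matches follow the Y entries of Table 12.
weight_ratio = {"1-1": 0.15, "2-1": 0.16, "2-2": 0.48, "3-1": 0.18, "3-2": 0.04}

matches_per_product = {
    "uMP1": ["2-1", "3-1"],
    "uMP2": ["3-2"],
    "uMP3": ["1-1", "2-1", "2-2"],
    "uMP4": ["3-1", "3-2"],
    "uMP5": ["1-1", "3-2"],
    "uMP6": ["1-1", "2-1"],
}

def defect_similarity(matched_rule_item_sets):
    # Sum of the weight ratios of every rule item set the product matches.
    return sum(weight_ratio[rid] for rid in matched_rule_item_sets)

similarity = {p: round(defect_similarity(m) * 100)
              for p, m in matches_per_product.items()}
print(similarity)
# {'uMP1': 34, 'uMP2': 4, 'uMP3': 79, 'uMP4': 22, 'uMP5': 19, 'uMP6': 31}
```

Running the sketch reproduces the defect similarities of Table 13.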
After the similarity calculation module 2477 calculates the defect similarity corresponding to the products uMP 1-uMP 6, the quality prediction module 2479 is further configured to compare the defect similarity with the defect similarity threshold, and determine whether the products need to be subjected to quality detection according to the comparison result.
Please refer to Table 14, which lists how the quality prediction module 2479 determines whether each product requires quality inspection after comparing the defect similarities shown in Table 13 with the defect similarity threshold. The quality prediction module 2479 compares the defect similarity of each of products uMP1–uMP6 with a defect similarity threshold (e.g., 30%). In practical applications, the defect similarity threshold may vary depending on the product type and yield.
TABLE 14
(Table 14, reconstructed from the description below; the original is rendered as an image.)

Product   Defect similarity   Quality inspection required
uMP1      34%                 Yes
uMP2      4%                  No
uMP3      79%                 Yes
uMP4      22%                 No
uMP5      19%                 No
uMP6      31%                 Yes
As shown in Table 14, the defect similarity of product uMP1 is 34%, that of product uMP3 is 79%, and that of product uMP6 is 31%, all of which are higher than the defect similarity threshold (30%). Therefore, the quality prediction module 2479 determines that products uMP1, uMP3 and uMP6 should be quality tested. On the other hand, the defect similarity of product uMP2 is 4%, that of product uMP4 is 22%, and that of product uMP5 is 19%, all of which are lower than the defect similarity threshold (30%). Therefore, the quality prediction module 2479 determines that none of products uMP2, uMP4 and uMP5 require quality testing.
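The thresholding step of the quality prediction module 2479 can be sketched as follows; the 30% threshold is the example value used above and would be tuned per product type and yield in practice.

```python
# Sketch of the quality prediction module 2479: flag for inspection
# every product whose defect similarity exceeds the threshold.
defect_similarity = {"uMP1": 34, "uMP2": 4, "uMP3": 79,
                     "uMP4": 22, "uMP5": 19, "uMP6": 31}
THRESHOLD = 30  # percent; tuned per product type and yield in practice

needs_inspection = sorted(p for p, s in defect_similarity.items()
                          if s > THRESHOLD)
print(needs_inspection)  # ['uMP1', 'uMP3', 'uMP6']
```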
In addition to generating the quality prediction result indicating whether a product needs to be quality tested, the model using device 247 may also use the prediction model to provide a defect source tracking function. According to an embodiment of the invention, the manufacturer may provide a root cause look-up table to the defect source tracking module 2470. The root cause look-up table represents the relationships between rule item sets 1-1, 2-1, 2-2, 3-1, 3-2 and the production equipment associated with them. In practice, the root cause look-up table may be provided by the manufacturer based on experience or statistics. Table 15 is an example of a root cause look-up table.
TABLE 15
(Table 15, reconstructed from the description below; the original is rendered as an image.)

Rule item set   Root cause (chance)
1-1             billet heating furnace 331 (100%)
2-1             billet heating furnace 331 (70%); warm forging press 333 pressure (30%)
2-2             billet heating furnace 331 (50%); warm forging press 333 heating (50%)
3-1             warm forging press 333 heating (100%)
3-2             warm forging press 333 pressure (100%)
In Table 15, the root cause of rule item set 1-1 is associated only with the billet heating furnace 331. Therefore, when the processing features of a certain product uMP fall within the feature range of rule item set 1-1 and the prediction model predicts that the product is defective, the risk that the billet heating furnace 331 caused the defect of this product uMP is 100%.

In Table 15, the root causes of rule item set 2-1 are the billet heating furnace 331 and the pressure of the warm forging press 333, with chances of 70% and 30%, respectively. Therefore, when the processing features of a certain product uMP fall within the feature range of rule item set 2-1 and the prediction model predicts that the product is defective, the risk that the billet heating furnace 331 caused the defect of this product uMP is 70%, and the risk that the pressure of the warm forging press 333 caused the defect is 30%.

In Table 15, the root causes of rule item set 2-2 are the billet heating furnace 331 and the heating of the warm forging press 333, with chances of 50% each. Therefore, when the processing features of a certain product uMP fall within the feature range of rule item set 2-2 and the prediction model predicts that the product is defective, the risk that the billet heating furnace 331 caused the defect of this product uMP is 50%, and the risk that the heating of the warm forging press 333 caused the defect is 50%.

In Table 15, the root cause of rule item set 3-1 is related only to the heating of the warm forging press 333. Therefore, when the processing features of a certain product uMP fall within the feature range of rule item set 3-1 and the prediction model predicts that the product is defective, the risk that the heating of the warm forging press 333 caused the defect of this product uMP is 100%.

In Table 15, the root cause of rule item set 3-2 is related only to the pressure of the warm forging press 333. Therefore, when the processing features of a certain product uMP fall within the feature range of rule item set 3-2 and the prediction model predicts that the product is defective, the risk that the pressure of the warm forging press 333 caused the defect of this product uMP is 100%.
In Table 14, product uMP3 has the highest defect similarity (79%). Therefore, taking product uMP3 as an example, the following describes how the defect source tracking module 2470 uses the root cause look-up table shown in Table 15 to further analyze which part of the production flow of product uMP3 is more likely to be the defect source.
As shown in Table 12, product uMP3 is associated with rule item sets 1-1, 2-1 and 2-2. Therefore, for convenience of explanation, the weight ratios of the original rule weights corresponding to these rule item sets shown in Table 10 and the relationships between the rule item sets and the root causes shown in Table 15 are combined in Table 16.
TABLE 16
(Table 16, reconstructed from Tables 10 and 15; the original is rendered as an image.)

Rule item set   Weight ratio   Root cause (chance)
1-1             0.15           billet heating furnace 331 (100%)
2-1             0.16           billet heating furnace 331 (70%); warm forging press 333 pressure (30%)
2-2             0.48           billet heating furnace 331 (50%); warm forging press 333 heating (50%)
From Table 16, the sum of the weight ratios of the original rule weights corresponding to the rule item sets is 0.15 + 0.16 + 0.48 = 0.79. In addition, as can be seen from Table 15, the root causes that could lead to product uMP3 being predicted as defective include the billet heating furnace 331, the pressure of the warm forging press 333, and the heating of the warm forging press 333. Therefore, the defect source tracking module 2470 determines the respective proportions with which the billet heating furnace 331, the pressure of the warm forging press 333, and the heating of the warm forging press 333 may have caused product uMP3 to be predicted as defective.
Please refer to Table 17, which lists the root causes that may cause product uMP3 to require quality inspection and their respective proportions. For each root cause that could make product uMP3 defective, the table calculates the chance that the root cause affects product uMP3.
TABLE 17
(Table 17, reconstructed from the description below; the original is rendered as an image.)

Root cause                        Rule item set   Chance
billet heating furnace 331        1-1             0.15 × 1
billet heating furnace 331        2-1             0.16 × 0.7
billet heating furnace 331        2-2             0.48 × 0.5
warm forging press 333 pressure   2-1             0.16 × 0.3
warm forging press 333 heating    2-2             0.48 × 0.5
According to Table 17, a failure of the billet heating furnace 331 may cause product uMP3 to be predicted (classified) as defective based on rule item set 1-1 with a chance of 0.15 × 1, based on rule item set 2-1 with a chance of 0.16 × 0.7, and based on rule item set 2-2 with a chance of 0.48 × 0.5. The sum of these three results, divided by the sum of the weight ratios of the original rule weights corresponding to the rule item sets (0.79), gives the probability that product uMP3 is classified as defective due to a failure of the billet heating furnace 331, which is 0.63.
According to Table 17, an abnormal pressure of the warm forging press 333 may cause product uMP3 to be classified as defective based on rule item set 2-1 with a chance of 0.16 × 0.3. Dividing this chance (0.16 × 0.3) by the sum of the weight ratios of the original rule weights corresponding to the rule item sets (0.79) gives the probability that product uMP3 is classified as defective due to the abnormal pressure of the warm forging press 333, which is 0.06.
According to Table 17, abnormal heating of the warm forging press 333 may cause product uMP3 to be classified as defective based on rule item set 2-2 with a chance of 0.48 × 0.5. That is, dividing this chance (0.48 × 0.5) by the sum of the weight ratios of the original rule weights corresponding to the rule item sets (0.79) gives the probability that product uMP3 is classified as defective due to abnormal heating of the warm forging press 333, which is 0.31.
For comparison, the calculation results of Table 17 are summarized in Table 18. According to the present invention, the defect source tracking module 2470 may present the results to the user in the form of a pie chart or the like for reference. The manufacturer can thus grasp which part of the production equipment 23 should be repaired without complicated analysis.
TABLE 18

Product   Billet heating furnace   Warm forging press pressure   Warm forging press heating
uMP3      63%                      6%                            31%
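The attribution of Tables 17 and 18 can be reproduced with a short sketch. Equipment names are abbreviated here for readability, and the computed values match Table 18 up to rounding (the patent reports 63%, 6% and 31%):

```python
# Sketch of the defect source tracking module 2470 for product uMP3,
# combining the weight ratios of Table 10 with the root cause look-up
# table of Table 15. Equipment names are shortened for readability.
weight_ratio = {"1-1": 0.15, "2-1": 0.16, "2-2": 0.48}  # rule item sets matched by uMP3
root_cause = {  # per rule item set: share attributed to each root cause (Table 15)
    "1-1": {"billet heating furnace": 1.0},
    "2-1": {"billet heating furnace": 0.7, "press pressure": 0.3},
    "2-2": {"billet heating furnace": 0.5, "press heating": 0.5},
}

total = sum(weight_ratio.values())  # 0.15 + 0.16 + 0.48 = 0.79

# Accumulate each root cause's share of the matched rule weights,
# normalized by the total weight ratio of the matched rule item sets.
attribution = {}
for rule_id, causes in root_cause.items():
    for cause, share in causes.items():
        attribution[cause] = (attribution.get(cause, 0.0)
                              + weight_ratio[rule_id] * share / total)

for cause, probability in attribution.items():
    print(f"{cause}: {probability:.2f}")
```

The three probabilities sum to 1, so the result can be rendered directly as the pie chart mentioned above.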
After the prediction model is established, its prediction performance can be further evaluated to confirm whether the prediction results still match the characteristics of the production flow. The exact timing for evaluating the applicability of the prediction model is not limited and can be adjusted according to the manufacturer's needs and the product characteristics, for example, at fixed intervals or after a certain number of products have been produced.
Please refer to fig. 18, which is a schematic diagram illustrating the product quality monitoring system in the model evaluation mode (eM), in which it checks whether the prediction model needs to be updated. Please refer to fig. 6C and fig. 18. While the production equipment 23 produces the product eMP, the sensor 241 generates the original production parameters eMP_origPP corresponding to the product eMP. The data preprocessing device 243 receives the original production parameters eMP_origPP and converts them into the processing features of the product eMP. The model using device 247 generates a quality prediction result corresponding to the product eMP based on the processing features and the prediction model. On the other hand, the quality detection device 248 performs quality detection on the product eMP to generate a quality detection data set.
The model evaluation device 249 receives the quality prediction result of the product eMP from the model using device 247 and the quality detection data set of the product eMP from the quality detection device 248, and compares the two to generate a model evaluation result. According to the model evaluation result, it then determines whether the model building device 245 needs to be notified to re-establish the prediction model, or the model using device 247 is notified to continue using the prediction model.
Please refer to fig. 19, which is a flowchart illustrating the product quality monitoring system in the model evaluation mode (eM). First, an update counter of the prediction model is initialized (step S801). Then, the sensor senses the production flow of the product eMP to generate the original production parameters eMP_origPP (step S802); the data preprocessing device 243 preprocesses the original production parameters eMP_origPP to generate the processing feature data set of the product eMP (step S803); and the model using device 247, taking the processing feature data set of the product eMP as the input of the prediction model, uses the prediction model to generate a quality prediction result corresponding to the product eMP (step S804). On the other hand, the quality detection device 248 inspects the product eMP and generates the corresponding quality detection data (step S805).
After the quality prediction result and the quality detection data set are generated, the model evaluation device 249 compares the quality prediction result of the product eMP with the quality detection data to check whether they diverge (step S807). If the comparison result of step S807 shows that the quality prediction result still matches the quality detection result, the model evaluation device 249 notifies the model using device 247 that the prediction model can still be used (step S817).
If the comparison result in step S807 is a divergence, it is first determined whether the prediction model update counter has reached a predetermined update time threshold (e.g., two) (step S811). If the determination result in step S811 is negative, the model evaluation device 249 notifies the model building device 245 that the prediction model needs to be re-established, and increments the prediction model update counter (step S813). After the model building device 245 rebuilds the prediction model, execution repeats from step S804. On the other hand, if the determination result in step S811 is positive, the products eMP used for evaluating the prediction model are newly selected (step S815), and the flow of fig. 19 is executed again.
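The decision logic of steps S807–S817 can be sketched as follows. The function below is a hypothetical stand-in for the model evaluation device 249; resetting the counter after reselecting products reflects the flow returning to step S801.

```python
# Sketch of the model evaluation decision of Fig. 19. The string actions
# are hypothetical stand-ins for notifying devices 245 and 247.
MAX_UPDATES = 2  # predetermined update time threshold (step S811)

def evaluate(prediction, inspection, update_counter):
    """Return the action of the model evaluation device 249 and the
    new value of the prediction model update counter."""
    if prediction == inspection:
        return "keep model", update_counter          # step S817
    if update_counter < MAX_UPDATES:
        return "rebuild model", update_counter + 1   # step S813
    return "reselect products", 0                    # steps S815 then S801

action, counter = evaluate("defect", "defect", 0)
print(action)  # keep model
action, counter = evaluate("defect", "good", 0)
print(action)  # rebuild model
action, counter = evaluate("defect", "good", 2)
print(action)  # reselect products
```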
As mentioned above, in the model building mode (bM), the product quality monitoring system 24 of the present disclosure first obtains the processing features generated during the production of the products bMP and the quality detection data set of those products, and builds a prediction model by analyzing the correlation between the quality of the products bMP and their processing features. Next, in the model using mode (uM), it obtains the processing features accompanying the production of the products uMP; when the model predicts that some products uMP may be defective, further quality inspection is performed on the products uMP with a higher defect risk, and defect source analysis information is provided so that the manufacturer can maintain or repair the production equipment 23. In addition, the product quality monitoring system 24 also provides a model evaluation mode (eM) to maintain the prediction quality of the prediction model.
It should be noted that although the above description uses the production of bicycle parts as an example, the product quality monitoring system 24 of the present disclosure can be applied to production plants in various manufacturing industries. Because the production parameters obtainable from the production equipment 23 differ among different types of production plants, the step in which the data preprocessing device 243 generates the processing features needs to be adapted to the characteristics of the production plant and the product type. However, once the production parameters are converted into processing features, the subsequent model building device 245, model using device 247 and model evaluation device 249 operate in a similar manner. Thus, the product quality monitoring system 24 of the present invention may be applied to a variety of manufacturing industries.
In summary, the model establishment mode (bM), the model usage mode (uM), and the model evaluation mode (eM) provided by the product quality monitoring system 24 of the present invention enable the prediction model to maintain the accuracy of the predicted product quality. Since the prediction model can be used to stably predict the quality of the product, the manufacturer can greatly save the cost and time required for quality detection of the product quality.
Those of ordinary skill in the art will recognize that, in the above description, the various logic blocks, modules, circuits, and method steps can be implemented by electronic hardware, computer software, or a combination of the two, and that the connections between the various implementations are not limited to terms such as signal connection, coupling, or electrical connection. Such terms are used only to describe that, when the logic blocks, modules, circuits, and method steps are implemented, signals can be exchanged directly or indirectly by different means, such as wired electronic signals, wireless electromagnetic signals, and optical signals, so as to exchange and transmit signals, data, and control information. Therefore, the terms used in the specification do not limit the connections of the present application, and different connection modes do not depart from the scope of the present application.
In summary, although the present invention has been described with reference to the above embodiments, the present invention is not limited thereto. Those skilled in the art can make various changes and modifications without departing from the spirit and scope of the invention. Therefore, the protection scope of the present invention should be determined by the appended claims.

Claims (13)

1. A device for creating a prediction model by analyzing a quality inspection dataset generated by inspecting a plurality of products and a processing feature dataset associated with the production of the plurality of products, wherein the quality inspection dataset comprises K quality inspection data and the processing feature dataset comprises K processing feature data, the device comprising:
a strong classifier generating module, comprising:
a first generator for generating a first candidate strong classifier group comprising K first candidate strong classifiers according to a first classifier strategy, the processing feature data set and the quality detection data set, wherein a kth first candidate strong classifier of the K first candidate strong classifiers is generated according to (K-1) parts of quality detection data of the K parts of quality detection data, (K-1) parts of processing feature data of the K parts of processing feature data and the first classifier strategy, wherein K is a positive integer;
a second generator for generating a second candidate strong classifier group comprising K second candidate strong classifiers according to a second classifier strategy, the processed feature data set and the quality detection data set, wherein a kth second candidate strong classifier of the K second candidate strong classifiers is generated according to the (K-1) parts of quality detection data, the (K-1) parts of processed feature data and the second classifier strategy;
a third generator for generating a third candidate strong classifier group comprising K third candidate strong classifiers according to a third classifier strategy, the processed feature data set and the quality detection data set, wherein a kth third candidate strong classifier of the K third candidate strong classifiers is generated according to the (K-1) parts of quality detection data, the (K-1) parts of processed feature data and the third classifier strategy; and
a primary selection module electrically connected to the first generator, the second generator and the third generator,
wherein when the primary selection module determines that the first candidate strong classifier group does not satisfy an initial selection condition, the first generator updates the K first candidate strong classifiers after modifying at least one of a plurality of first model structure parameters associated with the first classifier strategy;

when the primary selection module determines that the second candidate strong classifier group does not satisfy the initial selection condition, the second generator updates the K second candidate strong classifiers after modifying at least one of a plurality of second model structure parameters related to the second classifier strategy;

when the primary selection module determines that the third candidate strong classifier group does not satisfy the initial selection condition, the third generator updates the K third candidate strong classifiers after modifying at least one of a plurality of third model structure parameters related to the third classifier strategy.
2. The device of claim 1, wherein,
when the first candidate strong classifier group satisfies the initial selection condition, the first generator generates a first initially selected strong classifier according to the first classifier strategy and a check training data;

when the second candidate strong classifier group satisfies the initial selection condition, the second generator generates a second initially selected strong classifier according to the second classifier strategy and the check training data; and

when the third candidate strong classifier group satisfies the initial selection condition, the third generator generates a third initially selected strong classifier according to the third classifier strategy and the check training data.
3. The device of claim 2, wherein the check training data is randomly selected from the processing feature data set, and the processing feature data set is composed of the check training data and a check test data.
4. The device of claim 3, wherein said strong classifier generating module further comprises:

a check module electrically connected to the first generator, the second generator and the third generator, for selecting two of the first initially selected strong classifier, the second initially selected strong classifier and the third initially selected strong classifier as a first check strong classifier and a second check strong classifier according to the check test data and the quality detection data set.
5. The device of claim 4, wherein the check module comprises:
a verification sequence calculation module, electrically connected to the first generator, the second generator and the third generator, for generating a first verification sequence corresponding to the first initially selected strong classifier, a second verification sequence corresponding to the second initially selected strong classifier and a third verification sequence corresponding to the third initially selected strong classifier.
6. The device of claim 5, wherein the check module further comprises:
a correlation calculation module, electrically connected to the verification sequence calculation module, for calculating a first verification sequence correlation coefficient between the first verification sequence and the second verification sequence, calculating a second verification sequence correlation coefficient between the first verification sequence and the third verification sequence, and calculating a third verification sequence correlation coefficient between the second verification sequence and the third verification sequence; and
a strong classifier selection module, electrically connected to the correlation calculation module, for determining the first and second check strong classifiers according to the comparison of the first, second and third verification sequence correlation coefficients.
7. The device of claim 4, further comprising:

a strong classifier analyzing module electrically connected to the check module, for obtaining a plurality of classification rules and a plurality of rule weights corresponding to the classification rules after reading the paths of a plurality of first weak classifiers contained in the first check strong classifier and reading the paths of a plurality of second weak classifiers contained in the second check strong classifier.
8. The device of claim 1, wherein said first classifier strategy, said second classifier strategy and said third classifier strategy employ a binary tree structure, wherein each of said K first candidate strong classifiers comprises T1 first weak classifiers, each of said K second candidate strong classifiers comprises T2 second weak classifiers, and each of said K third candidate strong classifiers comprises T3 third weak classifiers.
9. The device of claim 8, wherein T1 is equal to T2.
10. The device of claim 8, wherein each of the T1 first weak classifiers has a first depth, each of the T2 second weak classifiers has a second depth, and each of the T3 third weak classifiers has a third depth, wherein the first model structure parameters comprise T1 and the first depth, the second model structure parameters comprise T2 and the second depth, and the third model structure parameters comprise T3 and the third depth.
11. The device of claim 10, wherein the first depth is equal to the second depth.
12. A method for building a prediction model by analyzing a quality detection data set generated by detecting a plurality of products and a processing feature data set related to the production of the plurality of products, wherein the quality detection data set comprises K quality detection data, and the processing feature data set comprises K processing feature data, the method comprising the following steps:
generating a first candidate strong classifier group comprising K first candidate strong classifiers according to a first classifier strategy, the processed feature data set and the quality detection data set, wherein a kth first candidate strong classifier of the K first candidate strong classifiers is generated according to (K-1) parts of quality detection data of the K parts of quality detection data, (K-1) parts of processed feature data of the K parts of processed feature data and the first classifier strategy, wherein K is a positive integer;
generating a second candidate strong classifier group comprising K second candidate strong classifiers according to a second classifier strategy, the processed feature data set and the quality detection data set, wherein a kth second candidate strong classifier of the K second candidate strong classifiers is generated according to the (K-1) quality detection data, the (K-1) processed feature data and the second classifier strategy;
generating a third candidate strong classifier group comprising K third candidate strong classifiers according to a third classifier strategy, the processed feature data set and the quality detection data set, wherein a kth third candidate strong classifier of the K third candidate strong classifiers is generated according to the (K-1) parts of quality detection data, the (K-1) parts of processed feature data and the third classifier strategy; and
updating the K first candidate strong classifiers after modifying at least one of a plurality of first model structure parameters related to the first classifier strategy when the first candidate strong classifier group does not satisfy a primary selection condition;
updating the K second candidate strong classifiers after modifying at least one of a plurality of second model structure parameters associated with the second classifier strategy when the second candidate strong classifier group does not satisfy the primary selection condition;
updating the K third candidate strong classifiers after modifying at least one of a plurality of third model structure parameters associated with the third classifier strategy when the third candidate strong classifier group does not satisfy the initial selection condition.
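The generate-then-regenerate loop of claim 12 can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: `ToyStrategy`, `build_candidate_group`, `build_groups`, the majority-vote classifier, and the `depth` parameter are all invented names standing in for a classifier strategy, its model structure parameters, and the resulting strong classifiers; each of the K pieces of data is treated here as a share (list) of samples.

```python
# Sketch of the claim-12 flow: per strategy, build K candidate strong
# classifiers (the k-th trained on the (K-1) shares excluding share k),
# then modify a model-structure parameter and regenerate the group while
# it fails the initial-selection condition. All names are illustrative.

class ToyStrategy:
    """Stand-in for one classifier strategy (e.g., one model family)."""
    def __init__(self, name, depth=1):
        self.name, self.depth = name, depth

    def fit(self, xs, ys):
        # Trivial "strong classifier": always predict the majority label.
        majority = max(set(ys), key=ys.count)
        return lambda x: majority

    def modify_structure_parameters(self):
        # Stand-in for modifying at least one model structure parameter
        # (e.g., tree depth) before regenerating the candidate group.
        self.depth += 1


def build_candidate_group(strategy, feature_shares, label_shares):
    """The k-th candidate strong classifier is fitted on the (K-1)
    shares that exclude share k (leave-one-share-out)."""
    K = len(label_shares)
    group = []
    for k in range(K):
        xs = [x for i in range(K) if i != k for x in feature_shares[i]]
        ys = [y for i in range(K) if i != k for y in label_shares[i]]
        group.append(strategy.fit(xs, ys))
    return group


def build_groups(strategies, feature_shares, label_shares,
                 passes_initial_selection, max_rounds=10):
    """Regenerate each strategy's K candidates, after modifying a
    structure parameter, until the group passes initial selection."""
    groups = []
    for strategy in strategies:
        group = build_candidate_group(strategy, feature_shares, label_shares)
        rounds = 0
        while not passes_initial_selection(group) and rounds < max_rounds:
            strategy.modify_structure_parameters()
            group = build_candidate_group(strategy, feature_shares,
                                          label_shares)
            rounds += 1
        groups.append(group)
    return groups
```

With three strategies (as in the claims), `build_groups` returns three candidate strong classifier groups of K classifiers each; the `max_rounds` guard is an added safeguard the claims do not mention.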
13. A product quality monitoring system, comprising:
a quality detection device for detecting a plurality of products and generating a quality detection data set, wherein the quality detection data set comprises K pieces of quality detection data;
a data preprocessing device for receiving a plurality of production parameters related to production of the plurality of products and generating a processing feature data set according to the production parameters, wherein the processing feature data set comprises K pieces of processing feature data; and
a model building apparatus, comprising:
a strong classifier generating module, comprising:
a first generator for generating a first candidate strong classifier group comprising K first candidate strong classifiers according to a first classifier strategy, the processing feature data set and the quality detection data set, wherein a k-th first candidate strong classifier of the K first candidate strong classifiers is generated according to (K-1) pieces of quality detection data of the K pieces of quality detection data, (K-1) pieces of processing feature data of the K pieces of processing feature data and the first classifier strategy, wherein k is a positive integer;
a second generator for generating a second candidate strong classifier group comprising K second candidate strong classifiers according to a second classifier strategy, the processing feature data set and the quality detection data set, wherein a k-th second candidate strong classifier of the K second candidate strong classifiers is generated according to the (K-1) pieces of quality detection data, the (K-1) pieces of processing feature data and the second classifier strategy;
a third generator for generating a third candidate strong classifier group comprising K third candidate strong classifiers according to a third classifier strategy, the processing feature data set and the quality detection data set, wherein a k-th third candidate strong classifier of the K third candidate strong classifiers is generated according to the (K-1) pieces of quality detection data, the (K-1) pieces of processing feature data and the third classifier strategy; and
an initial selection module electrically connected to the first generator, the second generator and the third generator, wherein
when the initial selection module determines that the first candidate strong classifier group does not satisfy an initial selection condition, the first generator updates the K first candidate strong classifiers after modifying at least one of a plurality of first model structure parameters related to the first classifier strategy;
when the initial selection module determines that the second candidate strong classifier group does not satisfy the initial selection condition, the second generator updates the K second candidate strong classifiers after modifying at least one of a plurality of second model structure parameters related to the second classifier strategy;
when the initial selection module determines that the third candidate strong classifier group does not satisfy the initial selection condition, the third generator updates the K third candidate strong classifiers after modifying at least one of a plurality of third model structure parameters related to the third classifier strategy.
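The claims leave the initial selection condition itself unspecified. One plausible reading, sketched below purely as an assumption, scores the k-th candidate strong classifier on the k-th data share it was not trained on and accepts the group when the mean held-out accuracy clears a threshold; `held_out_accuracy`, `passes_initial_selection`, and the 0.8 threshold are illustrative names and values, not taken from the patent.

```python
# Hypothetical initial-selection condition: evaluate each candidate
# strong classifier on the one data share excluded from its training,
# and accept the group when the mean held-out accuracy is high enough.
# Each classifier is any callable mapping a feature vector to a label.

def held_out_accuracy(classifier, xs, ys):
    """Fraction of held-out samples the classifier labels correctly."""
    correct = sum(1 for x, y in zip(xs, ys) if classifier(x) == y)
    return correct / len(ys)

def passes_initial_selection(group, feature_shares, label_shares,
                             threshold=0.8):
    """The k-th classifier is scored on share k, which it never saw."""
    scores = [held_out_accuracy(clf, feature_shares[k], label_shares[k])
              for k, clf in enumerate(group)]
    return sum(scores) / len(scores) >= threshold
```

When a group fails this check, the corresponding generator would modify a model structure parameter and regenerate all K candidates, as recited in the claim.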
CN202010058246.XA 2019-12-05 2020-01-19 Device and method for establishing prediction model and product quality monitoring system Active CN112926760B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW108144495 2019-12-05
TW108144495A TWI709054B (en) 2019-12-05 2019-12-05 Building device and building method of prediction model and monitoring system for product quality

Publications (2)

Publication Number Publication Date
CN112926760A CN112926760A (en) 2021-06-08
CN112926760B true CN112926760B (en) 2022-08-09

Family

ID=74202245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010058246.XA Active CN112926760B (en) 2019-12-05 2020-01-19 Device and method for establishing prediction model and product quality monitoring system

Country Status (2)

Country Link
CN (1) CN112926760B (en)
TW (1) TWI709054B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI734641B (en) * 2020-11-06 2021-07-21 財團法人金屬工業研究發展中心 Method for predicting forging stage and system for designing forging using thereof
CN113421264B (en) * 2021-08-24 2021-11-30 Shenzhen Xinrun Fulian Digital Technology Co., Ltd. Wheel hub quality detection method, device, medium, and computer program product

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105745659A (en) * 2013-09-16 2016-07-06 Biodesix Inc. Classifier generation method using combination of mini-classifiers with regularization and uses thereof
CN106611233A (en) * 2015-10-27 2017-05-03 Institute for Information Industry Power consumption estimation system and power consumption estimation method suitable for processing machine
CN109635830A (en) * 2018-10-24 2019-04-16 Jilin University Screening method for valid data for estimating vehicle mass

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10430719B2 (en) * 2014-11-25 2019-10-01 Stream Mosaic, Inc. Process control techniques for semiconductor manufacturing processes
CN104615122B (en) * 2014-12-11 2018-04-06 Shenzhen Yongda Electronic Information Co., Ltd. Industrial control signal detection system and detection method
US10726377B2 (en) * 2015-12-29 2020-07-28 Workfusion, Inc. Task similarity clusters for worker assessment
SG11201900220RA (en) * 2016-07-18 2019-02-27 Nantomics Inc Distributed machine learning systems, apparatus, and methods
EP3594749A1 (en) * 2018-07-10 2020-01-15 ASML Netherlands B.V. Method to label substrates based on process parameters

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Multi-relational multi-classification model for imbalanced data; Yang Hebiao et al.; Computer Engineering; 2010-10-20 (No. 20); full text *

Also Published As

Publication number Publication date
TWI709054B (en) 2020-11-01
CN112926760A (en) 2021-06-08
TW202123054A (en) 2021-06-16

Similar Documents

Publication Publication Date Title
CN112926760B (en) Device and method for establishing prediction model and product quality monitoring system
CN110648305B (en) Industrial image detection method, system and computer readable recording medium
JP7102941B2 (en) Information processing methods, information processing devices, and programs
JP4641537B2 (en) Data classification method and apparatus
CN112148722B (en) Monitoring data abnormity identification and processing method and system
EP4068140A1 (en) Method and system for optimizing a simulation model using machine learning
US20230152786A1 (en) Industrial equipment operation, maintenance and optimization method and system based on complex network model
KR20220061360A (en) Die-casting product defect detection method and system based on deep learning anomaly detection
Pan et al. Unsupervised root-cause analysis for integrated systems
CN113435699A (en) Intelligent quality control method and system
CN117196405B (en) Quality evaluation method and system for steel industry production data
CN113033079B (en) Chemical fault diagnosis method based on unbalance correction convolutional neural network
CN117557827A (en) Plate shape anomaly detection method based on self-coding cascade forests
US20200387148A1 (en) Test time reduction for manufacturing processes by substituting a test parameter
KR20110070411A (en) Failure recognition system
Fallah Nezhad et al. Economic design of acceptance sampling plans based on conforming run lengths using loss functions
US20230021965A1 (en) Methods and systems for assessing printed circuit boards
Galan-Marin et al. Improving neural networks for mechanism kinematic chain isomorphism identification
CN113586554A (en) Hydraulic motor detection system and method
CN112766410A (en) Rotary kiln firing state identification method based on graph neural network feature fusion
JP7345006B1 (en) Learning model generation method and testing device
CN112948203B (en) Elevator intelligent inspection method based on big data
CN116629240B (en) Command checking and error correcting method for intelligent matrix type medical cleaning system
KR102667862B1 Heavy electrical equipment monitoring system using information visualization and method therefor
CN117852116B (en) Method for constructing digital twin model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant