CN117876798A - Method, system, equipment and storage medium for detecting faceting defect of engine - Google Patents


Info

Publication number
CN117876798A
CN117876798A
Authority
CN
China
Prior art keywords
regression
quality score
engine
classification
boundary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410268717.8A
Other languages
Chinese (zh)
Inventor
郑宏维
肖南峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202410268717.8A
Publication of CN117876798A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/766 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using regression, e.g. by projecting features on hyperplanes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Quality & Reliability (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method, a system, equipment and a storage medium for detecting faceting defects of an engine, wherein the method comprises the following steps: collecting a data set for engine faceting defect detection; extracting features of the images in the data set through a Faster-RCNN network to obtain a feature map; decoupling the feature map through a decoupling module to obtain classification features and regression features; performing a convolution operation on the classification features to obtain classification scores; obtaining a quality score and a prediction value of the prediction frame boundary according to the regression features; combining the classification score and the quality score to obtain a predicted quality score; and detecting the prediction value of the prediction frame boundary according to the predicted quality score to obtain the target prediction frame of the engine faceting defect. According to the invention, a branch is newly added on the basis of the Faster-RCNN network for predicting the quality score of the pre-selected frame, so that the decoupling degree between the classification features and the regression features is increased, and the quality score obtained by the new branch is used to select the target prediction frame in the NMS stage, thereby improving the detection precision.

Description

Method, system, equipment and storage medium for detecting faceting defect of engine
Technical Field
The invention relates to the field of target detection, in particular to an engine faceting defect detection method, system, equipment and storage medium based on an improved Faster-RCNN.
Background
Existing target detection algorithms are numerous and mainly fall into two categories, two-stage and single-stage: single-stage algorithms are faster, but their detection accuracy is often lower than that of two-stage algorithms. Faster-RCNN is a landmark two-stage target detection algorithm; it improves on RCNN and has good detection performance.
However, in most industries today (especially the automotive field), detection of engine faceting defects is still done manually. Manual defect detection is inefficient and inaccurate, so it is desirable to detect engine faceting defects with a target detection algorithm.
However, the existing target detection algorithm has the following technical problems:
(1) The conventional Faster-RCNN pipeline feeds the convolved feature map directly into the RPN, where the classification features and the regression features used to predict the pre-selected frame boundaries are insufficiently decoupled.
(2) The traditional Faster-RCNN takes only the classification score as the final score of each pre-selected frame in the NMS stage, without considering the localization precision of the pre-selected frame at all.
Because of these problems, the detection accuracy of existing target detection algorithms is relatively low, particularly for engine faceting defect detection.
Accordingly, there is a need for further improvements and upgrades in the art.
Disclosure of Invention
The invention aims to provide an engine faceting defect detection method that detects engine faceting defects through a higher-precision target detection algorithm.
In order to achieve the above object, it is necessary to provide a method, a system, an apparatus and a storage medium for detecting a faceting defect of an engine.
In a first aspect, the present invention provides a method for detecting faceting defects of an engine, the method comprising the steps of:
collecting a data set for detecting facet defects of an engine;
extracting features of the images in the data set through a Faster-RCNN network to obtain a feature map;
decoupling the feature map through a decoupling module to obtain classification features and regression features;
performing convolution operation on the classification features to obtain classification scores;
obtaining a quality score and a prediction value of a prediction frame boundary according to the regression characteristics;
combining the classification score and the quality score to obtain a predicted quality score;
and detecting the predicted value of the boundary of the predicted frame according to the predicted quality score to obtain a target predicted frame of the faceted defect of the engine.
Further, the acquiring a data set for engine faceting defect detection includes:
shooting a plurality of groups of images of the faceted engine through a sensor, and forming a data set by the plurality of groups of images; the sensor comprises a color camera and a TOF depth camera;
and performing supervised data enhancement on the images in the dataset.
Further, the extracting features of the images in the data set through the Faster-RCNN network to obtain a feature map includes:
extracting original features of the images in the data set through the Faster-RCNN network to obtain 4 original feature maps;
and splicing the 4 original feature maps to obtain the feature map.
Further, the obtaining a quality score and a prediction value of a prediction frame boundary according to the regression feature includes:
obtaining a prediction value of a prediction frame boundary according to the regression characteristics;
according to the prediction value of the prediction frame boundary, calculating a first boundary regression quality value through a centerness function and a second boundary regression quality value through a GIoU function, respectively;
and obtaining the quality score according to the first boundary regression quality value and the second boundary regression quality value.
Further, the quality score is calculated according to the following formula:
p_bbox(x1,y1,x2,y2) = 0.5 * centerness(x1,y1,x2,y2) + 0.5 * GIoU(x1,y1,x2,y2),
wherein (x1, y1, x2, y2) is the prediction value of the prediction frame boundary, p_bbox(x1, y1, x2, y2) is the quality score corresponding to that prediction value, centerness(x1, y1, x2, y2) is the first boundary regression quality value, and GIoU(x1, y1, x2, y2) is the second boundary regression quality value.
Further, the combining the classification score and the quality score to obtain a predicted quality score includes:
and carrying out weighted summation on the classification score and the quality score to obtain the predicted quality score.
Further, the decoupling processing is performed on the feature map through a decoupling module to obtain a classification feature and a regression feature, including:
carrying out average pooling layer processing on the feature map to obtain original weight features;
performing full-connection layer processing according to the original weight characteristics to obtain a weight coefficient matrix;
and carrying out weighting treatment on the original weight characteristics and the weight coefficient matrix to obtain the classification characteristics and the regression characteristics.
In a second aspect, the present invention provides an engine faceting defect detection system, the system comprising:
the data acquisition module is used for acquiring a data set for detecting facet defects of the engine;
the feature extraction module is used for extracting features of the images in the data set through a Faster-RCNN network to obtain a feature map;
the decoupling module is used for carrying out decoupling treatment on the feature map through the decoupling module to obtain classification features and regression features;
the classification feature processing module is used for carrying out convolution operation on the classification features to obtain classification scores;
the regression feature processing module is used for obtaining a quality score and a prediction value of a prediction frame boundary according to the regression feature;
a predicted quality score module, configured to combine the classification score and the quality score to obtain a predicted quality score;
and the prediction module is used for detecting the prediction value of the boundary of the prediction frame according to the prediction quality score to obtain a target prediction frame of the faceting defect of the engine.
In a third aspect, the present invention also provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the above method when executing the computer program.
In a fourth aspect, the present invention also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the above method.
The above-described present application provides methods, systems, devices and storage media for detecting engine faceting defects: collecting a data set for engine faceting defect detection; extracting features of the images in the data set through a Faster-RCNN network to obtain a feature map; decoupling the feature map through a decoupling module to obtain classification features and regression features; performing a convolution operation on the classification features to obtain classification scores; obtaining a quality score and a prediction value of the prediction frame boundary according to the regression features; combining the classification score and the quality score to obtain a predicted quality score; and detecting the prediction value of the prediction frame boundary according to the predicted quality score to obtain a target prediction frame of the engine faceting defect. Compared with the prior art, in which the traditional Faster-RCNN takes only the classification score as the final score of each pre-selected frame in the NMS stage, the present application adds a decoupling module and a branch for predicting the quality score of the pre-selected frame on the basis of the Faster-RCNN network, thereby fully considering the localization precision of the prediction frame itself. The decoupling degree between the classification features and the regression features is increased, and the quality score obtained by the new branch can be used as the final score of each pre-selected frame in the NMS stage, thereby improving the detection precision.
Drawings
FIG. 1 is a schematic diagram of an application scenario of an engine faceting defect detection method in an embodiment of the present invention;
FIG. 2 is a flow chart of a method for detecting faceting defects of an engine according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a decoupling module according to an embodiment of the present invention;
FIG. 4 is a flow chart of a decoupling process in an embodiment of the present invention;
fig. 5 is a schematic flow chart of step S15 in fig. 2;
FIG. 6 is a schematic diagram of a structure corresponding to an example of a boundary regression quality score according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of the self-attention mechanism in an embodiment of the present invention;
FIG. 8 is a flow chart of the self-attention mechanism shown in FIG. 7;
FIG. 9 is a system schematic of an engine faceting defect detection system in accordance with an embodiment of the present invention;
fig. 10 is an internal structural view of a computer device in the embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantageous effects of the present application more apparent, the present invention will be further described in detail with reference to the accompanying drawings and examples, and it should be understood that the examples described below are only illustrative of the present invention and are not intended to limit the scope of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The engine faceting defect detection method provided by the invention can be understood as an engine faceting defect detection method based on an improved Faster-RCNN, and can be applied to a terminal or a server as shown in fig. 1. The terminal may be, but is not limited to, various personal computers, notebook computers, smartphones, tablet computers and portable wearable devices, and the server may be implemented as a separate server or a server cluster formed by a plurality of servers. The server can perform target detection by adopting the engine faceting defect detection method according to actual application requirements; the obtained detection results are used for optimization research of subsequent algorithms on the server, or transmitted to a terminal for the end user to view and analyze. Furthermore, the method is specifically aimed at detecting defects on the faceted surface of the engine. The following examples explain the target detection method of the invention in detail.
In one embodiment, as shown in fig. 2 and 3, there is provided a target detection method including the steps of S11 to S17:
s11, collecting a data set for detecting facet defects of an engine;
When performing defect detection on the faceted surface of the engine, a data set for detection is first collected: a plurality of groups of images of the engine faceted surface are shot through a sensor, and these groups of images form the data set. The sensor used in the invention is preferably a Kinect sensor, which consists of two cameras: a 1080P color camera used for collecting image color information, with an infrared light emitting device arranged in its middle, and a TOF depth camera used for sensing infrared rays. The sensor acquires a depth map by adopting the TOF (Time of Flight) technology, which has high precision, sensitivity and resolution; the color image of the original map is 640×480, and the depth map is 320×240.
Of course, the above-described sensor merely explains the acquisition of the original images of the faceted engine; the sensor used for acquiring the data is not limited thereto, and other sensors may be used. Likewise, since the main purpose of acquiring the data set is the training and optimization of the subsequent neural network model, other previously known engine faceting data sets may also be employed.
After the data set is acquired, the data are further enhanced in order to facilitate subsequent feature extraction. Data enhancement methods are mainly supervised or unsupervised; this embodiment preferably employs supervised data enhancement. To cope with different sample environments, this embodiment preferably employs the following two supervised data enhancement methods: 1. single-sample data enhancement; 2. multi-sample data enhancement. For single-sample data enhancement, the embodiment of the invention mainly performs geometric transformations on an image, such as flipping, rotation, displacement, cropping, deformation and scaling, as well as some color transformations. For multi-sample data enhancement, methods that may be employed include SMOTE, SamplePairing, mixup, etc. As to which data enhancement method is adopted, this embodiment gives various choices, and the choice may be made according to the actual situation.
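As an illustration of the multi-sample route, the mixup operation mentioned above can be sketched in a few lines. This is a minimal sketch, not the patent's code; the blend ratio lam is drawn from a Beta(alpha, alpha) distribution in the original mixup method and is fixed here only for clarity, and the toy "images" are flattened pixel lists.

```python
import random

def mixup(img_a, img_b, label_a, label_b, lam=None):
    """Blend two flattened images and their one-hot labels with ratio lam."""
    if lam is None:
        lam = random.betavariate(0.2, 0.2)  # Beta(alpha, alpha), as in mixup
    mixed_img = [lam * a + (1 - lam) * b for a, b in zip(img_a, img_b)]
    mixed_label = [lam * a + (1 - lam) * b for a, b in zip(label_a, label_b)]
    return mixed_img, mixed_label

# two toy 4-pixel "images" with one-hot labels for two defect classes
img, lbl = mixup([1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0],
                 [1.0, 0.0], [0.0, 1.0], lam=0.7)
```

With lam = 0.7, each output pixel and label is 70% of the first sample and 30% of the second, which gives the network soft, interpolated training targets.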
S12, extracting features of the images in the data set through a fast-Rcnn network to obtain a feature map;
The Faster-RCNN network in this embodiment is not the conventional Faster-RCNN network, but is modified on the basis of it. The backbone network comprises 13 conv layers, 13 relu layers and 4 pooling layers; original feature extraction is performed on the images in the data set through the backbone network to obtain 4 original feature maps, and the feature map used for subsequent decoupling processing is obtained by splicing these four original feature maps.
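The splicing step above can be pictured as stacking the backbone outputs along the channel dimension. A minimal sketch with toy nested-list tensors (shapes and the helper name are illustrative, not the patent's code):

```python
def concat_channels(maps):
    """Concatenate feature maps along the channel axis: each map is C_i x H x W."""
    h, w = len(maps[0][0]), len(maps[0][0][0])
    out = []
    for m in maps:
        assert len(m[0]) == h and len(m[0][0]) == w, "spatial sizes must match"
        out.extend(m)  # append this map's channels after the previous ones
    return out

# four toy 2-channel, 2x2 original feature maps -> one spliced 8-channel map
four = [[[[float(i)] * 2] * 2 for _ in range(2)] for i in range(4)]
fused = concat_channels(four)
```

Channel concatenation preserves the spatial resolution while letting the later decoupling module see all four original feature maps at once.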
In order to facilitate subsequent feature processing, the embodiment of the invention adds a decoupling module on the basis of the original Faster-RCNN network; through this feature decoupling module, classification features for predicting classification scores and regression features for predicting coordinate values of the prediction frame can be obtained.
The decoupling module of this embodiment comprises an average pooling layer, a full connection layer and a convolution layer, as shown in fig. 3. In this embodiment, the spliced feature map is passed through the two decoupling modules shown in fig. 3 respectively to obtain a classification feature and a regression feature, where the classification feature is used for subsequent classification prediction and the regression feature for subsequent target frame boundary prediction. As shown in fig. 4, this specifically comprises the following steps:
s121, carrying out average pooling layer processing on the feature map to obtain original weight features;
s122, performing full-connection layer processing according to the original weight characteristics to obtain a weight coefficient matrix;
and S123, carrying out weighting treatment on the original weight characteristics and the weight coefficient matrix to obtain the classification characteristics and the regression characteristics.
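Steps S121 to S123 resemble a squeeze-and-excitation style channel reweighting; the following is a minimal plain-Python sketch under that reading, not the patent's code. The fully connected weights would be learned in the real network; the identity-like matrix used here is purely illustrative, and the sigmoid squashing is an assumption.

```python
import math

def avg_pool(feat):
    """Global average pool a C x H x W map to a length-C vector (step S121)."""
    return [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feat]

def fc_sigmoid(vec, weights):
    """Fully connected layer plus sigmoid giving per-channel weights (step S122)."""
    return [1.0 / (1.0 + math.exp(-sum(w * v for w, v in zip(row, vec))))
            for row in weights]

def reweight(feat, coeffs):
    """Scale each channel by its weight coefficient (step S123)."""
    return [[[c * x for x in row] for row in ch] for ch, c in zip(feat, coeffs)]

# toy 2-channel, 2x2 feature map; identity-like FC weights (illustrative)
feat = [[[1.0, 1.0], [1.0, 1.0]], [[2.0, 2.0], [2.0, 2.0]]]
pooled = avg_pool(feat)                                  # [1.0, 2.0]
w_cls = fc_sigmoid(pooled, [[1.0, 0.0], [0.0, 1.0]])     # per-channel weights
cls_feat = reweight(feat, w_cls)                         # classification branch
```

Running the same spliced feature map through a second module with its own weights would produce the regression branch, which is how the two branches become decoupled.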
When performing the original weight feature extraction and convolution processing, the following formula is adopted:
When performing the weighting and decoupling processing on the original weight features, the following formula is adopted:
wherein relu refers to the ReLU layer, conv refers to the convolution layer, and decoupling refers to the decoupling module; the two outputs represent the classification feature and the regression feature, respectively.
S14, carrying out convolution operation on the classification features to obtain classification scores.
And S15, obtaining a quality score and a prediction value of a prediction frame boundary according to the regression characteristics.
In order to increase the decoupling degree of the classification features and the regression features, the embodiment of the invention further increases a branch for predicting the quality score of the prediction frame. Specifically, as shown in fig. 5, the method includes the steps of:
S151, obtaining a prediction value of the prediction frame boundary according to the regression features;
S152, calculating a first boundary regression quality value through a centerness function and a second boundary regression quality value through a GIoU function, respectively, according to the prediction value of the prediction frame boundary;
S153, obtaining the quality score according to the first boundary regression quality value and the second boundary regression quality value.
Specifically, the quality score is calculated according to the following formula:
p_bbox(x1,y1,x2,y2) = 0.5 * centerness(x1,y1,x2,y2) + 0.5 * GIoU(x1,y1,x2,y2),
wherein (x1, y1, x2, y2) is the prediction value of the prediction frame boundary, p_bbox(x1, y1, x2, y2) is the quality score corresponding to that prediction value, centerness(x1, y1, x2, y2) is the first boundary regression quality value, and GIoU(x1, y1, x2, y2) is the second boundary regression quality value.
The centerness is expressed as follows:
centerness = sqrt( (min(l, r) / max(l, r)) × (min(t, b) / max(t, b)) ),
wherein l, r, t and b refer to the distances from the current pixel point (i.e. the point on the feature map) to the left, right, top and bottom boundaries of the prediction frame, respectively;
GIoU is expressed as follows:
GIoU(A, B) = IoU(A, B) − area(C \ (A ∪ B)) / area(C),
wherein A is the real frame (ground truth), B is the predicted frame, C (Ac in the formula) is the smallest rectangular frame that can enclose both A and B, and U refers to A ∪ B. Specifically, as shown in fig. 6, A is the black rectangular frame representing the real frame, B is the gray rectangular frame representing the predicted frame, and C is the smallest rectangular frame enclosing A and B. For any two frames A and B, the minimum frame C that can enclose them is first found; then the ratio of the area of C \ (A ∪ B) (that is, the area of C minus the area of A ∪ B) to the area of C is calculated, and subtracting this ratio from the IoU value of A and B gives the GIoU.
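A worked numeric sketch of the quality score in plain Python (toy boxes, not the patent's code): centerness here follows the standard FCOS-style definition from the distances l, r, t, b, GIoU follows the standard definition described above, and the 0.5/0.5 weights come from the p_bbox formula.

```python
import math

def centerness(l, r, t, b):
    """Centerness from distances of a point to the four sides of its box."""
    return math.sqrt((min(l, r) / max(l, r)) * (min(t, b) / max(t, b)))

def giou(box_a, box_b):
    """Generalized IoU of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # smallest enclosing box C
    c_area = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return iou - (c_area - union) / c_area

# real frame A and predicted frame B; point at the exact center of A
A, B = (0, 0, 4, 4), (1, 1, 5, 5)
quality = 0.5 * centerness(2, 2, 2, 2) + 0.5 * giou(A, B)
```

For these boxes IoU = 9/23, the enclosing box has area 25, so GIoU = 9/23 − 2/25, and the centered point has centerness 1; the quality score rewards both good overlap and good centering.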
When obtaining the prediction value of the prediction frame boundary from the regression feature, the embodiment of the invention further decouples the regression feature into an x regression feature for the x direction and a y regression feature for the y direction. To this end, the invention introduces a self-attention module, as shown in fig. 7, which performs this decoupling. Through the self-attention mechanism, different areas of the image can be weighted to improve detection performance. As shown in fig. 8, the procedure specifically comprises the following steps:
S181, performing convolution processing on the regression feature to obtain a first feature map with a resolution of H×W×64, wherein H is the height of the feature map and W is its width;
S182, performing downsampling and convolution processing on the first feature map to obtain a second feature map with a resolution of H/2×W/2×128;
S183, performing downsampling and convolution processing on the second feature map to obtain a third feature map with a resolution of H/4×W/4×128;
S184, upsampling the third feature map, then sequentially performing splicing and convolution operations with the second feature map to obtain a fourth feature map with a resolution of H/2×W/2×64;
S185, upsampling the fourth feature map, then sequentially performing splicing and convolution operations with the first feature map to obtain a fifth feature map with a resolution of H×W×128;
S186, performing a convolution operation on the fifth feature map to obtain an initial attention matrix, and performing a dot multiplication operation between the initial attention matrix and the regression feature to obtain the x regression feature and the y regression feature.
In step S186, a convolution operation is performed on the fifth feature map to obtain an initial attention matrix with a resolution of H×W×1; the initial attention matrix is then scaled to the range 0–1 by a sigmoid to obtain the final attention matrices att_x/y (H×W×1), that is, an x attention matrix and a y attention matrix. The x attention matrix and the y attention matrix are then each dot-multiplied with the initial regression feature (that is, the elements at corresponding positions in each dimension are multiplied) to obtain the final x regression feature and y regression feature. The specific formula is as follows:
wherein CD refers to splicing and downsampling, CAC refers to splicing and convolution, UP_sample refers to the upsampling function, and Sig refers to the sigmoid function.
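The final masking step of S186, a sigmoid on the raw attention map followed by element-wise multiplication with the regression feature, can be sketched as follows. This is a plain-Python illustration with toy 2×2 maps and a hypothetical helper name, not the patent's code.

```python
import math

def apply_attention(reg_feat, raw_att):
    """Squash a raw H x W attention map to (0, 1) with a sigmoid, then gate each
    channel of the C x H x W regression feature with it (element-wise product)."""
    att = [[1.0 / (1.0 + math.exp(-v)) for v in row] for row in raw_att]
    gated = [[[att[i][j] * ch[i][j] for j in range(len(ch[0]))]
              for i in range(len(ch))] for ch in reg_feat]
    return att, gated

reg = [[[2.0, 2.0], [2.0, 2.0]]]          # 1-channel toy regression feature
att, x_reg = apply_attention(reg, [[0.0, 100.0], [-100.0, 0.0]])
```

Positions with large positive attention logits pass the feature through almost unchanged, while large negative logits suppress it toward zero, which is how the module emphasizes the regions relevant to each regression direction.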
S16, combining the classification score and the quality score to obtain a predicted quality score.
To improve detection precision, in the NMS stage the embodiment of the invention does not rank the prediction values of the prediction frame boundaries directly by the classification score. Instead, the localization precision of each prediction frame is fully considered through the quality score obtained by the newly introduced branch: the classification score is further processed together with it to obtain a new predicted quality score, and detection is performed on that score. Accordingly, the embodiment of the invention combines the classification score and the quality score to obtain the predicted quality score; specifically, the predicted quality score may be obtained by weighted summation of the classification score and the quality score.
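The NMS stage with the combined score can be sketched as follows. This is a minimal plain-Python sketch: the 0.5/0.5 weighting and the 0.5 IoU threshold are illustrative values, and the greedy-suppression loop is the standard NMS procedure, not code from the patent.

```python
def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def nms_with_quality(boxes, cls_scores, quality_scores, w=0.5, iou_thr=0.5):
    """Greedy NMS ranked by the weighted sum of classification and quality scores."""
    final = [w * c + (1 - w) * q for c, q in zip(cls_scores, quality_scores)]
    order = sorted(range(len(boxes)), key=lambda i: final[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thr]
    return keep

boxes = [(0, 0, 4, 4), (0.5, 0.5, 4.5, 4.5), (10, 10, 14, 14)]
# a well-localized frame can win even with a slightly lower classification score
keep = nms_with_quality(boxes, [0.85, 0.9, 0.8], [0.95, 0.6, 0.9])
```

Here the first box outranks its overlapping neighbor because its higher quality score lifts its combined score, so it survives suppression instead of the box with the higher classification score alone.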
And S17, detecting the predicted value of the boundary of the predicted frame according to the predicted quality score to obtain a target predicted frame of the faceting defect of the engine.
The prediction quality score of the embodiment of the invention considers the boundary regression quality of the prediction frame, so that the obtained target prediction frame is more accurate.
After being fully trained and optimized through the above target detection procedure, the target detection model of the embodiment of the invention can be used to detect faceting defects of actual engines.
According to the engine faceting defect detection method provided by the invention, the traditional Faster R-CNN network structure is improved: a decoupling module is introduced and a branch for predicting the quality score of each prediction frame is added, which increases the decoupling between classification features and regression features and greatly improves detection precision.
Although the steps in the flowcharts described above are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders.
Based on the above engine faceting defect detection method, an embodiment of the invention further provides an engine faceting defect detection system. As shown in fig. 9, the system comprises:
the data acquisition module 1 is used for acquiring a data set for detecting facet defects of the engine;
the feature extraction module 2 is used for extracting features of the images in the data set through a Faster-Rcnn network to obtain a feature map;
the decoupling module 3 is used for carrying out decoupling treatment on the feature map through the decoupling module to obtain classification features and regression features;
the classification feature processing module 4 is used for carrying out convolution operation on the classification features to obtain classification scores;
the regression feature processing module 5 is used for obtaining a quality score and a prediction value of a prediction frame boundary according to the regression feature;
a predicted quality score module 6, configured to combine the classification score and the quality score to obtain a predicted quality score;
and the prediction module 7 is used for detecting the predicted value of the boundary of the prediction frame according to the predicted quality score to obtain a target prediction frame of the faceted defect of the engine.
For specific limitations of the target detection system, reference may be made to the limitations of the engine faceting defect detection method above, and the corresponding technical effects can be equally obtained; details are not repeated here. The various modules in the engine faceted defect detection system described above may be implemented in whole or in part by software, hardware, or combinations thereof. The above modules may be embedded in hardware in, or independent of, the processor of the computer device, or stored in software in the memory of the computer device, so that the processor can call and execute the operations corresponding to each module.
Fig. 10 shows an internal structural diagram of a computer device, which may specifically be a terminal or a server, in one embodiment. As shown in fig. 10, the computer device includes a processor, a memory, a network interface, a display, a camera, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements the engine faceting defect detection method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, keys, a track ball or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad or mouse.
It will be appreciated by those of ordinary skill in the art that the architecture shown in fig. 10 is merely a block diagram of part of the architecture relevant to the present application and is not intended to limit the computer device on which the present application may be implemented; a particular computing device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the above method when the computer program is executed.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, implements the steps of the above method.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may refer to each other, and each embodiment mainly describes its differences from the other embodiments. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively brief, and reference may be made to the relevant parts of the description of the method embodiments. It should be noted that the technical features of the foregoing embodiments may be combined arbitrarily; for brevity, not all possible combinations are described, but as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description.
The foregoing examples represent only a few preferred embodiments of the present application, which are described in comparative detail, but they are not thereby to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make modifications and substitutions without departing from the technical principles of the present invention, and such modifications and substitutions should also be considered within the scope of the present application. Therefore, the protection scope of this patent application is subject to the appended claims.

Claims (10)

1. A method for detecting faceting defects of an engine, the method comprising:
collecting a data set for detecting facet defects of an engine;
extracting features of the images in the data set through a Faster-Rcnn network to obtain a feature map;
decoupling the feature map through a decoupling module to obtain classification features and regression features;
performing convolution operation on the classification features to obtain classification scores;
obtaining a quality score and a prediction value of a prediction frame boundary according to the regression characteristics;
combining the classification score and the quality score to obtain a predicted quality score;
and detecting the predicted value of the boundary of the predicted frame according to the predicted quality score to obtain a target predicted frame of the faceted defect of the engine.
2. The method of claim 1, wherein the collecting a data set for engine faceting defect detection comprises:
shooting a plurality of groups of images of the engine facet through a sensor, and forming a data set from the plurality of groups of images, wherein the sensor comprises a color camera and a TOF depth camera;
and performing supervised data enhancement on the images in the dataset.
3. The method for detecting faceting defects of an engine according to claim 1, wherein the extracting features of the images in the dataset through a Faster-Rcnn network to obtain a feature map includes:
extracting original features of the images in the data set through the Faster-Rcnn network to obtain 4 original feature maps;
and splicing the 4 original feature maps to obtain the feature map.
4. The method of claim 1, wherein the obtaining a quality score and a prediction value of a prediction frame boundary according to the regression feature comprises:
obtaining a prediction value of a prediction frame boundary according to the regression characteristics;
according to the prediction value of the prediction frame boundary, calculating a first boundary regression quality value through a centerness loss function and a second boundary regression quality value through a GIoU loss function respectively;
and obtaining the quality score according to the first boundary regression quality value and the second boundary regression quality value.
5. The engine faceting defect detection method of claim 4, wherein the quality score is calculated according to the formula:
p_bbox(x1,y1,x2,y2) = 0.5 * centerness(x1,y1,x2,y2) + 0.5 * giou(x1,y1,x2,y2),
wherein (x1, y1, x2, y2) is the prediction value of the prediction frame boundary, p_bbox(x1, y1, x2, y2) is the quality score corresponding to the prediction value, centerness(x1, y1, x2, y2) is the first boundary regression quality value, and giou(x1, y1, x2, y2) is the second boundary regression quality value.
6. The engine faceting defect detection method of claim 1, wherein the combining the classification score and the quality score to obtain a predicted quality score comprises:
and carrying out weighted summation on the classification score and the quality score to obtain the predicted quality score.
7. The method for detecting faceted defects of an engine according to claim 5, wherein the step of performing decoupling processing on the feature map by a decoupling module to obtain classification features and regression features includes:
carrying out average pooling layer processing on the feature map to obtain original weight features;
performing full-connection layer processing according to the original weight characteristics to obtain a weight coefficient matrix;
and carrying out weighting treatment on the original weight characteristics and the weight coefficient matrix to obtain the classification characteristics and the regression characteristics.
8. An engine faceting defect detection system, the system comprising:
the data acquisition module is used for acquiring a data set for detecting facet defects of the engine;
the feature extraction module is used for extracting features of the images in the data set through a Faster-Rcnn network to obtain a feature map;
the decoupling module is used for carrying out decoupling treatment on the feature map through the decoupling module to obtain classification features and regression features;
the classification feature processing module is used for carrying out convolution operation on the classification features to obtain classification scores;
the regression feature processing module is used for obtaining a quality score and a prediction value of a prediction frame boundary according to the regression feature;
a predicted quality score module, configured to combine the classification score and the quality score to obtain a predicted quality score;
and the prediction module is used for detecting the prediction value of the boundary of the prediction frame according to the prediction quality score to obtain a target prediction frame of the faceting defect of the engine.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202410268717.8A 2024-03-11 2024-03-11 Method, system, equipment and storage medium for detecting faceting defect of engine Pending CN117876798A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410268717.8A CN117876798A (en) 2024-03-11 2024-03-11 Method, system, equipment and storage medium for detecting faceting defect of engine


Publications (1)

Publication Number Publication Date
CN117876798A true CN117876798A (en) 2024-04-12

Family

ID=90588675


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113962934A (en) * 2021-09-17 2022-01-21 北京机械工业自动化研究所有限公司 Defect detection method and system based on fast RCNN (radar cross-section network)
CN114332473A (en) * 2021-09-29 2022-04-12 腾讯科技(深圳)有限公司 Object detection method, object detection device, computer equipment, storage medium and program product
CN114693657A (en) * 2022-04-06 2022-07-01 重庆大学 Intelligent detection method and system for multi-size and multi-category defects on surface of large complex structural member based on Faster R-CNN
WO2022160170A1 (en) * 2021-01-28 2022-08-04 东莞职业技术学院 Method and apparatus for detecting metal surface defects
CN115587969A (en) * 2022-09-07 2023-01-10 北京工商大学 Cross-domain small sample defect target detection method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
沈浩玉: "Application of machine vision *** in the automobile manufacturing industry" (机器视觉***在汽车制造行业的应用), 设备管理与维修, 31 December 2016 (2016-12-31), pages 22-24 *
罗炜儒: "Research on an improved Faster R-CNN surface defect detection method for flexible circuit boards based on the decoupling idea" (基于解耦思想改进的Faster R-CNN柔性电路板表面缺陷检测方法研究), 信息科技, 15 January 2023 (2023-01-15), pages 1-70 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination