CN116433747A - Construction method and detection device for detection model of wall thickness of bamboo tube - Google Patents

Construction method and detection device for detection model of wall thickness of bamboo tube

Info

Publication number
CN116433747A
Authority
CN
China
Prior art keywords
bamboo tube
wall thickness
model
segmentation
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310691923.5A
Other languages
Chinese (zh)
Other versions
CN116433747B (en)
Inventor
许鑫达
杨和
刘文哲
童同
高钦泉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Deshi Technology Group Co ltd
Original Assignee
Fujian Deshi Technology Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Deshi Technology Group Co ltd filed Critical Fujian Deshi Technology Group Co ltd
Priority to CN202310691923.5A priority Critical patent/CN116433747B/en
Publication of CN116433747A publication Critical patent/CN116433747A/en
Application granted granted Critical
Publication of CN116433747B publication Critical patent/CN116433747B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/60: Analysis of geometric attributes
    • G06T7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06N3/0455: Auto-encoder networks; Encoder-decoder networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/0464: Convolutional networks [CNN, ConvNet]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/60: Analysis of geometric attributes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20084: Artificial neural networks [ANN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30108: Industrial image inspection
    • G06T2207/30161: Wood; Lumber
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Geometry (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a device for constructing a detection model of the wall thickness of a bamboo tube. The end face image of a bamboo tube to be identified is segmented and classified by a constructed segmentation-classification model, so that the end face wall thickness information of the current bamboo tube can be rapidly extracted from the segmentation image; the wall thickness of the bamboo tube is therefore detected rapidly and accurately, and classification and prediction can be carried out according to the imaging of the bamboo tube wall. Meanwhile, a regression algorithm predicts the wall thickness and the outer diameter of the bamboo tube end face at a target length: the constructed prediction model takes the segmentation result diagram and the end face image of the bamboo tube to be identified and predicts the wall thickness information of the other end faces of the bamboo tube along the length direction, so that the overall wall thickness information of the bamboo tube is obtained by integrating the wall thickness information of the imaged end face with the wall thickness information of the other end faces along the length direction.

Description

Construction method and detection device for detection model of wall thickness of bamboo tube
Technical Field
The invention relates to the technical field of material detection, in particular to a method and a device for constructing a detection model of the wall thickness of a bamboo tube.
Background
Bamboo is widely used in industry as a natural, environmentally friendly material, and cutting the bamboo tube is a key step in bamboo tube processing. Currently, bamboo tube processing typically relies on manual cutting or on mechanical structures that measure the outer diameter before cutting. Because of the natural characteristics of bamboo, the wall thickness of a bamboo tube is not uniform. Manual measurement is subjective, and mechanical measurement is easily disturbed by impurities, defects and other factors, which affects the accuracy and precision of the measurement result. At the same time, the labor and time costs of these methods are relatively high, placing a certain economic burden on production.
Corresponding solutions have been proposed in the prior art. For example, patent publication CN102269571A discloses a scheme for measuring the bamboo tube diameter with conventional digital image processing technology. However, the scheme is only suitable for stable imaging conditions and, limited by the traditional digital image algorithm, its precision is low. As another example, patent document CN214470652U discloses a two-stage photographing method for measuring the diameter of the bamboo tube, but the camera must move, which limits efficiency.
Meanwhile, in conventional bamboo tube processing, the cutting length is generally selected according to the wall thickness of the bamboo tube. Because the bamboo tube is a long tube, the thickness information of both ends cannot be obtained at the same time during detection, which causes length errors during cutting and wastes material. In addition, the image processing techniques adopted in the prior art cannot distinguish a normal bamboo tube from a rotten one, which affects product quality.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a method and a device for constructing a detection model of the wall thickness of a bamboo tube, so as to realize rapid and accurate detection of the bamboo tube wall thickness.
In order to solve the technical problems, the invention adopts the following technical scheme:
the method for constructing the detection model of the wall thickness of the bamboo tube is characterized by comprising the following steps:
acquiring end face images of a preset number of bamboo tubes under different lengths;
obtaining wall thickness values and outer diameter values of each bamboo tube in the end face image under different lengths according to the end face image;
obtaining training sets corresponding to different bamboo tubes according to the corresponding length intervals between different end face images of the same bamboo tube and the corresponding wall thickness values and outer diameter values of different end face images;
obtaining a prediction target length, and obtaining a prediction model according to the prediction target length and a training set; and outputting a wall thickness value and an outer diameter value corresponding to the end face of the bamboo tube under the predicted target length by the prediction model.
In order to solve the technical problems, the invention adopts another technical scheme that:
the device for detecting the wall thickness of the bamboo tube comprises a memory, a processor and a computer program which is stored in the memory and can run on the processor, wherein the processor realizes the steps in the method for constructing the detection model of the wall thickness of the bamboo tube when executing the computer program.
The invention has the beneficial effects that: end face images of a plurality of bamboo tubes at different lengths are obtained, and the wall thickness value and outer diameter value corresponding to each end face and the length intervals between end faces are calculated, so that the prediction model is trained on these wall thickness values, outer diameter values and length intervals. From the end face image of a single bamboo tube, the wall thickness value and the outer diameter value of that tube at a target-length end face can then be predicted; in other words, the wall thickness and outer diameter of different end faces of the same bamboo tube can be acquired. Compared with prior-art techniques that can only acquire the wall thickness of a single end face, the wall thickness information of the bamboo tube is identified more accurately, thereby improving the cutting effect of the bamboo tube.
Drawings
FIG. 1 is a flow chart of steps of a method for constructing a model for detecting wall thickness of a bamboo tube according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a segmentation classification model in a method for constructing a bamboo tube wall thickness detection model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a prediction model in a method for constructing a detection model of a wall thickness of a bamboo tube according to an embodiment of the present invention;
FIG. 4 is a flowchart of a detection step of a bamboo tube wall thickness-based detection model according to an embodiment of the present invention;
FIG. 5 is a flowchart of the correction steps of a bamboo tube wall thickness-based detection model according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a device for detecting wall thickness of a bamboo tube according to an embodiment of the present invention.
Detailed Description
In order to describe the technical contents, the achieved objects and effects of the present invention in detail, the following description will be made with reference to the embodiments in conjunction with the accompanying drawings.
Referring to fig. 1, a method for constructing a detection model of a wall thickness of a bamboo tube includes the steps of:
acquiring end face images of a preset number of bamboo tubes under different lengths;
obtaining wall thickness values and outer diameter values of each bamboo tube in the end face image under different lengths according to the end face image;
obtaining training sets corresponding to different bamboo tubes according to the corresponding length intervals between different end face images of the same bamboo tube and the corresponding wall thickness values and outer diameter values of different end face images;
obtaining a prediction target length, and obtaining a prediction model according to the prediction target length and a training set; and outputting a wall thickness value and an outer diameter value corresponding to the end face of the bamboo tube under the predicted target length by the prediction model.
From the above description, the beneficial effects of the invention are as follows: end face images of a plurality of bamboo tubes at different lengths are obtained, and the wall thickness value and outer diameter value corresponding to each end face and the length intervals between end faces are calculated, so that the prediction model is trained on these wall thickness values, outer diameter values and length intervals. From the end face image of a single bamboo tube, the wall thickness value and the outer diameter value of that tube at a target-length end face can then be predicted; in other words, the wall thickness and outer diameter of different end faces of the same bamboo tube can be acquired. Compared with prior-art techniques that can only acquire the wall thickness of a single end face, the wall thickness information of the bamboo tube is identified more accurately, thereby improving the cutting effect of the bamboo tube.
Further, the obtaining the end face images of the preset number of bamboo tubes under different lengths includes:
acquiring end face pictures of a preset number of bamboo tubes under different lengths;
performing perspective transformation on the end face picture to obtain a corrected image;
identifying and extracting a bamboo tube region in the correction image to obtain a target bamboo tube region image;
and carrying out data enhancement on the target bamboo tube region image to obtain the end face image.
From the above description, it can be seen that by obtaining end face pictures of the bamboo tube at different lengths and performing perspective transformation on them, the imaging effect of the pictures is converted into a front view of the bamboo tube end face, which facilitates calculating the dimensions of the end face; cropping the bamboo tube region from the corrected image to obtain the target bamboo tube region image reduces the amount of image data to be processed and improves the recognition accuracy of the subsequent model.
Further, obtaining the wall thickness value and the outer diameter value of each bamboo tube corresponding to different lengths in the end face image according to the end face image includes:
constructing a segmentation model, and segmenting the end face image according to the segmentation model to obtain a segmentation result graph;
and obtaining a wall thickness value and an outer diameter value corresponding to the bamboo tube in the end face image according to the segmentation result diagram.
From the above description, it can be seen that the end face image of the bamboo tube is segmented by constructing the segmentation model to obtain the segmented image, so that the end face wall thickness information of the bamboo tube can be rapidly extracted based on the segmented image, and the calculation accuracy of the wall thickness value and the outer diameter value can be improved.
Further, the obtaining the wall thickness value and the outer diameter value corresponding to the bamboo tube in the end face image according to the segmentation result graph includes:
calculating a maximum circumscribed rectangular frame corresponding to the segmentation result diagram, and acquiring a center point and the length of the maximum circumscribed rectangular frame;
taking the center point of the maximum circumscribed rectangular frame as the circle center and the length of the maximum circumscribed rectangular frame as the radius, and performing length detection on the segmentation result graph to obtain pixel length information;
and obtaining pixel precision corresponding to the segmentation result diagram, and obtaining a wall thickness value and an outer diameter value corresponding to the bamboo tube in the end face image according to the pixel length information and the pixel precision.
From the above description, it can be seen that by obtaining the maximum circumscribed rectangular frame of the bamboo tube together with its center point and length, the segmented bamboo tube wall region can be measured outward from the center point over that length, and once the exact pixel information is obtained, the final end face wall thickness information of the bamboo tube is calculated from the pixel precision.
Further, the performing length detection on the segmentation result graph to obtain pixel length information includes:
acquiring a preset detected interval angle;
sequentially extending outwards at the interval angle to calculate the pixel length corresponding to the current interval angle until the detection angle is equal to 360 degrees;
and obtaining the pixel length information according to the pixel lengths corresponding to all the interval angles.
From the above description, it can be seen that by sequentially detecting the pixel lengths at the preset interval angle, the wall thickness information such as the average wall thickness, the maximum wall thickness, the minimum wall thickness and the like can be obtained based on the pixel lengths corresponding to all the angles, and the wall thickness of the bamboo tube can be accurately identified.
Further, after the prediction model outputs the wall thickness value and the outer diameter value corresponding to the end face of the bamboo tube under the predicted target length, the method further comprises the following steps:
acquiring an end face picture of the bamboo tube, and acquiring an internal image of the bamboo tube according to the segmentation result picture and the end face picture of the bamboo tube;
acquiring brightness information corresponding to the internal image of the bamboo tube;
and judging whether the bamboo tube in the end face image is a bamboo joint area or not according to the brightness information, and if so, correcting the wall thickness value.
From the above description, it can be seen that whether the current end face is a bamboo joint region is determined by identifying the internal image of the bamboo tube, and when the current end face is identified as the bamboo joint region, the wall thickness value corresponding to the bamboo tube is corrected, so that more accurate wall thickness information is provided.
Further, the constructing the segmentation model includes:
acquiring a bamboo tube image dataset;
a segmentation network is built by adopting PP-LiteSeg, and a cross entropy loss function is set at an output layer of the segmentation network to obtain a segmentation learning model;
and training the segmentation learning model according to the bamboo tube image dataset to obtain the segmentation model.
From the above description, it can be seen that constructing the segmentation model with a lightweight segmentation network and extracting the wall thickness information of the current bamboo tube from the imaging of the bamboo tube end face allows the wall thickness to be measured effectively and rapidly, improving both detection precision and speed.
Further, the step of segmenting the end face image according to the segmentation model, after obtaining a segmentation result graph, includes:
constructing a segmentation classification model, and inputting the segmentation result graph and the end face image into the segmentation classification model to obtain a classification result;
and judging whether the bamboo tube in the end face image has defects according to the classification result.
From the above description, it can be seen that a segmentation-classification model is further constructed on the basis of the segmentation model, so that the classification branch of the segmentation-classification model can distinguish normal bamboo tubes from rotten bamboo tubes and thus meet the quality requirements of the bamboo tube processing process.
Further, the constructing the segmentation classification model includes:
adopting RepVGG to construct a classification network, and setting a label smoothing loss function at an output layer of the classification network to obtain a classification learning model;
acquiring a training result set corresponding to the bamboo tube image data set generated in the segmentation model training process;
correspondingly multiplying the training result set with the bamboo tube image data set to obtain a classified training set;
and training the classification learning model according to the classification training set to obtain the segmentation classification model.
From the above description, it can be seen that constructing the classification network with RepVGG keeps the model highly accurate while improving its speed and efficiency.
The embodiment provides a device for detecting the wall thickness of a bamboo tube, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the steps in the method for constructing the detection model of the wall thickness of the bamboo tube when executing the computer program.
The method and the device for constructing a detection model of the wall thickness of a bamboo tube of the invention are applicable to identifying the wall thickness of various tubular structures, in particular bamboo tubes, as described in the following specific embodiments:
example 1
Referring to fig. 1, a method for constructing a detection model of a wall thickness of a bamboo tube includes the steps of:
s1, acquiring end face images of a preset number of bamboo tubes under different lengths, and specifically:
S11, obtaining end face pictures of a preset number of bamboo tubes at different lengths; original image data of the bamboo tubes are acquired with a high-resolution camera, and end face pictures at different lengths are obtained by cutting the tubes;
S12, performing perspective transformation on the end face picture to obtain a corrected image; during acquisition, limitations of the mechanical structure and other factors usually prevent a front view of the bamboo tube end face from being captured directly, so perspective transformation correction converts the imaging effect of the end face picture into a front view of the bamboo tube end face;
S13, identifying and extracting the bamboo tube region in the corrected image to obtain a target bamboo tube region image; the identification and extraction method is not limited to traditional image processing or target detection; once the target bamboo tube region has been located, it is cropped to obtain the target bamboo tube region image;
S14, performing data enhancement on the target bamboo tube region image to obtain the end face image; data enhancement operations include, but are not limited to, rotation, flipping, affine transformation, and altering the brightness, contrast, etc. of the image.
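For illustration only, a minimal sketch of this preprocessing pipeline is given below, assuming OpenCV and NumPy are used; the four corner points src_pts of the end face are assumed to have already been located by whatever detection method is employed, and the output size and augmentation parameters are illustrative placeholders rather than values from this disclosure.

```python
# Hedged sketch of S12-S14: perspective correction of the end face picture
# followed by simple data enhancement.  All parameter values are assumptions.
import cv2
import numpy as np

def correct_perspective(image, src_pts, dst_size=(512, 512)):
    """S12: warp the four detected corner points to a front view of the end face."""
    w, h = dst_size
    dst_pts = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(image, M, dst_size)

def enhance(image):
    """S14: rotation, flipping and brightness/contrast changes for data enhancement."""
    samples = [image,
               cv2.flip(image, 1),                              # horizontal flip
               cv2.rotate(image, cv2.ROTATE_90_CLOCKWISE),      # 90-degree rotation
               cv2.convertScaleAbs(image, alpha=1.2, beta=10)]  # contrast/brightness jitter
    return samples
```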
S2, obtaining the wall thickness value and the outer diameter value of each bamboo tube in the end face images at different lengths according to the end face images. Specifically, the closed area in the end face image, namely the bamboo tube wall, can be identified by an image recognition method, and the distances from the center point of the end face image to the tube wall are then calculated to obtain the wall thickness value and the outer diameter value.
S3, obtaining training sets corresponding to different bamboo tubes according to the length intervals corresponding to the end face images of the same bamboo tube and the wall thickness values and the outer diameter values corresponding to the end face images.
S4, obtaining a predicted target length, and obtaining a prediction model according to the predicted target length and the training set; the input data are a wall thickness value, an outer diameter value and a predicted target length, and the output data are the wall thickness value and the outer diameter value corresponding to the position at the predicted target length; an MLP (Multi-Layer Perceptron) regression prediction model is constructed, and the model is trained after the training set data are uniformly standardized; the prediction model then outputs the wall thickness value and the outer diameter value corresponding to the end face of the bamboo tube at the predicted target length.
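As a hedged sketch of this regression step (not the exact implementation of the disclosure), the following scikit-learn example standardizes the training data and fits a small MLP regressor; each input row holds a measured wall thickness, outer diameter and target length, and each label row holds the wall thickness and outer diameter at that length. The sample values, hidden-layer sizes and iteration count are illustrative assumptions.

```python
# Minimal MLP regression sketch for S4, assuming scikit-learn is available.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# [wall_thickness_mm, outer_diameter_mm, target_length_mm] -> values at that length
X = np.array([[6.2, 78.0, 500.0],
              [5.8, 75.5, 800.0],
              [7.1, 82.3, 300.0]])
y = np.array([[5.9, 76.3],
              [5.4, 73.8],
              [6.8, 80.9]])

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0))
model.fit(X, y)                               # unified standardization + training
print(model.predict([[6.0, 77.0, 650.0]]))    # predicted [wall thickness, outer diameter]
```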
Embodiment 2
This embodiment specifically defines the method for obtaining the wall thickness value and the outer diameter value.
the step S2 specifically comprises the following steps:
S21, constructing a segmentation model, and segmenting the end face image according to the segmentation model to obtain a segmentation result diagram. Specifically:
referring to fig. 2, in an alternative embodiment, the split model is constructed by adopting a PP-LiteSeg to construct a split network; the PP-LiteSeg is a deep learning segmentation model based on a layer jump connection structure, and consists of an encoder, a decoder and an aggregation module; the encoder adopts a lightweight network, extracts characteristics from different stages, reduces the number of channels along with the increase of the number of network layers, and can avoid influencing the model efficiency due to excessive model parameters; the decoder is a reverse encoder, the number of shallow characteristic channels is gradually reduced, and finally, a pyramid module is adopted to aggregate the context information, so that richer semantic information is obtained; meanwhile, an attention mechanism is applied to the decoder, so that fusion and utilization of multi-scale semantic features are realized, and the segmentation precision is improved; the decoded middle three-layer feature diagram is required to be up-sampled and trained during training, and only the last layer of result is up-sampled and output during reasoning; training the model after acquiring the bamboo tube image dataset in the mode of the first embodiment, so that the segmentation model can obtain a corresponding segmentation result diagram based on the input bamboo tube end face picture; wherein the segmentation result graph may be a binary graph; referring to fig. 3, the prediction model may be further trained based on a binary image.
On the basis of the segmentation model, a segmentation-classification model is further constructed. Specifically:
the segmentation image in the training result set is multiplied by the corresponding bamboo tube image in the bamboo tube image dataset, i.e. the segmentation result is multiplied by the original bamboo tube image, to obtain a fused image; the background area in the fused image is then filled to obtain a classification image, and all the classification images obtained in this way are used as the training set of the classification model; based on the input bamboo tube end face image and the segmentation result, the classification model can judge whether the bamboo tube in the end face image has defects, distinguishing rotten bamboo tubes from normal ones. In an alternative embodiment, the classification network is constructed with RepVGG. The RepVGG module is a neural network module based on structural re-parameterization; in a specific implementation it is built from 3x3 convolution layers, so that each RepVGG module can be regarded as a combination of three simple convolution branches. The three branches are trained simultaneously in the training stage, which lets the model learn more effective feature representations and improves classification performance. In the inference stage, the RepVGG module uses the structural re-parameterization technique to merge the three branches into an equivalent convolution layer, which improves speed and efficiency and reduces memory consumption.
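The construction of the classification training set described above (segmentation result multiplied by the original picture, background filled) can be sketched as follows; the 0/1 mask convention and the fill value of 128 are illustrative assumptions.

```python
# Sketch of building one classification sample: mask out everything except the
# segmented bamboo wall, then fill the background with a constant value.
import numpy as np

def make_classification_sample(image, mask, fill_value=128):
    """image: (H, W, 3) uint8 end-face picture; mask: (H, W), 1 = bamboo wall, 0 = background."""
    fused = image * mask[..., None]          # keep only the segmented wall region
    fused[mask == 0] = fill_value            # fill the background area
    return fused.astype(np.uint8)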
Further, two loss functions are introduced into the segmentation-classification model for mixed training. Specifically, a cross-entropy loss function $L_{seg}$ is introduced into the segmentation branch to train the bamboo tube segmentation effect, generally taken as the superposition of the cross-entropy loss of each pixel, and a label smoothing loss function $L_{cls}$ is introduced into the classification branch to train the classification of bamboo tube wall defects and enhance generalization:

$$L_{seg} = -\frac{1}{N}\sum_{n=1}^{N}\left[ y_n \log p_n + (1 - y_n)\log(1 - p_n) \right]$$

$$y_i' = (1 - \varepsilon)\, y_i + \frac{\varepsilon}{K}$$

$$L_{cls} = -\sum_{i=1}^{K} y_i' \log q_i$$

$$L = \alpha L_{seg} + \beta L_{cls}$$

where $N$ is the number of image pixels, $y_n$ is the real label of pixel $n$ and $p_n$ is the predicted probability that pixel $n$ belongs to the foreground; $K$ is the number of sample classes, $y_i$ is the true label for class $i$, $q_i$ is the predicted probability of class $i$, and $\varepsilon$ is a small constant so that the optimization targets in the loss are no longer exactly 1 and 0, which avoids overfitting; $\alpha$ and $\beta$ are the coefficients representing the weights of the two losses and are both set to 0.5; $L$ denotes the total loss, and iterative weight optimization is finally performed according to $L$. In an alternative embodiment, the training optimizer is SGD, the weight decay parameter weight_decay is set to 4e-5, the momentum parameter momentum is set to 0.9, the initial learning rate is 0.01, the number of samples per iteration batch_size is set to 32, and the number of training iterations iters is 5000.
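A hedged PyTorch sketch of this mixed objective is given below: it combines a per-pixel cross-entropy loss on the segmentation branch with a label-smoothing cross-entropy loss on the classification branch using the 0.5/0.5 weights and the SGD settings listed above. The tensor shapes, the smoothing constant of 0.1 and the helper name total_loss are assumptions, and the label_smoothing argument requires PyTorch 1.10 or later.

```python
# Mixed segmentation + classification loss, L = alpha*L_seg + beta*L_cls (sketch).
import torch
import torch.nn as nn

seg_loss_fn = nn.CrossEntropyLoss()                       # per-pixel cross entropy
cls_loss_fn = nn.CrossEntropyLoss(label_smoothing=0.1)    # label smoothing on the class branch
alpha, beta = 0.5, 0.5

def total_loss(seg_logits, seg_targets, cls_logits, cls_targets):
    """seg_logits: (B, C, H, W); seg_targets: (B, H, W); cls_logits: (B, K); cls_targets: (B,)."""
    return alpha * seg_loss_fn(seg_logits, seg_targets) + beta * cls_loss_fn(cls_logits, cls_targets)

# Optimizer settings from the embodiment (model is whatever joint network is trained):
# optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=4e-5)
```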
S22, calculating a maximum circumscribed rectangular frame corresponding to the segmentation result diagram, and acquiring a center point and the length of the maximum circumscribed rectangular frame;
S23, taking the center point of the maximum circumscribed rectangular frame as the circle center and the length of the maximum circumscribed rectangular frame as the radius, performing length detection on the segmentation result graph to obtain pixel length information, specifically:
S231, acquiring a preset detection interval angle, for example an interval angle of 1 degree;
S232, extending outwards at the interval angle in turn to calculate the pixel length corresponding to the current angle until the detection angle reaches 360 degrees; that is, the pixel length between the center point and the inner and outer walls of the bamboo tube at the current angle is detected sequentially from 0 to 360 degrees at an interval of 1 degree;
S233, obtaining the pixel length information from the pixel lengths corresponding to all the interval angles, i.e. the pixel length corresponding to every angle from 0 to 360 degrees;
S24, obtaining the pixel precision corresponding to the segmentation result diagram, and obtaining the wall thickness value and the outer diameter value of the bamboo tube in the end face image from the pixel length information and the pixel precision, namely: length = pixel precision x pixel length. In this way, the average wall thickness, maximum wall thickness, minimum wall thickness, average outer diameter, maximum outer diameter and minimum outer diameter of the bamboo tube are obtained.
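A minimal sketch of steps S22-S24 follows, assuming OpenCV and NumPy: it computes the circumscribed rectangle of the binary segmentation map, casts a ray every 1 degree from the rectangle centre, counts wall pixels along each ray and converts the counts to millimetres using the pixel precision. The pixel precision value is an illustrative assumption, and only wall thickness statistics are shown; the outer diameter statistics can be derived the same way by measuring to the outer boundary of the mask.

```python
# Radial wall-thickness measurement from a binary segmentation map (sketch).
import cv2
import numpy as np

def measure_wall(mask, pixel_precision_mm=0.1, step_deg=1):
    """mask: (H, W) uint8 binary map with non-zero pixels on the bamboo wall."""
    x, y, w, h = cv2.boundingRect(mask)              # maximum circumscribed rectangle
    cx, cy = x + w / 2.0, y + h / 2.0                # centre point of the rectangle
    reach = max(w, h)                                # scan length in pixels
    wall_px = []
    for deg in range(0, 360, step_deg):
        theta = np.deg2rad(deg)
        rs = np.arange(reach)
        xs = np.clip((cx + rs * np.cos(theta)).astype(int), 0, mask.shape[1] - 1)
        ys = np.clip((cy + rs * np.sin(theta)).astype(int), 0, mask.shape[0] - 1)
        wall_px.append(int((mask[ys, xs] > 0).sum()))  # wall pixel count along this ray
    wall_mm = np.array(wall_px) * pixel_precision_mm   # length = pixel precision x pixel length
    return {"mean": wall_mm.mean(), "max": wall_mm.max(), "min": wall_mm.min()}
```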
Embodiment 3
This embodiment differs from Embodiments 1 and 2 in that the predicted wall thickness value is further corrected, specifically:
referring to fig. 4 and fig. 5, after obtaining a binary image of an end face image of a bamboo tube to be identified through a segmentation classification model, further calculating to obtain a corresponding wall thickness value and an outer diameter value according to the binary image, inputting the wall thickness value, the outer diameter value and a target length to be predicted into the prediction model, and outputting the wall thickness value and the outer diameter value corresponding to the end face of the bamboo tube under the predicted target length by the prediction model;
while the predicting step is being performed, a correction judging step is performed:
a1, obtaining a picture of the end face of the bamboo tube to be identified;
a2, obtaining an internal image of the bamboo tube according to the segmentation result and the end face picture of the bamboo tube to be identified;
a3, acquiring brightness information corresponding to the inner image of the bamboo tube, judging whether the bamboo tube in the end face image is a bamboo joint area according to the brightness information, and if so, executing the step A4; if not, directly outputting the wall thickness value and the outer diameter value of the prediction model; if the brightness corresponding to the image inside the bamboo tube is larger, the possibility of the bamboo joint is larger; the judgment can be performed by setting a brightness threshold;
a4, correcting the wall thickness value; adjusting the brightness and the brightness threshold value corresponding to the internal image of the bamboo tube; or correcting according to the size ratio of the bamboo joint position to the hollow position under the normal condition, for example, setting the correction value to be 10% -20%.
Embodiment 4
Referring to fig. 6, a device for detecting the wall thickness of a bamboo tube includes a memory, a processor and a computer program stored in the memory and executable on the processor, and when executing the computer program the processor implements each step in the method for constructing a detection model of the wall thickness of a bamboo tube according to Embodiments 1, 2 and 3.
In summary, the method for constructing a detection model of the wall thickness of a bamboo tube and the detection device provided by the invention adopt a deep learning model: the end face image of the bamboo tube to be identified is segmented and classified by the constructed segmentation-classification model, so that the end face wall thickness information of the current bamboo tube can be rapidly extracted from the segmentation image, rapid and accurate detection of the bamboo tube wall thickness is realized, and classification and prediction can be carried out according to the imaging of the end face image. Meanwhile, a regression algorithm is used to predict the wall thickness and the outer diameter of the bamboo tube end face at the target length: the constructed prediction model predicts, from the segmentation result diagram and the end face image of the bamboo tube to be identified, the wall thickness information of the other end faces of the bamboo tube along the length direction, so that the overall wall thickness information of the bamboo tube is obtained by integrating the wall thickness information of the imaged end face with that of the other end faces along the length direction.
The foregoing description is only illustrative of the present invention and is not intended to limit the scope of the invention, and all equivalent changes made by the specification and drawings of the present invention, or direct or indirect application in the relevant art, are included in the scope of the present invention.

Claims (10)

1. The method for constructing the detection model of the wall thickness of the bamboo tube is characterized by comprising the following steps:
acquiring end face images of a preset number of bamboo tubes under different lengths;
obtaining wall thickness values and outer diameter values of each bamboo tube in the end face image under different lengths according to the end face image;
obtaining training sets corresponding to different bamboo tubes according to the corresponding length intervals between different end face images of the same bamboo tube and the corresponding wall thickness values and outer diameter values of different end face images;
obtaining a prediction target length, and obtaining a prediction model according to the prediction target length and a training set; and outputting a wall thickness value and an outer diameter value corresponding to the end face of the bamboo tube under the predicted target length by the prediction model.
2. The method for constructing a model for detecting wall thickness of bamboo tubes according to claim 1, wherein the step of obtaining end face images of a preset number of bamboo tubes at different lengths comprises:
acquiring end face pictures of a preset number of bamboo tubes under different lengths;
performing perspective transformation on the end face picture to obtain a corrected image;
identifying and extracting a bamboo tube region in the correction image to obtain a target bamboo tube region image;
and carrying out data enhancement on the target bamboo tube region image to obtain the end face image.
3. The method for constructing a bamboo tube wall thickness detection model according to claim 1, wherein the obtaining the wall thickness value and the outer diameter value of each bamboo tube in the end face image under different lengths according to the end face image comprises:
constructing a segmentation model, and segmenting the end face image according to the segmentation model to obtain a segmentation result graph;
and obtaining a wall thickness value and an outer diameter value corresponding to the bamboo tube in the end face image according to the segmentation result diagram.
4. The method for constructing a bamboo tube wall thickness detection model according to claim 3, wherein the obtaining the wall thickness value and the outer diameter value corresponding to the bamboo tube in the end face image according to the segmentation result graph comprises:
calculating a maximum circumscribed rectangular frame corresponding to the segmentation result diagram, and acquiring a center point and the length of the maximum circumscribed rectangular frame;
taking the center point of the maximum circumscribed rectangular frame as the circle center and the length of the maximum circumscribed rectangular frame as the radius, and performing length detection on the segmentation result graph to obtain pixel length information;
and obtaining pixel precision corresponding to the segmentation result diagram, and obtaining a wall thickness value and an outer diameter value corresponding to the bamboo tube in the end face image according to the pixel length information and the pixel precision.
5. The method for constructing a bamboo tube wall thickness detection model according to claim 4, wherein the performing length detection on the segmentation result graph to obtain pixel length information comprises:
acquiring a preset detected interval angle;
sequentially extending outwards at the interval angle to calculate the pixel length corresponding to the current interval angle until the detection angle is equal to 360 degrees;
and obtaining the pixel length information according to the pixel lengths corresponding to all the interval angles.
6. The method for constructing a bamboo tube wall thickness detection model according to claim 3, wherein after the prediction model outputs the wall thickness value and the outer diameter value corresponding to the end face of the bamboo tube under the predicted target length, the method further comprises:
acquiring an end face picture of the bamboo tube, and acquiring an internal image of the bamboo tube according to the segmentation result picture and the end face picture of the bamboo tube;
acquiring brightness information corresponding to the internal image of the bamboo tube;
and judging whether the bamboo tube in the end face image is a bamboo joint area or not according to the brightness information, and if so, correcting the wall thickness value.
7. A method for constructing a model for detecting wall thickness of a bamboo tube according to claim 3, wherein the constructing a segmentation model comprises:
acquiring a bamboo tube image dataset;
a segmentation network is built by adopting PP-LiteSeg, and a cross entropy loss function is set at an output layer of the segmentation network to obtain a segmentation learning model;
and training the segmentation learning model according to the bamboo tube image dataset to obtain the segmentation model.
8. The method for constructing a model for detecting wall thickness of a bamboo tube according to claim 7, wherein the steps of dividing the end face image according to the division model to obtain a division result graph comprise:
constructing a segmentation classification model, and inputting the segmentation result graph and the end face image into the segmentation classification model to obtain a classification result;
and judging whether the bamboo tube in the end face image has defects according to the classification result.
9. The method for constructing a bamboo tube wall thickness detection model according to claim 8, wherein the constructing a segmentation classification model comprises:
adopting RepVGG to construct a classification network, and setting a label smoothing loss function at an output layer of the classification network to obtain a classification learning model;
acquiring a training result set corresponding to the bamboo tube image data set generated in the segmentation model training process;
correspondingly multiplying the training result set with the bamboo tube image data set to obtain a classified training set;
and training the classification learning model according to the classification training set to obtain the segmentation classification model.
10. A device for detecting the wall thickness of a bamboo tube, comprising a memory, a processor and a computer program stored on the memory and running on the processor, wherein the processor implements the steps of a method for constructing a model for detecting the wall thickness of a bamboo tube according to any one of claims 1 to 9 when executing the computer program.
CN202310691923.5A 2023-06-13 2023-06-13 Construction method and detection device for detection model of wall thickness of bamboo tube Active CN116433747B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310691923.5A CN116433747B (en) 2023-06-13 2023-06-13 Construction method and detection device for detection model of wall thickness of bamboo tube

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310691923.5A CN116433747B (en) 2023-06-13 2023-06-13 Construction method and detection device for detection model of wall thickness of bamboo tube

Publications (2)

Publication Number Publication Date
CN116433747A true CN116433747A (en) 2023-07-14
CN116433747B CN116433747B (en) 2023-08-18

Family

ID=87080072

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310691923.5A Active CN116433747B (en) 2023-06-13 2023-06-13 Construction method and detection device for detection model of wall thickness of bamboo tube

Country Status (1)

Country Link
CN (1) CN116433747B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113203363A (en) * 2021-04-08 2021-08-03 福建呈祥机械制造有限公司 Bamboo tube measuring method and measuring device based on digital image processing technology
CN113689415A (en) * 2021-08-30 2021-11-23 安徽工业大学 Steel pipe wall thickness online detection method based on machine vision
WO2021238826A1 (en) * 2020-05-26 2021-12-02 苏宁易购集团股份有限公司 Method and apparatus for training instance segmentation model, and instance segmentation method
CN115620301A (en) * 2022-10-20 2023-01-17 天津大学 Method for extracting production sequence number of machine vision oil-gas pipeline bracket

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021238826A1 (en) * 2020-05-26 2021-12-02 苏宁易购集团股份有限公司 Method and apparatus for training instance segmentation model, and instance segmentation method
CN113203363A (en) * 2021-04-08 2021-08-03 福建呈祥机械制造有限公司 Bamboo tube measuring method and measuring device based on digital image processing technology
CN113689415A (en) * 2021-08-30 2021-11-23 安徽工业大学 Steel pipe wall thickness online detection method based on machine vision
CN115620301A (en) * 2022-10-20 2023-01-17 天津大学 Method for extracting production sequence number of machine vision oil-gas pipeline bracket

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
尚新龙; 毛腾飞; 管鑫; 王堋人; 李东晓: "Study on the Distribution of Bamboo Fibres in Natural Bamboo Tubes" (天然竹筒内竹纤维的分布规律研究), 玻璃钢/复合材料 (Fiber Reinforced Plastics/Composites), no. 03 *

Also Published As

Publication number Publication date
CN116433747B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
WO2023077816A1 (en) Boundary-optimized remote sensing image semantic segmentation method and apparatus, and device and medium
CN108509978B (en) Multi-class target detection method and model based on CNN (CNN) multi-level feature fusion
CN115049936B (en) High-resolution remote sensing image-oriented boundary enhanced semantic segmentation method
CN113436169B (en) Industrial equipment surface crack detection method and system based on semi-supervised semantic segmentation
CN112364931B (en) Few-sample target detection method and network system based on meta-feature and weight adjustment
CN110796009A (en) Method and system for detecting marine vessel based on multi-scale convolution neural network model
CN116228792A (en) Medical image segmentation method, system and electronic device
CN114663380A (en) Aluminum product surface defect detection method, storage medium and computer system
CN113420619A (en) Remote sensing image building extraction method
CN115861281A (en) Anchor-frame-free surface defect detection method based on multi-scale features
CN115731400A (en) X-ray image foreign matter detection method based on self-supervision learning
CN116403042A (en) Method and device for detecting defects of lightweight sanitary products
CN114998360A (en) Fat cell progenitor cell segmentation method based on SUnet algorithm
CN110991374A (en) Fingerprint singular point detection method based on RCNN
CN114612803A (en) Transmission line insulator defect detection method for improving CenterNet
CN116934687B (en) Injection molding product surface defect detection method based on semi-supervised learning semantic segmentation
CN116433747B (en) Construction method and detection device for detection model of wall thickness of bamboo tube
CN110136098B (en) Cable sequence detection method based on deep learning
CN112418229A (en) Unmanned ship marine scene image real-time segmentation method based on deep learning
CN117274355A (en) Drainage pipeline flow intelligent measurement method based on acceleration guidance area convolutional neural network and parallel multi-scale unified network
CN116363610A (en) Improved YOLOv 5-based aerial vehicle rotating target detection method
CN113269734B (en) Tumor image detection method and device based on meta-learning feature fusion strategy
CN114359229A (en) Defect detection method based on DSC-UNET model
CN116309601B (en) Leather defect real-time detection method based on Lite-EDNet
Jia et al. LPSST: Improved Transformer Based Drainage Pipeline Defect Recognition Algorithm

Legal Events

Date Code Title Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant