CN113450382B - Different fiber segmentation method based on image center regression - Google Patents


Info

Publication number
CN113450382B
CN113450382B (application CN202110804142.3A)
Authority
CN
China
Prior art keywords
image
fiber
segmentation
cottonres
target
Prior art date
Legal status
Active
Application number
CN202110804142.3A
Other languages
Chinese (zh)
Other versions
CN113450382A (en)
Inventor
Wei Wei (魏巍)
Zeng Lin (曾霖)
Zhang Chen (张晨)
Current Assignee
Wuhan Zhimu Intelligent Technology Partnership LP
Original Assignee
Wuhan Zhimu Intelligent Technology Partnership LP
Priority date
Filing date
Publication date
Application filed by Wuhan Zhimu Intelligent Technology Partnership LP filed Critical Wuhan Zhimu Intelligent Technology Partnership LP
Priority to CN202110804142.3A priority Critical patent/CN113450382B/en
Publication of CN113450382A publication Critical patent/CN113450382A/en
Application granted granted Critical
Publication of CN113450382B publication Critical patent/CN113450382B/en

Classifications

    • G06T7/136 Segmentation; edge detection involving thresholding
    • G06N3/045 Neural network architectures; combinations of networks
    • G06T7/11 Region-based segmentation
    • G06T7/194 Segmentation involving foreground-background segmentation
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/10024 Color image (image acquisition modality)
    • G06T2207/20064 Wavelet transform [DWT]
    • G06T2207/20081 Training; learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30124 Fabrics; textile; paper (industrial image inspection)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a foreign fiber segmentation method based on image center regression, relating to the field of foreign fiber image processing. To address the problem that the overall proportional content of foreign fibers cannot be obtained merely by counting them, the following scheme is proposed, comprising the steps: S1, inputting foreign fiber images of the cotton flow in the foreign fiber machine, acquired by the image acquisition equipment; S2, segmenting the foreign fiber image from S1 with the CottonRes-YNet algorithm, which determines whether each pixel in the segmented image belongs to the target or the background; and S3, calculating the actual size corresponding to each pixel from the mechanical parameters of the image acquisition equipment in the foreign fiber machine, and computing the length or area of the target in the segmented foreign fiber image. The invention provides the hybrid foreign fiber segmentation network CottonRes-YNet, whose segmentation accuracy, measured by mean intersection over union, reaches 90.3% on the validation set.

Description

Different fiber segmentation method based on image center regression
Technical Field
The invention relates to the field of foreign fiber image processing, and in particular to a foreign fiber segmentation method based on image center regression.
Background
Cotton is an important national strategic material, closely tied to people's daily lives, and plays an important role in the national economy. Various foreign fibers are inevitably mixed into cotton during planting, transportation and production; if they are not removed in time, they can damage textile machinery, reduce the quality of the final cotton textile products, and cause economic loss.
Foreign fiber detection products based on machine vision are now widely used in cotton mills; during detection and classification the number of each type of foreign fiber is known, so foreign fiber content can be estimated quantitatively. However, this method is not particularly accurate. For polypropylene filaments of different lengths, for example, the effect on the yarn product is necessarily proportional to length: the longer the filament, the more severe its effect on the downstream yarn. The overall proportional content of foreign fibers cannot be obtained from counts alone, so the impact on yarn quality cannot be comprehensively evaluated. A foreign fiber segmentation method based on image center regression is therefore designed to solve these problems.
Disclosure of Invention
The invention aims to remedy the above defects in the prior art by providing a foreign fiber segmentation method based on image center regression.
In order to achieve the purpose, the invention adopts the following technical scheme:
a heterogeneous fiber segmentation method based on image center regression comprises the following steps:
S1, inputting foreign fiber images of the cotton flow in the foreign fiber machine, acquired by the image acquisition equipment;
S2, segmenting the foreign fiber image from S1 with the CottonRes-YNet algorithm, which determines whether each pixel in the segmented image belongs to the target or the background;
S3, calculating the actual size corresponding to each pixel from the mechanical parameters of the image acquisition equipment in the foreign fiber machine, and computing the length or area of the target in the segmented foreign fiber image;
the CottonRes-YNet takes image slice data as input, each slice being 1/8 of the original picture (see the sketch after this list), and outputs eight segmentation mask images and eight predicted-coordinate error energy maps;
the segmentation mask carries the target/foreground label, and the regression branch carries the relative position of the target prediction point with respect to the slice center.
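As an illustrative sketch of the slicing step (the 2 x 4 tiling and the image dimensions are assumptions; the patent only fixes the slice size at 1/8 of the original picture):

```python
import numpy as np

def slice_image(image: np.ndarray, rows: int = 2, cols: int = 4) -> list:
    """Cut an H x W x C image into rows * cols equal tiles, so that each
    tile is 1/8 of the original picture for the default 2 x 4 grid."""
    h, w = image.shape[:2]
    th, tw = h // rows, w // cols
    return [image[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            for r in range(rows) for c in range(cols)]

# Eight slices go in; the network returns eight segmentation masks and
# eight predicted-coordinate error energy maps, one pair per slice.
image = np.zeros((512, 1024, 3), dtype=np.uint8)
tiles = slice_image(image)
assert len(tiles) == 8 and tiles[0].shape == (256, 256, 3)
```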
Preferably, the CottonRes-YNet encoder part is consistent with the CottonRes network body, the decoding network is symmetrically designed, feature concatenation is added on the basis of U-Net, and the training loss function is defined as:
$$L = L_{seg} + \lambda L_{pos}$$
the network loss function is formed by combining slice type output and position coordinate loss, and the position prediction loss function of the special fiber image characteristic points is defined as follows:
$$L_{pos} = \sum_{j \in Pos} \left\| \hat{P}_j - P_j \right\|_2^2 \qquad (3\text{-}8)$$
where $Pos$ denotes the set of labelled foreign fiber coordinates, $\hat{P}_j$ is the predicted coordinate position, and $P_j$ is the relative position of the calibrated feature point coordinate with respect to the slice image center point;
the position prediction loss function of the feature points defined by the equation (3-8) is used for training the relative distance between the predicted target feature point position and the slice center in the segmentation network.
Preferably, the image acquisition device is a line-scan camera in the foreign fiber machine. Assuming the line-scan camera samples S lines per second and the corresponding target foreign fiber speed is V0, then along the direction of fiber motion, i.e. the longitudinal direction, each row of pixels corresponds to a length of V0/S. Assuming the lateral imaging range is P and the lateral resolution of the camera is D, each pixel corresponds to a lateral length of P/D.
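A minimal sketch of these two conversions (function and parameter names are illustrative, not from the patent):

```python
def pixel_size_mm(line_rate_hz: float, fiber_speed_mm_s: float,
                  field_of_view_mm: float, lateral_resolution_px: int):
    """Physical length of one pixel: longitudinally the web travel
    between two line scans (V0 / S), laterally the field of view
    divided by the sensor width in pixels (P / D)."""
    longitudinal = fiber_speed_mm_s / line_rate_hz
    lateral = field_of_view_mm / lateral_resolution_px
    return longitudinal, lateral

# With the machine parameters quoted later in the description
# (9125 lines/s, 10 m/s cotton flow, 621 mm field of view, 2096 px):
print(pixel_size_mm(9125, 10_000, 621, 2096))  # ~(1.1, 0.296) mm
```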
The invention has the following beneficial effects. The image segmentation framework based on deep learning adds a coordinate regression branch between image center and target position on top of a segmentation network, yielding the hybrid foreign fiber segmentation network CottonRes-YNet, whose mean intersection over union reaches 90.3% on the validation set. Combined with the physical and mechanical structure of the acquisition equipment, the correspondence between the image and the real foreign fiber size is calculated; the mean square error between the size estimates and reference sizes is within 4%. Compared with the sampling-based evaluation in the national standard, the foreign fiber content of raw cotton can be evaluated more comprehensively.
Drawings
FIG. 1 is a schematic diagram of a fully convolutional neural network codec according to the present invention;
FIG. 2 is a schematic diagram of the structure of CottonRes-YNet in the present invention;
FIG. 3 is a schematic view of the thinning of a linear foreign fiber in the present invention;
FIG. 4 is a diagram illustrating foreign fiber segmentation prediction results according to the present invention;
FIG. 5 is a schematic diagram comparing the foreign fiber image segmentation results of the present invention and traditional algorithms;
FIG. 6 is a schematic diagram comparing the foreign fiber image segmentation results of the present invention and classical networks.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
A foreign fiber segmentation method based on image center regression comprises the following steps:
S1, inputting foreign fiber images of the cotton flow in the foreign fiber machine, acquired by the image acquisition equipment;
S2, segmenting the foreign fiber image from S1 with the CottonRes-YNet algorithm, which determines whether each pixel in the segmented image belongs to the target or the background;
and S3, calculating the actual size corresponding to each pixel from the mechanical parameters of the image acquisition equipment in the foreign fiber machine, and computing the length or area of the target in the segmented foreign fiber image.
The image segmentation architecture can be regarded as an encoder-decoder network. The encoder performs feature extraction; the commonly used classical feature extraction networks, chiefly VGG and Res-Net, have achieved very good results in various large-scale dataset classification competitions. FCN, U-Net and SegNet differ mainly in the feature fusion means of the decoder, but in every architecture the decoder's goal is to map the discriminative features learned by the encoder from image semantics back to pixel space, so as to obtain a classification result for each pixel.
Referring to fig. 1, the fully convolutional segmentation network (FCN) replaces the fully connected (FC) layers of a classification network with convolutional layers to form a fully convolutional network, then uses deconvolution to upsample the last feature layer back to the original image size, so that each pixel receives a class prediction that serves as the final segmentation result.
The encoder portion of the FCN remains substantially identical to the classification network backbone. The last convolutional layer of the encoder is gradually restored to the original image size by deconvolution, and the FCN output applies Softmax to the feature map for pixel-level classification, yielding the segmentation image.
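A toy PyTorch sketch of this encoder-decoder idea (layer sizes are assumptions; this is not the patented network):

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Minimal fully convolutional net: the encoder downsamples 4x, a
    1x1 convolution stands in for the classifier's FC layer, and a
    transposed convolution (deconvolution) restores input resolution."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(32, num_classes, 1)  # FC -> conv
        self.upsample = nn.ConvTranspose2d(num_classes, num_classes,
                                           kernel_size=4, stride=4)

    def forward(self, x):
        logits = self.upsample(self.classifier(self.encoder(x)))
        return torch.softmax(logits, dim=1)  # pixel-level class scores

scores = TinyFCN()(torch.randn(1, 3, 256, 256))
print(scores.shape)  # torch.Size([1, 2, 256, 256])
```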
Referring to fig. 2, inspired by the Mask R-CNN candidate box mechanism, the inventor divides the target foreign fiber picture into several equal parts according to a specified rule and adds, on top of the segmentation network, a coordinate regression branch that predicts the relative distance between pixel coordinates and the slice center. Taking U-Net as the example, the improved network structure is shown in FIG. 2; since the structure resembles the letter Y, the invention names it CottonRes-YNet.
The CottonRes-YNet takes image slice data as input, each slice being 1/8 of the original picture, and outputs eight segmentation mask images and eight predicted-coordinate error energy maps. The segmentation mask carries the target/foreground label; the regression branch carries the relative position of the target prediction point with respect to the slice center. The final segmentation result is obtained by fusing the features of the coordinate energy map and the mask. In the overall structure, the inventor adds residual modules in the upper layers of the network because the slice data scale is small.
The CottonRes-YNet uses the regression branch to constrain single-pixel class prediction in the segmentation network at the granularity of an image slice, easing the limitation of conventional segmentation networks that treat pixels independently. Through the concept of a target block, the position information of the foreign fiber target is fused into network training, so that the segmentation network attends more to the target as a whole, further improving segmentation accuracy.
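The Y-shaped two-branch design can be sketched schematically as follows (a stand-in with assumed layer sizes and a simple elementwise-product fusion; the actual CottonRes backbone and fusion details are not reproduced here):

```python
import torch
import torch.nn as nn

class TinyYNet(nn.Module):
    """Schematic Y-shaped net: one shared encoder feeding two heads, a
    segmentation mask and a coordinate-error energy map, fused into the
    final per-pixel segmentation score."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.mask_head = nn.Conv2d(32, 1, 1)   # target/background logit
        self.coord_head = nn.Conv2d(32, 1, 1)  # centre-distance energy

    def forward(self, x):
        feats = self.encoder(x)
        mask = torch.sigmoid(self.mask_head(feats))
        energy = torch.sigmoid(self.coord_head(feats))
        return mask * energy  # feature fusion of energy map and mask

out = TinyYNet()(torch.randn(8, 3, 128, 128))  # one batch of 8 slices
print(out.shape)  # torch.Size([8, 1, 128, 128])
```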
The CottonRes-YNet encoder part is consistent with the CottonRes network body, the decoding network is symmetrically designed, and feature concatenation is added following the U-Net idea. The training loss function is defined as:
$$L = L_{seg} + \lambda L_{pos}$$
the network loss function is formed by combining slice type output and position coordinate loss, and the inventor defines the position prediction loss function of the special fiber image characteristic points as follows:
$$L_{pos} = \sum_{j \in Pos} \left\| \hat{P}_j - P_j \right\|_2^2 \qquad (3\text{-}8)$$
where $Pos$ denotes the set of labelled foreign fiber coordinates, $\hat{P}_j$ is the predicted coordinate position (a position where foreign fiber may exist in the slice data), and $P_j$ is the relative position of the calibrated feature point coordinate with respect to the slice image center point.
The feature point position prediction loss defined in equation (3-8) trains the segmentation network to predict the relative distance between the target feature point position and the slice center, and in this way the position information of the target is trained as well. The regression branch thus further improves segmentation accuracy in the dimension of position information.
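A sketch of the position term under the squared-Euclidean reading of equation (3-8) above (the framework choice and the weighting of the combined loss are assumptions):

```python
import torch

def position_loss(pred_pos: torch.Tensor, gt_pos: torch.Tensor) -> torch.Tensor:
    """Equation (3-8) read as a squared-Euclidean penalty: each row of
    pred_pos / gt_pos is the (dx, dy) offset of a labelled foreign fiber
    feature point j in Pos from the slice centre; shape (N, 2)."""
    return ((pred_pos - gt_pos) ** 2).sum(dim=1).sum()

# Combined training objective, with an assumed weight lambda_pos:
# total_loss = seg_loss + lambda_pos * position_loss(pred, gt)
```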
An accurately segmented image is a necessary condition for evaluating foreign fiber content. To establish the correspondence between the segmented image and the physical size of the actual target, the optical path and mechanical structure of the image acquisition equipment must be fixed; all cameras are locked with screws at four corners, so every acquired image stands in a linear correspondence with actual object size. In foreign fiber detection, basic morphological processing and pixel counting are applied to the segmented foreign fibers; given the physical size corresponding to each pixel, the actual length or area of the corresponding foreign fiber is easily calculated. Assuming the line-scan camera samples S lines per second and the corresponding target foreign fiber speed is V0, then along the direction of fiber motion, i.e. the longitudinal direction, each row of pixels corresponds to a length of V0/S; assuming the lateral imaging range is P and the lateral resolution of the camera is D, each pixel corresponds to a lateral length of P/D.
Referring to fig. 3, the foreign fiber sorting machine samples 9125 scan lines per second; at a foreign fiber speed of 10 meters per second, each longitudinal pixel corresponds to a length of 1.08 mm. With a lateral resolution of 2096 pixels and a typical imaging range of 621 mm, each lateral pixel corresponds to 0.296 mm. Area is obtained by multiplying the pixel count by the area of a single pixel. Length estimation first requires a thinning operation, which preserves the connectivity of small image parts, reduces redundant pixels, and highlights the morphology of the target. As shown in fig. 3, pixels can be counted after thinning, and the length of a linear foreign fiber estimated by combining the lateral and longitudinal scales.
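A sketch of this size-estimation step; scikit-image's skeletonize is used as the thinning operation (a substitution, since the patent does not name a library), and the per-pixel scales are the ones quoted above:

```python
import numpy as np
from skimage.morphology import skeletonize

LON_MM, LAT_MM = 1.08, 0.296  # per-pixel lengths from the description

def fiber_area_mm2(mask: np.ndarray) -> float:
    """Area estimate: pixel count times the area of one pixel."""
    return float(mask.astype(bool).sum()) * LON_MM * LAT_MM

def fiber_length_mm(mask: np.ndarray) -> float:
    """Length estimate for a linear fiber: thin the mask to a one-pixel
    skeleton, then count pixels; the lateral and longitudinal scales are
    averaged here, a simplification of the direction-aware combination."""
    skeleton = skeletonize(mask.astype(bool))
    return float(skeleton.sum()) * (LON_MM + LAT_MM) / 2.0

mask = np.zeros((200, 200), dtype=np.uint8)
mask[100, 20:180] = 1            # synthetic 160-pixel straight fiber
print(fiber_length_mm(mask))     # ~110 mm under the averaged scale
```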
To evaluate the performance of the foreign fiber image segmentation method, the inventor built a foreign fiber image segmentation dataset and conducted extensive experiments on it; the dataset is summarized in Table 1:
TABLE 1 Foreign fiber image segmentation dataset information
[table image not reproduced]
The dataset covers the main production line speeds of most cotton mills; the mulching film set is slightly larger than the other categories because mulching film is the dominant foreign fiber in domestic raw cotton, and accurate statistics of its content currently has guiding significance for domestic raw cotton quality evaluation. In deep learning training, 80% of each category is used as the training set, 10% as the validation set and 10% as the test set; the validation set evaluates segmentation performance and tunes the training hyper-parameters, and the test set measures the generalization ability of the final algorithm.
Referring to fig. 4, which shows a possible segmentation result for mulching film foreign fibers: T1 is the real foreign fiber area (ground truth), T0 is the manually labelled cotton flow and blank background area, P1 is the predicted mulching film foreign fiber area, and P0 is the predicted cotton flow and blank background area.
These four parts yield four combinations, defined as follows:
TP: true positive; judged as foreign fiber, and the label is foreign fiber. That is, the intersection of T1 and P1, T1∩P1.
TN: true negative; judged as background, and the label is background. That is, the intersection of T0 and P0, i.e. the region outside T1∪P1.
FN: false negative; judged as background, while the label belongs to foreign fiber. That is, the region of T1 lying outside P1.
FP: false positive; judged as foreign fiber, while the label belongs to the background. That is, the region of P1 lying outside T1.
According to the above definitions, the common evaluation criteria for segmentation networks are pixel accuracy (PA), recall, and mean intersection over union (mIoU); the calculation formulas are given in (3-1) and (3-2).
$$PA = \frac{TP + TN}{TP + TN + FP + FN} \qquad (3\text{-}1)$$

$$Recall = \frac{TP}{TP + FN} \qquad (3\text{-}2)$$
where t denotes the detection threshold applied to the per-pixel predictions before computing these counts: a pixel's prediction must exceed the threshold to count as a correct classification, and common values such as 0.5, 0.75 and 0.9 are usually compared. In foreign fiber segmentation the task is simple compared with public large-scale datasets such as COCO, and the goal is an accurate physical size. If the threshold is too small, the computed sizes tend to run large, affecting the accurate evaluation of yarn quality, so this method evaluates only at a segmentation threshold of 0.9. Among the possible prediction outcomes, TP is the intersection of the predicted and real foreign fiber parts and TN the intersection of the predicted and real cotton parts; together they form the set of correctly predicted pixels. PA therefore reflects the proportion of correctly predicted pixels among all pixels and in theory reflects correct detection, while recall reflects the proportion of real foreign fiber pixels that are recovered and to some extent represents the network's missed detections. The mean intersection over union (mIoU) reflects the ratio of intersection to union between the predicted and real results; its calculation formula is given in (3-3).
$$mIoU = \frac{TP}{TP + FP + FN} \qquad (3\text{-}3)$$
Unlike PA, mIoU removes TN, mainly because mIoU measures performance on the class predicted as the target sample.
To evaluate the performance of the segmentation network fully, in addition to the three common criteria above, the invention also introduces evaluation criteria used for segmentation networks in the medical field: the Dice coefficient, the relative volume difference (RVD), the false negative rate (FNR) and the false positive rate (FPR). Their calculation formulas are given in equations (3-4) to (3-7).
$$Dice = \frac{2\,TP}{2\,TP + FP + FN} \qquad (3\text{-}4)$$

$$RVD = \frac{\left| FP - FN \right|}{TP + FN} \qquad (3\text{-}5)$$

$$FNR = \frac{FN}{TP + FN} \qquad (3\text{-}6)$$

$$FPR = \frac{FP}{FP + TN} \qquad (3\text{-}7)$$
The Dice coefficient leans toward a similarity measure and is usually used to compute the similarity between the segmentation result and the ground truth; RVD expresses the voxel (here, pixel) difference between the prediction and the ground truth, an index that measures well the network's ability to predict positive samples; FNR is the proportion of pixels the actual segmentation misses relative to the ground truth; FPR is the proportion of pixels the actual segmentation over-segments relative to the ground truth.
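A sketch implementing the indices of equations (3-1) to (3-7) from binary prediction and ground-truth masks (standard forms assumed):

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Compute PA, Recall, IoU, Dice, RVD, FNR and FPR for one image
    from boolean prediction (P1) and ground-truth (T1) masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = float(np.logical_and(pred, gt).sum())
    tn = float(np.logical_and(~pred, ~gt).sum())
    fp = float(np.logical_and(pred, ~gt).sum())
    fn = float(np.logical_and(~pred, gt).sum())
    eps = 1e-9  # guards against empty masks
    return {
        "PA":     (tp + tn) / (tp + tn + fp + fn + eps),
        "Recall": tp / (tp + fn + eps),
        "IoU":    tp / (tp + fp + fn + eps),   # TN deliberately excluded
        "Dice":   2 * tp / (2 * tp + fp + fn + eps),
        "RVD":    abs(fp - fn) / (tp + fn + eps),
        "FNR":    fn / (tp + fn + eps),
        "FPR":    fp / (fp + tn + eps),
    }
```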
Finally, when evaluating the segmentation performance indices of networks, the computation time of the network model must also be compared: the shorter the segmentation time and the better the indices, the better the network architecture.
Comparison with traditional segmentation algorithms
The classical algorithms compared are: a rough set segmentation technique and a wavelet transform based foreign fiber segmentation method.
Rough set theory (RST) is mainly based on a data knowledge expression system such as an information table, i.e. in the general sense a set of target features to be detected. The invention selects foreign fiber knowledge relating to hair, feather, polypropylene fiber and the like to construct the rough foreign fiber set. The knowledge system comprises the conventional contour moment, aspect ratio, duty ratio, roundness and RGB mean attribute sequences.
The wavelet transform (WT) based foreign fiber segmentation has four steps: first, two rounds of threshold segmentation yield a suspect image; second, the image is wavelet transformed, and feature maps at different scales and directions are obtained using extreme-value wavelet means and a difference calculation rule; third, noise is removed from the feature map of each scale; finally, inverse wavelet transform recovers the foreign fiber target edge information of the original image.
Since no public dataset exists for foreign fiber segmentation, the invention compares the classical algorithms with the proposed algorithm on the self-acquired segmentation dataset.
The method selects 300 pictures of common foreign fiber types, including waste paper scraps, colored fiber cloth, packaging bags, animal feathers, polypropylene fibers, and the mulching film most common in machine-harvested cotton; the categories are evaluated with the mIoU, PA and Recall indices, and the comparison results are shown in Table 2.
TABLE 2 Accuracy of different algorithms on the foreign fiber segmentation data (best results shown in bold)
[table image not reproduced]
Table 2 supports the following conclusions. (1) The deep learning based CottonRes-YNet holds a clear advantage on the main indices. (2) For foreign fibers with obvious color features and large contours, such as colored fiber cloth and waste paper scraps, the classical algorithms' indices can also meet application requirements, because features for these categories are easier to design and obtain. (3) On light-colored polypropylene fiber and mulching film targets, the color and texture feature extractors of the traditional methods struggle to capture the difference from cotton, especially in foreign fiber pictures of complex environments. (4) Deep learning can learn features automatically through the convolutional neural network, obtaining results that satisfy practical application requirements in all categories.
Referring to fig. 5, which shows segmentation result label maps (with segmentation noise removed) of the three algorithms on typical foreign fibers, the following is concluded. (1) Where color features and contours are obvious, the three algorithms segment similarly; for example, the waste paper scrap foreign fibers in the first row of fig. 5 all obtain good segmentation results. (2) For mulching film foreign fibers whose characteristics are close to cotton (last row of fig. 5), only CottonRes-YNet segments and identifies effectively. (3) For foreign fibers with inconspicuous color or detail (second and third rows of fig. 5), the light-colored polypropylene fibers differ only slightly from the background, and the RST and WT methods mis-segment severely and cannot meet application requirements. (4) For foreign fiber segmentation, color and contour features or manually designed features alone cannot fully solve the problem. (5) On these test samples, the convolutional neural network based CottonRes-YNet works best, with the most stable and highest IoU.
Compared with the classical algorithms, the image segmentation method based on deep learning can learn many features that cannot be designed manually, ensuring the universality of the segmentation algorithm across foreign fiber types, production lines and optical path conditions.
Comparative analysis of segmentation model improvements
To verify the segmentation performance of CottonRes-YNet, the invention compares segmentation accuracy and training speed against the FCN, SegNet and U-Net commonly used in deep learning. The compared contents include the segmentation indices IoU, PA (accuracy), Recall (missed detection), Dice (similarity), RVD (target identification ability), FNR (under-segmentation index) and FPR (over-segmentation index). The detailed hardware and software parameters used for training are shown in Table 3.
TABLE 3 training hyper-parameters and platform configuration
[table image not reproduced]
The experimental comparison results are shown in table 4.
TABLE 4 Segmentation indices of different network architectures (best results shown in bold)
[table image not reproduced]
In Table 4, CottonRes-YNet takes the leading result on all indices, and U-Net's indices are second only to CottonRes-YNet; this fully demonstrates that the added coordinate regression branch effectively improves the classification accuracy of the traditional U-Net, the added relative position energy map raising IoU by 3.6% over the U-Net baseline.
Referring to fig. 6, CottonRes-YNet introduces a coordinate regression model into the segmentation network and, through the image slicing method, constrains the neural network to focus on the region around the target, improving the completeness of foreign fiber edge contours. Fig. 6 shows the output results of some typical samples under the different algorithms. From Table 4 and fig. 6 the following is concluded. (1) Since the output of a segmentation network is obtained by deconvolving the feature map, all network outputs show blurred detail at boundaries. (2) Through regression of the center coordinates, CottonRes-YNet handles detailed contours better than the other methods; but since the network uses the slicing technique, some information at the slice positions is lost (second row of fig. 6). (3) The segmentation output of CottonRes-YNet is visually closer to the manual label, and its IoU values are also higher.
Foreign fiber size estimation test
The manual method for quantitatively measuring foreign fiber size uses a caliper to measure the length of linear foreign fibers and an area gauge to estimate blocky, irregular foreign fibers. The gauge consists of squares 0.5 cm in length and width; if a square is filled with foreign fiber, its area is counted as 0.25 cm2. For a region smaller than one cell, the proportion of target pixels in the cell is judged: if it exceeds 40%, the target area is counted as 0.25 cm2, otherwise as 0.
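A sketch reproducing this manual grid rule in code (the pixel-per-centimetre scale is an assumed input):

```python
import numpy as np

def grid_area_cm2(mask: np.ndarray, px_per_cm: float,
                  cell_cm: float = 0.5, fill_threshold: float = 0.40) -> float:
    """Overlay a grid of cell_cm x cell_cm squares on the binary mask;
    a cell counts as 0.25 cm^2 when more than 40% of its pixels belong
    to the target, otherwise 0, mirroring the manual gauge."""
    mask = mask.astype(bool)
    cell_px = max(1, int(round(cell_cm * px_per_cm)))
    h, w = mask.shape
    area = 0.0
    for r in range(0, h, cell_px):
        for c in range(0, w, cell_px):
            if mask[r:r + cell_px, c:c + cell_px].mean() > fill_threshold:
                area += cell_cm * cell_cm
    return area
```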
Hand-picked cotton generally contains more artificially introduced foreign fibers such as polypropylene fibers and waste paper scraps, in higher content, and belongs to raw cotton of poorer grade. Machine-harvested cotton is currently concentrated in the Xinjiang region of China; during the planting stage there, plastic film mulching is mainly used to lock in water, so more film is mixed in during machine harvesting, while artificial foreign fibers such as polypropylene fibers and waste paper scraps are fewer. To save cost, many cotton mills produce from a mixture of machine-harvested and hand-picked cotton. On two yarn production lines with different raw cotton ratios, the inventor randomly selected a two-hour period for comparison; the specific results are shown in Table 5.
TABLE 5 statistics of various types of fibers at given time periods
[table image not reproduced]
In Table 5, polypropylene filaments are the majority in hand-picked cotton, and mulching film foreign fibers are the majority in machine-harvested cotton. When a class has too few samples, its RMSE value is not reported, avoiding statistical error. As Table 5 shows, the algorithmic measurements match the manually labelled data well, with a cumulative root mean square error (RMSE) under 4% for all categories (RMSE 3.16% for hand-picked and 2.43% for machine-harvested cotton).
According to the invention, the foreign fiber target is segmented from the cotton image by an image segmentation algorithm based on deep learning, and combined with the parameters of the acquisition equipment this realizes a new method for evaluating foreign fiber content.
Evaluating the foreign fiber content of raw cotton feeds back the quality grade of the raw cotton and the quality of the corresponding yarn products. The number of each type of foreign fiber can be counted during classification, but raw cotton quality cannot be comprehensively evaluated from counts alone; if a more accurate raw cotton foreign fiber content is required, the size of the foreign fibers can be estimated from the image as feedback. Through comparison and experimental demonstration against classical segmentation network structures, an improved algorithm, CottonRes-YNet, based on the U-Net image segmentation architecture and an image center regression network is proposed. Using regression fitting of image block center coordinates to target positions, segmentation accuracy is further improved along the coordinate position dimension, overcoming the contour loss caused by ignoring neighborhood information in traditional single-pixel classification; compared with the traditional U-Net, the combined segmentation index improves from 86.7% to 90.3%.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like, indicate orientations and positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the equipment or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be considered as limiting the present invention.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art should be considered to be within the technical scope of the present invention, and the technical solutions and the inventive concepts thereof according to the present invention should be equivalent or changed within the scope of the present invention.

Claims (3)

1. A foreign fiber segmentation method based on image center regression, characterized by comprising the following steps:
S1, inputting foreign fiber images of the cotton flow in the foreign fiber machine, acquired by the image acquisition equipment;
S2, segmenting the foreign fiber image from S1 with the CottonRes-YNet algorithm, which determines whether each pixel in the segmented image belongs to the target or the background;
S3, calculating the actual size corresponding to each pixel from the mechanical parameters of the image acquisition equipment in the foreign fiber machine, and computing the length or area of the target in the segmented foreign fiber image;
the CottonRes-YNet takes image slice data as input, each slice being 1/8 of the original picture, and outputs eight segmentation mask images and eight predicted-coordinate error energy maps;
the segmentation mask carries the target/foreground label, and the regression branch carries the relative position of the target prediction point with respect to the slice center.
2. The method of claim 1, wherein the CottonRes-YNet encoder part is consistent with the CottonRes network body, the decoding network is symmetrically designed, feature concatenation is added on the basis of U-Net, and the training loss function is defined as:
$$L = L_{seg} + \lambda L_{pos}$$
the network loss function combines the slice class output loss with the position coordinate loss, and the position prediction loss function of the foreign fiber image feature points is defined as:
$$L_{pos} = \sum_{j \in Pos} \left\| \hat{P}_j - P_j \right\|_2^2 \qquad (3\text{-}8)$$
where $Pos$ denotes the set of labelled foreign fiber coordinates, $\hat{P}_j$ is the predicted coordinate position, and $P_j$ is the relative position of the calibrated feature point coordinate with respect to the slice image center point;
the feature point position prediction loss defined in equation (3-8) trains the segmentation network to predict the relative distance between the target feature point position and the slice center.
3. The method as claimed in claim 1, wherein the image acquisition device is a line-scan camera in the foreign fiber machine; assuming the line-scan camera samples S lines per second and the corresponding target foreign fiber speed is V0, then along the direction of fiber motion, i.e. the longitudinal direction, each row of pixels corresponds to a length of V0/S; assuming the lateral imaging range is P and the lateral resolution of the camera is D, each pixel corresponds to a lateral length of P/D.
CN202110804142.3A 2021-07-16 2021-07-16 Different fiber segmentation method based on image center regression Active CN113450382B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110804142.3A CN113450382B (en) 2021-07-16 2021-07-16 Different fiber segmentation method based on image center regression

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110804142.3A CN113450382B (en) 2021-07-16 2021-07-16 Different fiber segmentation method based on image center regression

Publications (2)

Publication Number Publication Date
CN113450382A CN113450382A (en) 2021-09-28
CN113450382B (en) 2022-03-11

Family

ID=77816395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110804142.3A Active CN113450382B (en) 2021-07-16 2021-07-16 Different fiber segmentation method based on image center regression

Country Status (1)

Country Link
CN (1) CN113450382B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260055A (en) * 2020-01-13 2020-06-09 腾讯科技(深圳)有限公司 Model training method based on three-dimensional image recognition, storage medium and equipment
CN111882549A (en) * 2020-07-31 2020-11-03 陕西长岭软件开发有限公司 Automatic detection and identification method and system for grayish green small foreign fibers

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190195853A1 (en) * 2017-12-26 2019-06-27 Petr PERNER Method and system for yarn quality monitoring
CN109584251A (en) * 2018-12-06 2019-04-05 湘潭大学 A kind of tongue body image partition method based on single goal region segmentation
EP3671542A1 (en) * 2018-12-18 2020-06-24 Visteon Global Technologies, Inc. Method for multilane detection using a convolutional neural network

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260055A (en) * 2020-01-13 2020-06-09 腾讯科技(深圳)有限公司 Model training method based on three-dimensional image recognition, storage medium and equipment
CN111882549A (en) * 2020-07-31 2020-11-03 陕西长岭软件开发有限公司 Automatic detection and identification method and system for grayish green small foreign fibers

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Content Estimation of Foreign Fibers in Cotton Based on Deep Learning; Wei Wei et al.; Electronics; 2020-10-29; pp. 1-22 *
MR image *** region segmentation based on fully convolutional U-Net (基于全卷积U-Net的MR图像***区域分割); Wei Xue (卫雪); China Master's Theses Full-text Database, Medicine & Health Sciences; 2021-03-15; vol. 2021, no. 3; pp. E060-70 *
Cotton foreign fiber detection based on improved YOLOv3 (基于改进YOLOv3的棉花异性纤维检测); Wu Mingxiu et al. (巫明秀 等); Chinese Journal of Liquid Crystals and Displays (液晶与显示); 2020-11-30; vol. 35, no. 11; pp. 1195-1203 *

Also Published As

Publication number Publication date
CN113450382A (en) 2021-09-28

Similar Documents

Publication Publication Date Title
CN114549522B (en) Textile quality detection method based on target detection
CN114723704B (en) Textile quality evaluation method based on image processing
Guha et al. Measurement of yarn hairiness by digital image processing
CN105844621A (en) Method for detecting quality of printed matter
CN108181316B (en) Bamboo strip defect detection method based on machine vision
CN111160451A (en) Flexible material detection method and storage medium thereof
CN103394472B (en) A kind of greening potato based on machine vision detects stage division
CN110781889B (en) Deep learning-based nondestructive testing method for total sugar content in blueberry fruits
CN107610119B (en) The accurate detection method of steel strip surface defect decomposed based on histogram
Chen et al. Evaluating fabric pilling with light-projected image analysis
CN116563279B (en) Measuring switch detection method based on computer vision
Daniel et al. Automatic road distress detection and analysis
CN111191628A (en) Remote sensing image earthquake damage building identification method based on decision tree and feature optimization
CN106056078B (en) Crowd density estimation method based on multi-feature regression type ensemble learning
CN110619619A (en) Defect detection method and device and electronic equipment
CN111652883A (en) Glass surface defect detection method based on deep learning
PT1770037E (en) Process of detecting double feeding of postal items by image analysis of the upright items
CN113435460A (en) Method for identifying brilliant particle limestone image
CN111060455B (en) Northeast cold-cool area oriented remote sensing image crop marking method and device
CN109115775A (en) A kind of betel nut level detection method based on machine vision
CN114998205A (en) Method for detecting foreign matters in bottle in liquid filling process based on optical means
CN114299059A (en) Method for judging scratch defects of unsorted casting blanks on surfaces of hot-rolled strip steel
CN112183640B (en) Detection and classification method based on irregular object
CN113450382B (en) Different fiber segmentation method based on image center regression
Janardhana et al. Computer aided inspection system for food products using machine vision—a review

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant