CN115358972A - Tobacco leaf grading method and system based on visual feature fusion - Google Patents
- Publication number: CN115358972A
- Application number: CN202210868044.0A
- Authority: CN (China)
- Prior art keywords: tobacco leaf, tobacco, image, extracting, dimensional features
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0004 — Image analysis; inspection of images; industrial image inspection
- G06T7/13 — Segmentation; edge detection
- G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
- G06V10/25 — Image preprocessing; determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/28 — Image preprocessing; quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
- G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, loops, corners, strokes or intersections
- G06V10/762 — Recognition or understanding using pattern recognition or machine learning; clustering
- G06V10/764 — Recognition or understanding using pattern recognition or machine learning; classification
- G06V10/806 — Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
- G06T2207/20081 — Training; learning
- G06T2207/30108 — Industrial image inspection
Abstract
The invention provides a tobacco leaf grading method and system based on visual feature fusion, belonging to the technical field of tobacco leaf grading and comprising the following steps: S1, extracting a region of interest of the tobacco leaves to be classified from a first tobacco leaf image; S2, extracting low-dimensional features from the first tobacco leaf image and the region of interest; S3, extracting high-dimensional features from a second tobacco leaf image, captured under a light source, based on a convolutional neural network; S4, combining the low-dimensional features and the high-dimensional features to obtain combined features; and S5, inputting the combined features into a classifier and outputting the classification result of the tobacco leaves to be classified. Beneficial effects: by fusing the low-dimensional and high-dimensional features, the invention classifies tobacco leaves on the basis of visual feature fusion and improves both the accuracy and the efficiency of the classification results.
Description
Technical Field
The invention relates to the technical field of tobacco leaf grading, in particular to a tobacco leaf grading method and system based on visual feature fusion.
Background
Tobacco leaves are an important raw material in tobacco product manufacturing, and their grade quality directly affects the quality of tobacco products. To supply tobacco leaf raw materials that meet the requirements of various tobacco products and to promote tobacco production and development, the national tobacco industry has established scientific and reasonable grading standards for grading tobacco leaves.
The existing mainstream tobacco leaf grading method mainly comprises the following steps:
(1) Extracting as many tobacco leaf image features as possible with traditional methods to obtain a richer feature matrix. This approach depends on hand-crafted feature design, but tobacco leaves deform greatly after curing, and important detail features, such as leaf veins, are particularly difficult to express;
(2) Performing feature extraction by training a classification model based on a convolutional neural network. However, tobacco data samples are difficult and costly to obtain, and massive datasets are impractical. With a small dataset, the network structure must be modified and the loss function adjusted to compensate for the shortage of samples, yet a good classification model is still difficult to train;
(3) Extracting and fusing multi-modal features by incorporating additional sensors, such as gravity sensors. This approach places high demands on sensor sensitivity and is costly.
Disclosure of Invention
In order to solve the technical problems, the invention provides a tobacco leaf grading method and system based on visual feature fusion.
The technical problem addressed by the invention is solved by the following technical scheme:
a tobacco leaf grading method based on visual feature fusion comprises the following steps:
s1, extracting an interested area of tobacco leaves to be classified from a first tobacco leaf image;
s2, extracting low-dimensional features according to the first tobacco leaf image and the region of interest;
s3, extracting high-dimensional features from a second tobacco leaf image under a light source based on a convolutional neural network;
s4, combining the low-dimensional features and the high-dimensional features to obtain combined features;
and S5, inputting the combined features into a classifier, and outputting a classification result of the tobacco leaves to be classified.
Preferably, in step S1, the method for extracting the region of interest includes:
s11, preprocessing the first tobacco leaf image, wherein the preprocessing at least comprises any one or more of graying conversion and median filtering;
s12, obtaining tobacco leaf edge information, and performing morphological conversion to obtain a tobacco leaf edge binary image;
s13, extracting the contour of the binary image of the tobacco leaf edge, filtering the contour, and keeping the contour with the largest contour area;
s14, filling the reserved outline, and creating a tobacco leaf template;
and S15, performing bit-wise AND processing on the first tobacco leaf image and the pixel value of the tobacco leaf template, and extracting to obtain the region of interest.
Preferably, in the step S2, the low-dimensional features at least comprise any one or more combinations of the mean and variance of sampled pixel values of the region of interest in each HSV channel, the tobacco leaf morphological coordinates, and the damaged-area ratio of the tobacco leaves.
Preferably, the method for extracting the tobacco leaf morphological coordinates comprises the following steps:
establishing an auxiliary two-dimensional coordinate system in the reserved contour, wherein the abscissa of the auxiliary two-dimensional coordinate system is the same as the ordinate of a pixel coordinate system, and the ordinate of the auxiliary two-dimensional coordinate system is opposite to the abscissa of the pixel coordinate system;
and extracting end point coordinates of the tobacco leaves in the length direction and the width direction to obtain the tobacco leaf form coordinates.
Preferably, the method for extracting the damaged-area ratio of the tobacco leaves comprises:

C_ratio = ( Σ_{i: S_Ci < T_Smax, C_i ≠ C_max} S_Ci ) / S_Cmax

wherein S_Ci represents the contour area of the i-th contour; T_Smax represents the maximum threshold corresponding to the contour area; C_max represents the contour with the largest contour area; and C_ratio represents the damaged-area ratio of the tobacco leaves.
Preferably, in step S3, the method further includes:
clustering the high-dimensional features based on a K-means algorithm and partitioning them into a fixed number of elements;

the high-dimensional features comprise h elements, wherein h is the clustering parameter of the K-means algorithm.
Preferably, in step S5, the classifier is a LightGBM classifier.
The invention also provides a tobacco leaf grading system based on visual feature fusion, used for implementing the above tobacco leaf grading method based on visual feature fusion, and comprising:
the first image acquisition module is used for acquiring a first tobacco leaf image;
the second image acquisition module is used for acquiring a second tobacco leaf image under the light source;
the image preprocessing module is connected with the first image acquisition module and is used for preprocessing images and extracting an interested area of the tobacco leaves;
a feature extraction module comprising: the first feature extraction unit is connected with the image preprocessing module and used for extracting low-dimensional features; the second feature extraction unit is connected with the second image acquisition module and used for extracting high-dimensional features based on a convolutional neural network;
the combined feature construction module, connected with the feature extraction module and used for combining the low-dimensional features and the high-dimensional features to obtain the combined features;

and the classification module, connected with the combined feature construction module, preset with a classifier, and used for outputting the classification result of the tobacco leaves to be classified according to the combined features.
Preferably, the image preprocessing module performs image preprocessing based on OpenCV operators.
Preferably, the feature extraction module further comprises:
and the dividing unit is connected with the second feature extraction unit and used for clustering the high-dimensional features based on a K-means algorithm and dividing the number of elements.
The technical scheme of the invention has the advantages or beneficial effects that:
according to the invention, the low-dimensional features and the high-dimensional features are fused, so that the tobacco leaves are classified based on visual feature fusion, and the accuracy and efficiency of the classification result are improved.
Drawings
FIG. 1 is a schematic flow chart of a tobacco leaf grading method based on visual feature fusion according to a preferred embodiment of the present invention;
FIG. 2 is a flowchart illustrating the step S1 according to a preferred embodiment of the present invention;
FIG. 3 is a block diagram of a tobacco leaf grading system based on visual feature fusion according to a preferred embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive efforts based on the embodiments of the present invention, shall fall within the scope of protection of the present invention.
It should be noted that the embodiments and features of the embodiments of the present invention may be combined with each other without conflict.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
In a preferred embodiment of the present invention, based on the above problems in the prior art, there is provided a method for classifying tobacco leaves based on visual feature fusion, which belongs to the technical field of tobacco leaf classification, and as shown in fig. 1, the method includes:
s1, extracting an interested area of tobacco leaves to be classified from a first tobacco leaf image;
Specifically, the region-of-interest extraction can be implemented with the Open Source Computer Vision Library (OpenCV).
As a preferred embodiment, as shown in fig. 2, in step S1, the method for extracting the region of interest includes:
s11, preprocessing a first tobacco leaf image;
the first tobacco leaf image can be obtained by shooting through a first image acquisition module, and the first image acquisition module is preferably a camera. Then, image preprocessing is carried out, wherein the preprocessing at least comprises any one or more of graying conversion and median filtering processing. In order to reduce the computational calculation force, the region of interest of the RGB or HSV color space is converted into a gray map by a gray scale conversion method; then, the gray value of each pixel point is set to be the median of the gray values of all the pixel points in a certain neighborhood window of the point by adopting a median filtering processing mode, and the edges of the tobacco leaves are protected from being blurred while noise is filtered.
S12, obtaining tobacco leaf edge information, and performing morphological conversion to obtain a tobacco leaf edge binary image;
Specifically, the tobacco leaf edge information is obtained with methods including, but not limited to, the Canny operator. To better process the edge information, a morphological transformation (Morphological Transformations) is applied to obtain the tobacco leaf edge binary image;
s13, extracting the outline of the binary image of the tobacco leaf edge, filtering the outline and keeping the outline with the largest outline area;
Specifically, the contour of the tobacco leaf edge binary image is extracted with methods including, but not limited to, the findContours operator.
Then, the contours are filtered to keep the one with the maximum contour area, with filtering methods including, but not limited to, the Non-Maximum Suppression (NMS) algorithm, as follows:

C_max = argmax_{C_i} S_Ci, subject to S_Ci > T_Smin

wherein C_i represents the i-th contour; S_Ci represents the contour area of the i-th contour; T_Smin represents the minimum threshold corresponding to the contour area; and C_max represents the contour with the largest contour area.
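The retained-contour selection in S13 can be sketched in pure Python. The list of areas stands in for OpenCV's contour structures, and the names `areas` and `t_s_min` are illustrative, not from the patent:

```python
def keep_largest_contour(areas, t_s_min):
    """Discard contours whose area does not exceed the minimum threshold
    T_Smin, then return the index of the largest remaining contour (C_max)."""
    candidates = [(area, i) for i, area in enumerate(areas) if area > t_s_min]
    if not candidates:
        return None  # no contour survives the threshold
    return max(candidates)[1]
```

With real contours the areas would come from `cv2.contourArea` applied to each contour returned by `cv2.findContours`.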
S14, filling the reserved outline and creating a tobacco leaf template;
Specifically, the retained contour is filled based on the drawContours operator to create the tobacco leaf template, with the contour filling method as follows:

mask(x, y) = filled for every pixel (x, y) inside C_max, with filled = (R, G, B) = (255, 255, 255)

wherein mask represents the tobacco leaf template; C_max represents the contour with the largest contour area; filled represents the fill value; and R, G, B represent the color intensities of the red (R), green (G), and blue (B) color channels, respectively.
And S15, performing bitwise AND processing on the pixel values of the first tobacco leaf image and the tobacco leaf template, and extracting the region of interest.
Specifically, the unprocessed first tobacco leaf image and the tobacco leaf template created in step S14 are combined by a bitwise AND over pixel values to extract the tobacco leaf region of interest, as follows:
dst=src&mask;
wherein dst represents the region of interest; src represents the first tobacco leaf image without processing; mask represents the tobacco template.
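The masking step dst = src & mask maps directly onto numpy's bitwise AND; the sketch below is illustrative (OpenCV's `cv2.bitwise_and` would be the usual call), and the variable names follow the patent's dst/src/mask convention:

```python
import numpy as np

def extract_roi(src, mask):
    """Bitwise-AND the original image with the filled leaf template:
    pixels inside the leaf contour keep their values, all others become 0."""
    return np.bitwise_and(src, mask)
```

Because the template is filled with 255 inside the leaf and 0 outside, ANDing leaves interior pixel values unchanged and zeroes the background.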
Tobacco leaf grading is closely and inseparably related to the growing position of the tobacco leaf, its color, its vein distribution, and its degree of damage, and the growing position is in turn related to morphological attributes such as leaf length and width. To better extract the tobacco leaf characteristics, two image acquisition modules are used to acquire data in a real-time tobacco leaf grading scene.
S2, extracting to obtain low-dimensional features according to the first tobacco leaf image and the region of interest;
Specifically, the low-dimensional features are extracted from the first tobacco leaf image acquired by the first image acquisition module:

S_dst = {dst_HSV|H, dst_HSV|S, dst_HSV|V, C_max_ij, C_ratio}

wherein S_dst represents the extracted low-dimensional features; dst_HSV|H represents the mean and variance of sampled pixel values of the hue (H) channel of the region of interest; dst_HSV|S represents the mean and variance of sampled pixel values of the saturation (S) channel of the region of interest; dst_HSV|V represents the mean and variance of sampled pixel values of the lightness (V) channel of the region of interest; C_max_ij represents the tobacco leaf morphological coordinate in the i-th row and j-th column corresponding to the contour with the maximum contour area; and C_ratio represents the damaged-area ratio of the tobacco leaves.
Preferably, in step S2, the low-dimensional features at least comprise any one or more combinations of the mean and variance of sampled pixel values of the region of interest in each HSV channel, the tobacco leaf morphological coordinates, and the damaged-area ratio of the tobacco leaves.
Further, the method for extracting the mean and variance of the sampled pixel values of the region of interest in each HSV channel includes: extracting channels H, S and V of HSV color space of the tobacco interested region, and randomly selecting n patches in each channel, wherein n is a positive integer larger than 0; the mean and variance of the pixel values sampled for each channel are then calculated separately.
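The patch-sampling statistics described above can be sketched as follows; the patch size, the patch count n, and the function name are illustrative assumptions (the patent fixes neither):

```python
import numpy as np

def hsv_patch_stats(hsv, n=16, patch=5, rng=None):
    """Randomly sample n patch x patch windows from each HSV channel and
    return the per-channel (mean, variance) of the sampled pixel values."""
    rng = np.random.default_rng(rng)
    h, w = hsv.shape[:2]
    stats = []
    for c in range(3):  # H, S, V channels in turn
        samples = []
        for _ in range(n):
            y = rng.integers(0, h - patch + 1)
            x = rng.integers(0, w - patch + 1)
            samples.append(hsv[y:y + patch, x:x + patch, c])
        vals = np.concatenate([s.ravel() for s in samples])
        stats.extend([float(vals.mean()), float(vals.var())])
    return stats  # [mean_H, var_H, mean_S, var_S, mean_V, var_V]
```

The six returned values correspond to the dst_HSV|H, dst_HSV|S, and dst_HSV|V components of the low-dimensional feature S_dst.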
In a preferred embodiment, the method for extracting the form coordinates of the tobacco leaves includes:
establishing an auxiliary two-dimensional coordinate system in the reserved contour, wherein the abscissa of the auxiliary two-dimensional coordinate system is the same as the ordinate of the pixel coordinate system, and the ordinate of the auxiliary two-dimensional coordinate system is opposite to the abscissa of the pixel coordinate system;
and extracting end point coordinates of the tobacco leaves in the length direction and the width direction to obtain tobacco leaf form coordinates.
Specifically, an auxiliary two-dimensional coordinate system is established within the contour C_max retained in step S13, and it is used to extract the coordinate values of the four end points of the tobacco leaf, which describe the leaf's form.
Furthermore, the abscissa of the auxiliary two-dimensional coordinate system is a connecting line in the length direction of the tobacco leaves, and the direction of the connecting line is the same as the ordinate of the pixel coordinate system; the ordinate of the auxiliary two-dimensional coordinate system is a connecting line in the width direction of the tobacco leaves (the widest position of the tobacco leaves), and the direction of the ordinate is opposite to the abscissa of the pixel coordinate system.
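Extracting the four endpoint coordinates can be illustrated with a minimal numpy sketch over an N x 2 array of contour points in pixel coordinates; the names and the dictionary layout are hypothetical, and the patent's auxiliary coordinate-axis convention is not reproduced here:

```python
import numpy as np

def leaf_endpoints(points):
    """Return the four extreme points of a contour (rows of an N x 2 array
    of (x, y) pixel coordinates): leftmost, rightmost, top, bottom. These
    describe the leaf's extent along its length and width directions."""
    pts = np.asarray(points)
    return {
        "left":   tuple(pts[pts[:, 0].argmin()]),
        "right":  tuple(pts[pts[:, 0].argmax()]),
        "top":    tuple(pts[pts[:, 1].argmin()]),
        "bottom": tuple(pts[pts[:, 1].argmax()]),
    }
```

This is the standard extreme-point idiom for OpenCV contours; the four points give the length-direction and width-direction coordinates used as C_max_ij.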
In a preferred embodiment, the method for extracting the damaged-area ratio of the tobacco leaves comprises:

C_ratio = ( Σ_{i: S_Ci < T_Smax, C_i ≠ C_max} S_Ci ) / S_Cmax

wherein S_Ci represents the contour area of the i-th contour; T_Smax represents the maximum threshold corresponding to the contour area; C_max represents the contour with the largest contour area; and C_ratio represents the damaged-area ratio of the tobacco leaves.
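One plausible reading of the damaged-area ratio, stated here as an assumption because the patent's own formula appears only as an unreproduced image, treats contours smaller than T_Smax (other than the leaf contour itself) as damage regions and normalises their total area by the leaf contour's area:

```python
def damage_ratio(areas, max_idx, t_s_max):
    """Sum the areas of contours below the threshold T_Smax, excluding the
    leaf contour C_max itself, and divide by the leaf contour's area.
    NOTE: this interpretation is an assumption; the source formula is
    not reproduced in the text."""
    leaf_area = areas[max_idx]
    damaged = sum(a for i, a in enumerate(areas)
                  if i != max_idx and a < t_s_max)
    return damaged / leaf_area
```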
S3, extracting high-dimensional features from a second tobacco leaf image under a light source based on a convolutional neural network;
specifically, the second tobacco leaf image can be obtained by shooting through a second image acquisition module, and the second image acquisition module is preferably a camera. In a preferred embodiment, a compensation light source is arranged below the second image acquisition module for obtaining more detailed characteristics. When the tobacco leaves are conveyed below the second image acquisition module, the compensating light source is preferably located right below the tobacco leaves.
Furthermore, the first and second tobacco leaf images are captured a preset time apart, wherein the preset time is longer than the time for a camera to acquire one image and shorter than the conveying interval at which two adjacent tobacco leaves arrive at the same camera.
In a preferred embodiment, step S3 further includes:
and clustering the high-dimensional features based on a K-means algorithm, and dividing the number of elements.
Further, the high-dimensional features include h elements, where h is a clustering parameter of a K-means algorithm, that is, the number of clusters (elements) divided after the high-dimensional features are clustered.
Specifically, in the present embodiment, the high-dimensional feature S_CNN is extracted based on a convolutional neural network (CNN), and the high-dimensional features are then clustered with the K-means algorithm and partitioned into elements.
The convolutional neural network can adopt the existing and trained CNN model for tobacco leaf grading feature extraction, and is not described herein again.
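The clustering step can be illustrated with a minimal 1-D K-means that compresses a CNN feature vector into h cluster centres, yielding the fixed-length h-element feature. This is one interpretation of the patent's brief description (sklearn's `KMeans` would be the usual tool), and all names here are illustrative:

```python
import numpy as np

def kmeans_1d(values, h, iters=20, rng=0):
    """Minimal 1-D K-means: compress a CNN feature vector into h cluster
    centres, giving an h-element high-dimensional feature S_CNN."""
    values = np.asarray(values, dtype=float)
    rng = np.random.default_rng(rng)
    centers = rng.choice(values, size=h, replace=False)  # init from data
    for _ in range(iters):
        # assign each value to its nearest centre, then recompute centres
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for k in range(h):
            if np.any(labels == k):
                centers[k] = values[labels == k].mean()
    return np.sort(centers)
```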
S4, combining the low-dimensional features and the high-dimensional features to obtain combined features;
specifically, in order to accelerate convergence, the low-dimensional features and the high-dimensional features are normalized and combined to obtain a combined feature:
F = S_dst ∪ S_CNN

wherein F represents the combined feature and contains 8 + h element features; S_dst represents the above low-dimensional features; and S_CNN represents the above high-dimensional features.
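The normalise-then-combine step of S4 can be sketched as follows; min-max normalisation is an assumption, since the patent only says the features are normalised before being combined:

```python
import numpy as np

def combine_features(s_dst, s_cnn):
    """Min-max normalise each feature group to [0, 1] and concatenate,
    giving the combined feature F with len(s_dst) + len(s_cnn) elements."""
    def norm(v):
        v = np.asarray(v, dtype=float)
        span = v.max() - v.min()
        return (v - v.min()) / span if span > 0 else np.zeros_like(v)
    return np.concatenate([norm(s_dst), norm(s_cnn)])
```

With the 8-element S_dst and h-element S_CNN of this embodiment, the result has the 8 + h elements stated above.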
And S5, inputting the combined features into a classifier, and outputting a classification result of the tobacco leaves to be classified.
Furthermore, the classifier is a LightGBM (Light Gradient Boosting Machine) classifier, implemented on the GBDT (Gradient Boosting Decision Tree) algorithm framework; it supports efficient parallel training and offers faster training speed, lower memory consumption, and higher accuracy. The combined features are fed into the LightGBM classifier for prediction to obtain the classification result of the tobacco leaves to be classified.
Furthermore, because the extracted features are mainly weak features and the sample size is relatively small, the LightGBM classifier, a strong classifier formed by combining multiple weak classifiers through the boosting paradigm, improves the classification accuracy.
The invention also provides a tobacco leaf grading system based on visual characteristic fusion, which is used for implementing the tobacco leaf grading method based on visual characteristic fusion, and as shown in fig. 3, the tobacco leaf grading system based on visual characteristic fusion comprises:
the first image acquisition module 1 is used for acquiring a first tobacco leaf image;
the second image acquisition module 2 is used for acquiring a second tobacco leaf image under the light source;
the image preprocessing module 3 is connected with the first image acquisition module 1 and is used for preprocessing images and extracting an interested area of the tobacco leaves;
the feature extraction module 4 includes: a first feature extraction unit 41 connected to the image preprocessing module for extracting low-dimensional features; the second feature extraction unit 42 is connected with the second image acquisition module and is used for extracting high-dimensional features based on a convolutional neural network;
Specifically, in the feature extraction module 4, feature extraction is performed in accordance with the national tobacco leaf grading rules and by simulating the manual grading process;
the combined feature construction module 5, connected with the feature extraction module 4 and used for combining the low-dimensional features and the high-dimensional features to obtain the combined features;

and the classification module 6, connected with the combined feature construction module 5, with a preset classifier, and used for importing the combined features and outputting the classification result of the tobacco leaves to be classified according to the combined features.
As a preferred embodiment, the image preprocessing module 3 performs image preprocessing based on OpenCV operators to remove noise and extract the tobacco leaf region of interest.
As a preferred embodiment, the feature extraction module 4 further includes:
and the dividing unit 43 is connected to the second feature extracting unit 42, and is configured to cluster the high-dimensional features based on a K-means algorithm, and divide the number of elements.
Specifically, the dividing unit 43 divides the CNN high-dimensional feature elements based on the K-means algorithm.
The technical scheme of the invention has the following advantages or beneficial effects: according to the invention, the low-dimensional features and the high-dimensional features are fused, so that the tobacco leaves are classified based on visual feature fusion, and the accuracy and efficiency of the classification result are improved.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made without departing from the spirit and scope of the invention.
Claims (10)
1. A tobacco leaf grading method based on visual feature fusion is characterized by comprising the following steps:
s1, extracting an interested area of tobacco leaves to be classified from a first tobacco leaf image;
s2, extracting low-dimensional features according to the first tobacco leaf image and the region of interest;
s3, extracting high-dimensional features from a second tobacco leaf image under a light source based on a convolutional neural network;
s4, combining the low-dimensional features and the high-dimensional features to obtain combined features;
and S5, inputting the combined features into a classifier, and outputting the classification result of the tobacco leaves to be classified.
2. The visual feature fusion-based tobacco leaf grading method according to claim 1, wherein in the step S1, the extracting method of the region of interest comprises:
S11, preprocessing the first tobacco leaf image, wherein the preprocessing comprises at least any one or a combination of grayscale conversion and median filtering;
S12, obtaining tobacco leaf edge information, and performing a morphological transformation to obtain a tobacco leaf edge binary image;
S13, extracting the contours of the tobacco leaf edge binary image, filtering the contours, and retaining the contour with the largest area;
S14, filling the retained contour to create a tobacco leaf template;
S15, performing a bitwise AND on the pixel values of the first tobacco leaf image and the tobacco leaf template, and extracting the region of interest.
3. The visual feature fusion-based tobacco leaf grading method according to claim 2, wherein in the step S2, the low-dimensional features comprise at least any one or a combination of: the mean and variance of sampled pixel values of the region of interest in the HSV channels, the tobacco leaf morphological coordinates, and the tobacco leaf damage area ratio.
4. The tobacco leaf grading method based on visual feature fusion according to claim 3, wherein the extraction method of the tobacco leaf morphological coordinates comprises:
establishing an auxiliary two-dimensional coordinate system within the retained contour, wherein the abscissa of the auxiliary coordinate system coincides with the ordinate of the pixel coordinate system, and the ordinate of the auxiliary coordinate system is opposite in direction to the abscissa of the pixel coordinate system;
and extracting the endpoint coordinates of the tobacco leaf in the length and width directions to obtain the tobacco leaf morphological coordinates.
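One reading of claim 4's auxiliary coordinate system maps a pixel coordinate (x, y) to (u, v) = (y, −x), so that the auxiliary abscissa equals the pixel ordinate and the auxiliary ordinate is the pixel abscissa with opposite sign. A sketch under that assumption (the function name and the dictionary layout are hypothetical):

```python
import numpy as np

def morphological_coordinates(contour_xy: np.ndarray) -> dict:
    """Endpoint coordinates of the leaf in the auxiliary coordinate system.

    Pixel (x, y) maps to auxiliary (u, v) = (y, -x): the auxiliary abscissa
    equals the pixel ordinate, and the auxiliary ordinate is the pixel
    abscissa with opposite sign (one reading of claim 4).
    """
    u = contour_xy[:, 1].astype(float)
    v = -contour_xy[:, 0].astype(float)
    pts = np.stack([u, v], axis=1)
    return {
        "length_endpoints": (tuple(pts[np.argmin(u)]), tuple(pts[np.argmax(u)])),
        "width_endpoints": (tuple(pts[np.argmin(v)]), tuple(pts[np.argmax(v)])),
    }
```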
5. The visual feature fusion-based tobacco leaf grading method according to claim 3, wherein the method for extracting the tobacco leaf damage area ratio comprises the following steps:
6. The visual feature fusion-based tobacco leaf grading method according to claim 1, wherein the step S3 further comprises:
clustering the high-dimensional features based on the K-means algorithm, and partitioning the feature elements;
wherein the partitioned high-dimensional features comprise h elements, h being the clustering parameter of the K-means algorithm.
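One way to read claim 6 is that the scalar elements of the CNN feature vector are clustered into h groups and each group is represented by its centre, shrinking the element count to h. A sketch under that interpretation, using scikit-learn's `KMeans` (the function name `reduce_cnn_features` is hypothetical):

```python
import numpy as np
from sklearn.cluster import KMeans

def reduce_cnn_features(features: np.ndarray, h: int, seed: int = 0) -> np.ndarray:
    """Cluster the elements of a CNN feature vector into h groups (K-means)
    and return the sorted cluster centres as the reduced h-element feature."""
    km = KMeans(n_clusters=h, n_init=10, random_state=seed)
    km.fit(features.reshape(-1, 1))  # treat each scalar element as a 1-D sample
    return np.sort(km.cluster_centers_.ravel())
```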
7. The visual feature fusion-based tobacco leaf grading method according to claim 1, wherein in the step S5, the classifier is a LightGBM classifier.
8. A tobacco leaf grading system based on visual feature fusion, for implementing the tobacco leaf grading method based on visual feature fusion according to any one of claims 1 to 7, comprising:
the first image acquisition module is used for acquiring a first tobacco leaf image;
the second image acquisition module is used for acquiring a second tobacco leaf image under the light source;
the image preprocessing module, connected to the first image acquisition module and configured to preprocess the image and extract the region of interest of the tobacco leaves;
a feature extraction module comprising: the first feature extraction unit is connected with the image preprocessing module and used for extracting low-dimensional features; the second feature extraction unit is connected with the second image acquisition module and used for extracting high-dimensional features based on a convolutional neural network;
the combined-feature construction module, connected to the feature extraction module and configured to combine the low-dimensional features and the high-dimensional features to obtain the combined features;
and the classification module, connected to the combined-feature construction module, wherein a classifier is preset in the classification module and configured to output the grading result of the tobacco leaves to be graded according to the combined features.
9. The visual feature fusion-based tobacco leaf grading system according to claim 8, wherein the image preprocessing module performs image preprocessing based on OpenCV operators.
10. The visual feature fusion-based tobacco leaf grading system according to claim 8, wherein the feature extraction module further comprises:
and the dividing unit, connected to the second feature extraction unit and configured to cluster the high-dimensional features based on the K-means algorithm and partition the feature elements.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210868044.0A CN115358972A (en) | 2022-07-21 | 2022-07-21 | Tobacco leaf grading method and system based on visual feature fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115358972A true CN115358972A (en) | 2022-11-18 |
Family
ID=84031122
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115953384A (en) * | 2023-01-10 | 2023-04-11 | 杭州首域万物互联科技有限公司 | On-line detection and prediction method for tobacco morphological parameters |
CN115953384B (en) * | 2023-01-10 | 2024-02-02 | 杭州首域万物互联科技有限公司 | Online detection and prediction method for morphological parameters of tobacco leaves |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||