CN115358972A - Tobacco leaf grading method and system based on visual feature fusion - Google Patents

Tobacco leaf grading method and system based on visual feature fusion

Info

Publication number
CN115358972A
CN115358972A (application CN202210868044.0A)
Authority
CN
China
Prior art keywords
tobacco leaf
tobacco
image
extracting
dimensional features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210868044.0A
Other languages
Chinese (zh)
Inventor
吴彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhu Tusvision Information Technology Co ltd
Original Assignee
Wuhu Tusvision Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhu Tusvision Information Technology Co ltd filed Critical Wuhu Tusvision Information Technology Co ltd
Priority to CN202210868044.0A
Publication of CN115358972A
Legal status: Pending

Classifications

    • G06T 7/0004 — Image analysis; industrial image inspection
    • G06T 7/13 — Segmentation; edge detection
    • G06T 7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06V 10/25 — Image preprocessing; determination of region of interest [ROI] or volume of interest [VOI]
    • G06V 10/28 — Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/762 — Recognition or understanding using pattern recognition or machine learning: clustering, e.g. of similar faces in social networks
    • G06V 10/764 — Recognition or understanding using pattern recognition or machine learning: classification, e.g. of video objects
    • G06V 10/806 — Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06T 2207/20081 — Special algorithmic details: training; learning
    • G06T 2207/30108 — Subject of image: industrial image inspection


Abstract

The invention provides a tobacco leaf grading method and system based on visual feature fusion, belonging to the technical field of tobacco leaf grading, the method comprising the following steps: S1, extracting a region of interest of the tobacco leaves to be classified from a first tobacco leaf image; S2, extracting low-dimensional features from the first tobacco leaf image and the region of interest; S3, extracting high-dimensional features from a second tobacco leaf image captured under a light source, based on a convolutional neural network; S4, combining the low-dimensional features and the high-dimensional features to obtain combined features; and S5, inputting the combined features into a classifier and outputting the classification result of the tobacco leaves to be classified. Beneficial effects: by fusing the low-dimensional and high-dimensional features, the tobacco leaves are classified on the basis of visual feature fusion, which improves both the accuracy and the efficiency of the classification results.

Description

Tobacco leaf grading method and system based on visual feature fusion
Technical Field
The invention relates to the technical field of tobacco leaf grading, in particular to a tobacco leaf grading method and system based on visual feature fusion.
Background
Tobacco leaves are an important raw material in the production of tobacco products, and the grade quality of the tobacco leaves directly influences the quality of those products. In order to supply tobacco leaf raw materials that meet the requirements of various tobacco products and to promote tobacco production and development, the national tobacco industry has established scientific and reasonable grading standards for tobacco leaves.
The existing mainstream tobacco leaf grading methods mainly comprise the following:
(1) Extracting as many tobacco leaf image features as possible with traditional methods, so as to obtain a richer feature matrix. This approach relies on hand-crafted feature design, but tobacco leaves deform considerably after curing, and particularly important fine-grained features, such as leaf veins, are difficult to express;
(2) Performing feature extraction by training a classification model based on a convolutional neural network. However, tobacco data samples are difficult and costly to obtain, and large-scale datasets are impractical. With a small dataset, the network structure must be modified and the loss function adjusted to compensate for the small sample size, yet it remains difficult to train a good classification model;
(3) Extracting multi-modal features, in combination with sensors such as gravity sensors, for fusion. This approach places high demands on sensor sensitivity and is costly.
Disclosure of Invention
In order to solve the technical problems, the invention provides a tobacco leaf grading method and system based on visual feature fusion.
The above technical problem is solved by the invention through the following technical scheme:
a tobacco leaf grading method based on visual feature fusion comprises the following steps:
S1, extracting a region of interest of tobacco leaves to be classified from a first tobacco leaf image;
s2, extracting low-dimensional features according to the first tobacco leaf image and the region of interest;
s3, extracting high-dimensional features from a second tobacco leaf image under a light source based on a convolutional neural network;
s4, combining the low-dimensional features and the high-dimensional features to obtain combined features;
and S5, inputting the combined features into a classifier, and outputting a classification result of the tobacco leaves to be classified.
Preferably, in step S1, the method for extracting the region of interest includes:
s11, preprocessing the first tobacco leaf image, wherein the preprocessing at least comprises any one or more of graying conversion and median filtering;
s12, obtaining tobacco leaf edge information, and performing morphological conversion to obtain a tobacco leaf edge binary image;
s13, extracting the contour of the binary image of the tobacco leaf edge, filtering the contour, and keeping the contour with the largest contour area;
s14, filling the reserved outline, and creating a tobacco leaf template;
and S15, performing bit-wise AND processing on the first tobacco leaf image and the pixel value of the tobacco leaf template, and extracting to obtain the region of interest.
Preferably, in the step S2, the low-dimensional features at least comprise any one or more combinations of the mean and variance of sampled pixel values of the region of interest in each HSV channel, the tobacco leaf morphological coordinates, and the tobacco leaf damaged-area ratio.
Preferably, the method for extracting the tobacco leaf morphological coordinates comprises the following steps:
establishing an auxiliary two-dimensional coordinate system in the reserved contour, wherein the abscissa of the auxiliary two-dimensional coordinate system is the same as the ordinate of a pixel coordinate system, and the ordinate of the auxiliary two-dimensional coordinate system is opposite to the abscissa of the pixel coordinate system;
and extracting end point coordinates of the tobacco leaves in the length direction and the width direction to obtain the tobacco leaf form coordinates.
Preferably, the method for extracting the tobacco leaf damaged-area ratio comprises the following steps:
C_ratio = ( Σ_{S_Ci < T_Smax} S_Ci ) / S_Cmax
wherein S_Ci represents the contour area of the i-th said contour; T_Smax represents the maximum threshold corresponding to the contour area; C_max represents the contour with the largest contour area, with area S_Cmax; and C_ratio represents the tobacco leaf damaged-area ratio.
Preferably, in step S3, the method further includes:
clustering the high-dimensional features based on a K mean algorithm, and dividing the number of elements;
the high-dimensional features comprise h elements, wherein h is a clustering parameter of the K-means algorithm.
Preferably, in the step S5, the classifier is a LightGBM classifier.
The invention also provides a tobacco leaf grading system based on visual feature fusion, which is used for implementing the tobacco leaf grading method based on visual feature fusion, and comprises the following steps:
the first image acquisition module is used for acquiring a first tobacco leaf image;
the second image acquisition module is used for acquiring a second tobacco leaf image under the light source;
the image preprocessing module is connected with the first image acquisition module and is used for preprocessing images and extracting an interested area of the tobacco leaves;
a feature extraction module comprising: the first feature extraction unit is connected with the image preprocessing module and used for extracting low-dimensional features; the second feature extraction unit is connected with the second image acquisition module and used for extracting high-dimensional features based on a convolutional neural network;
a combined feature module is constructed, connected with the feature extraction module and used for combining the low-dimensional features and the high-dimensional features to obtain combined features;
and the classification module is connected with the construction joint feature module, is preset with a classifier and is used for outputting the classification result of the tobacco leaves to be classified according to the joint feature.
Preferably, the image preprocessing module performs image preprocessing based on OpenCV operators.
Preferably, the feature extraction module further comprises:
and the dividing unit is connected with the second feature extraction unit and used for clustering the high-dimensional features based on a K-means algorithm and dividing the number of elements.
The technical scheme of the invention has the advantages or beneficial effects that:
according to the invention, the low-dimensional features and the high-dimensional features are fused, so that the tobacco leaves are classified based on visual feature fusion, and the accuracy and efficiency of the classification result are improved.
Drawings
FIG. 1 is a schematic flow chart of a tobacco leaf grading method based on visual feature fusion according to a preferred embodiment of the present invention;
FIG. 2 is a flowchart illustrating the step S1 according to a preferred embodiment of the present invention;
FIG. 3 is a block diagram of a tobacco leaf grading system based on visual feature fusion according to a preferred embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive efforts based on the embodiments of the present invention, shall fall within the scope of protection of the present invention.
It should be noted that the embodiments and features of the embodiments of the present invention may be combined with each other without conflict.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
In a preferred embodiment of the present invention, based on the above problems in the prior art, there is provided a method for classifying tobacco leaves based on visual feature fusion, which belongs to the technical field of tobacco leaf classification, and as shown in fig. 1, the method includes:
S1, extracting a region of interest of tobacco leaves to be classified from a first tobacco leaf image;
Specifically, the extraction of the region of interest may be implemented with the Open Source Computer Vision Library (OpenCV).
As a preferred embodiment, as shown in fig. 2, in step S1, the method for extracting the region of interest includes:
s11, preprocessing a first tobacco leaf image;
the first tobacco leaf image can be obtained by shooting through a first image acquisition module, and the first image acquisition module is preferably a camera. Then, image preprocessing is carried out, wherein the preprocessing at least comprises any one or more of graying conversion and median filtering processing. In order to reduce the computational calculation force, the region of interest of the RGB or HSV color space is converted into a gray map by a gray scale conversion method; then, the gray value of each pixel point is set to be the median of the gray values of all the pixel points in a certain neighborhood window of the point by adopting a median filtering processing mode, and the edges of the tobacco leaves are protected from being blurred while noise is filtered.
S12, obtaining tobacco leaf edge information, and performing morphological conversion to obtain a tobacco leaf edge binary image;
specifically, the tobacco leaf edge information is obtained by using a method including, but not limited to, canny operator. For better processing the edge information, performing Morphological transformation (Morphological Transformations) to obtain a tobacco leaf edge binary image;
s13, extracting the outline of the binary image of the tobacco leaf edge, filtering the outline and keeping the outline with the largest outline area;
Specifically, the contours of the tobacco leaf edge binary image are extracted by a method including, but not limited to, the findContours (find image contours) operator.
Then, the contours are filtered so as to keep the one with the maximum contour area, by a method including, but not limited to, the Non-Maximum Suppression (NMS) algorithm, as follows:
C_max = argmax_{C_i} S_Ci, subject to S_Ci ≥ T_Smin
wherein C_i represents the i-th contour; S_Ci represents the contour area of the i-th contour; T_Smin represents the minimum threshold corresponding to the contour area; and C_max represents the retained contour with the largest contour area.
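The area filter above can be sketched in a few lines of plain Python over precomputed contour areas (such as those OpenCV's contourArea would return); the helper name and the return convention are illustrative assumptions, not part of the patent:

```python
def keep_largest_contour(contour_areas, t_smin):
    """Contour filtering of step S13: keep only the contour with the largest
    area, requiring that it exceed the minimum area threshold T_Smin."""
    i_max = max(range(len(contour_areas)), key=lambda i: contour_areas[i])
    if contour_areas[i_max] < t_smin:
        return None  # no contour is large enough to be the leaf body
    return i_max

areas = [12.0, 850.0, 40.0]          # e.g. cv2.contourArea per contour
idx = keep_largest_contour(areas, t_smin=100.0)  # index of the leaf contour
```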
S14, filling the reserved outline and creating a tobacco leaf template;
Specifically, the retained contour is filled based on the drawContours operator to create the tobacco leaf template; the contour filling method is as follows:
mask(x, y) = (filled, filled, filled) if (x, y) lies inside C_max, and (0, 0, 0) otherwise
wherein mask represents the tobacco leaf template; C_max represents the contour with the largest contour area; filled represents the fill value; and R, G, B represent the color intensities of the red (R), green (G) and blue (B) channels, respectively.
And S15, performing bitwise AND processing on the pixel values of the first tobacco leaf image and the tobacco leaf template, and extracting the region of interest.
Specifically, the unprocessed first tobacco leaf image and the tobacco leaf template created in step S14 are combined by a pixel-wise bitwise AND to extract the tobacco leaf region of interest; the bitwise AND processing is as follows:
dst=src&mask;
wherein dst represents the region of interest; src represents the first tobacco leaf image without processing; mask represents the tobacco template.
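The S15 masking step amounts to an element-wise AND of two image arrays. A minimal NumPy sketch (OpenCV images are NumPy arrays; the helper name and toy pixel values are illustrative):

```python
import numpy as np

def apply_leaf_mask(src, mask):
    """Bitwise-AND the image with the filled leaf template (step S15).

    src  -- first tobacco leaf image, uint8, shape (H, W, 3)
    mask -- leaf template, uint8, same shape; 255 inside the leaf contour, 0 outside
    """
    return src & mask  # equivalent to cv2.bitwise_and(src, mask)

# Toy 2x2 image: only the pixel covered by the mask survives.
src = np.array([[[10, 20, 30], [40, 50, 60]],
                [[70, 80, 90], [7, 8, 9]]], dtype=np.uint8)
mask = np.zeros_like(src)
mask[0, 0] = 255  # leaf region is the top-left pixel
dst = apply_leaf_mask(src, mask)
```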
Tobacco leaf grading is closely and inseparably related to the position on the plant where the leaf grew, the leaf color, the vein distribution, and the degree of leaf damage, and the growing position is in turn related to morphological attributes such as leaf length and width. In order to better extract tobacco leaf features, two image acquisition modules are used to acquire data in a real-time tobacco leaf grading scene.
S2, extracting to obtain low-dimensional features according to the first tobacco leaf image and the region of interest;
Specifically, the low-dimensional features are extracted from the first tobacco leaf image acquired by the first image acquisition module:
S_dst = { dst_HSV|H, dst_HSV|S, dst_HSV|V, C_maxij, C_ratio }
wherein S_dst represents the extracted low-dimensional features; dst_HSV|H represents the mean and variance of the sampled pixel values of the hue (H) channel of the region of interest; dst_HSV|S represents the mean and variance of the sampled pixel values of the saturation (S) channel of the region of interest; dst_HSV|V represents the mean and variance of the sampled pixel values of the lightness (V) channel of the region of interest; C_maxij represents the tobacco leaf morphological coordinate in the i-th row and j-th column corresponding to the contour with the largest contour area; and C_ratio represents the tobacco leaf damaged-area ratio.
Preferably, in step S2, the low-dimensional features at least comprise any one or more combinations of the mean and variance of sampled pixel values of the region of interest in each HSV channel, the tobacco leaf morphological coordinates, and the tobacco leaf damaged-area ratio.
Further, the method for extracting the mean and variance of the sampled pixel values of the region of interest in each HSV channel comprises: extracting the H, S and V channels of the HSV color space of the tobacco leaf region of interest, and randomly selecting n patches in each channel, where n is a positive integer greater than 0; the mean and variance of the sampled pixel values of each channel are then calculated separately.
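A sketch of this patch-sampling step, assuming a region of interest already converted to HSV; the patch size, patch count, and helper name are illustrative choices not fixed by the text:

```python
import numpy as np

def hsv_patch_stats(hsv_roi, n=8, patch=5, rng=None):
    """Mean and variance of sampled pixel values for each HSV channel (step S2).

    hsv_roi -- region of interest already converted to HSV, shape (H, W, 3)
    n       -- number of randomly selected patches per channel
    patch   -- side length of each square patch (illustrative choice)
    Returns [mean_H, var_H, mean_S, var_S, mean_V, var_V].
    """
    if rng is None:
        rng = np.random.default_rng(0)
    h, w, _ = hsv_roi.shape
    ys = rng.integers(0, h - patch + 1, size=n)
    xs = rng.integers(0, w - patch + 1, size=n)
    feats = []
    for c in range(3):  # H, S, V channels in turn
        samples = np.concatenate(
            [hsv_roi[y:y + patch, x:x + patch, c].ravel() for y, x in zip(ys, xs)])
        feats += [float(samples.mean()), float(samples.var())]
    return feats

# Constant image: every patch has the same value, so variance is 0.
roi = np.full((32, 32, 3), 100, dtype=np.uint8)
stats = hsv_patch_stats(roi)
```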
In a preferred embodiment, the method for extracting the form coordinates of the tobacco leaves includes:
establishing an auxiliary two-dimensional coordinate system in the reserved contour, wherein the abscissa of the auxiliary two-dimensional coordinate system is the same as the ordinate of the pixel coordinate system, and the ordinate of the auxiliary two-dimensional coordinate system is opposite to the abscissa of the pixel coordinate system;
and extracting end point coordinates of the tobacco leaves in the length direction and the width direction to obtain tobacco leaf form coordinates.
Specifically, an auxiliary two-dimensional coordinate system is established on the contour C_max retained in step S13, for extracting the coordinate values of the four end points of the tobacco leaf so as to describe the tobacco leaf form.
Furthermore, the abscissa of the auxiliary two-dimensional coordinate system is a connecting line in the length direction of the tobacco leaves, and the direction of the connecting line is the same as the ordinate of the pixel coordinate system; the ordinate of the auxiliary two-dimensional coordinate system is a connecting line in the width direction of the tobacco leaves (the widest position of the tobacco leaves), and the direction of the ordinate is opposite to the abscissa of the pixel coordinate system.
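One plausible reading of the end-point extraction is that it takes the four coordinate extremes of the retained contour. The sketch below works directly in pixel coordinates and leaves aside the auxiliary-axis sign conventions described above; the helper name is an assumption:

```python
import numpy as np

def leaf_endpoint_coords(contour):
    """Extract the four end points of the leaf contour (extremes in the length
    and width directions), a common way to describe leaf morphology.

    contour -- array of (x, y) contour points, shape (N, 2), pixel coordinates
    Returns (leftmost, rightmost, topmost, bottommost) points.
    """
    leftmost = contour[contour[:, 0].argmin()]
    rightmost = contour[contour[:, 0].argmax()]
    topmost = contour[contour[:, 1].argmin()]
    bottommost = contour[contour[:, 1].argmax()]
    return leftmost, rightmost, topmost, bottommost

# Diamond-shaped toy contour.
pts = np.array([[0, 5], [10, 5], [5, 0], [5, 12]])
l, r, t, b = leaf_endpoint_coords(pts)
```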
In a preferred embodiment, the method for extracting the tobacco leaf damaged-area ratio comprises:
C_ratio = ( Σ_{S_Ci < T_Smax} S_Ci ) / S_Cmax
wherein S_Ci represents the contour area of the i-th contour; T_Smax represents the maximum threshold corresponding to the contour area; C_max represents the contour with the largest contour area, with area S_Cmax; and C_ratio represents the tobacco leaf damaged-area ratio.
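Under the reading that C_ratio sums the areas of the small residual contours (those below T_Smax) and divides by the area of the largest contour, the computation can be sketched as follows; the function name and threshold value are illustrative:

```python
def damaged_area_ratio(contour_areas, t_smax):
    """Tobacco leaf damaged-area ratio: summed area of small residual
    contours (area below t_smax) divided by the area of the largest contour.

    contour_areas -- areas of all detected contours (largest = leaf body)
    t_smax        -- maximum-area threshold separating residual fragments
    """
    s_cmax = max(contour_areas)
    residual = sum(a for a in contour_areas if a < t_smax and a != s_cmax)
    return residual / s_cmax

areas = [1000.0, 30.0, 20.0]  # leaf body plus two broken-off fragments
ratio = damaged_area_ratio(areas, t_smax=100.0)
```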
S3, extracting high-dimensional features from a second tobacco leaf image under a light source based on a convolutional neural network;
specifically, the second tobacco leaf image can be obtained by shooting through a second image acquisition module, and the second image acquisition module is preferably a camera. In a preferred embodiment, a compensation light source is arranged below the second image acquisition module for obtaining more detailed characteristics. When the tobacco leaves are conveyed below the second image acquisition module, the compensating light source is preferably located right below the tobacco leaves.
Furthermore, the first tobacco leaf image and the second tobacco leaf image are captured at a preset time interval, where the preset time is longer than the time required for a camera to acquire one image and shorter than the conveying interval between two adjacent tobacco leaves arriving at the same camera.
In a preferred embodiment, step S3 further includes:
and clustering the high-dimensional features based on a K-means algorithm, and dividing the number of elements.
Further, the high-dimensional features include h elements, where h is a clustering parameter of a K-means algorithm, that is, the number of clusters (elements) divided after the high-dimensional features are clustered.
Specifically, in the present embodiment, the high-dimensional feature S_CNN is extracted based on a Convolutional Neural Network (CNN), and the high-dimensional features are then clustered with the K-means algorithm to divide the number of elements.
The convolutional neural network can adopt the existing and trained CNN model for tobacco leaf grading feature extraction, and is not described herein again.
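One way to interpret the clustering step is that the scalar components of the CNN feature vector are grouped into h clusters whose centers form the reduced feature S_CNN. The sketch below implements a tiny Lloyd's-algorithm K-means under that assumption; the names and parameters are illustrative:

```python
import numpy as np

def kmeans_reduce(features, h, iters=20, seed=0):
    """Cluster the scalar components of a CNN feature vector into h groups
    (K-means, Lloyd's algorithm) and return the h cluster centers, sorted,
    as the reduced high-dimensional feature."""
    rng = np.random.default_rng(seed)
    x = np.asarray(features, dtype=float)
    centers = rng.choice(x, size=h, replace=False)  # random distinct initializers
    for _ in range(iters):
        # Assign each component to its nearest center, then recompute centers.
        labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        for k in range(h):
            if (labels == k).any():
                centers[k] = x[labels == k].mean()
    return np.sort(centers)

# Two well-separated groups of component values collapse to h = 2 centers.
vec = [0.1, 0.2, 0.15, 5.0, 5.1, 4.9]
s_cnn = kmeans_reduce(vec, h=2)
```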
S4, combining the low-dimensional features and the high-dimensional features to obtain combined features;
specifically, in order to accelerate convergence, the low-dimensional features and the high-dimensional features are normalized and combined to obtain a combined feature:
F = S_dst ∪ S_CNN
wherein F represents the combined feature and contains 8 + h feature elements; S_dst represents the above low-dimensional features; S_CNN represents the above high-dimensional features.
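A minimal sketch of forming the combined feature, assuming min-max normalization (the text says the features are normalized to accelerate convergence but does not name the scheme); all names and sample values are illustrative:

```python
import numpy as np

def build_joint_feature(s_dst, s_cnn):
    """Join the low-dimensional features S_dst (8 elements) with the clustered
    high-dimensional features S_CNN (h elements), min-max normalizing each
    part first (an assumed normalization choice)."""
    def minmax(v):
        v = np.asarray(v, dtype=float)
        span = v.max() - v.min()
        return (v - v.min()) / span if span > 0 else np.zeros_like(v)
    return np.concatenate([minmax(s_dst), minmax(s_cnn)])

s_dst = [120.0, 15.0, 90.0, 10.0, 200.0, 20.0, 55.0, 0.05]  # 8 low-dim values
s_cnn = [0.15, 5.0]                                          # h = 2 centers
f = build_joint_feature(s_dst, s_cnn)  # 8 + h = 10 elements in [0, 1]
```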
And S5, inputting the combined features into a classifier, and outputting a classification result of the tobacco leaves to be classified.
Furthermore, the classifier is a LightGBM (Light Gradient Boosting Machine) classifier, implemented on the GBDT (Gradient Boosting Decision Tree) algorithm framework; it supports efficient parallel training and has the advantages of faster training speed, lower memory consumption and higher accuracy. The combined features are fed into the LightGBM classifier for prediction to obtain the classification result of the tobacco leaves to be classified.
Furthermore, because the extracted features are mainly weak features and the sample size is relatively small, the LightGBM classifier, as a strong classifier formed by combining multiple weak classifiers through the boosting idea, improves the classification accuracy.
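A hedged sketch of the classification step on synthetic data. The text specifies LightGBM; scikit-learn's GradientBoostingClassifier, from the same GBDT family of boosted weak learners, stands in here so the example needs no extra dependency (lightgbm's LGBMClassifier could be dropped in the same way):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Synthetic combined features: 8 + h = 10 columns, 3 toy tobacco leaf grades.
X = rng.random((60, 10))
y = np.repeat([0, 1, 2], 20)  # illustrative grade labels
X[y == 1, 0] += 1.0           # shift features so the grades are separable
X[y == 2, 1] += 1.0

# GBDT stand-in for the LightGBM classifier described above.
clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)
pred = clf.predict(X)
```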
The invention also provides a tobacco leaf grading system based on visual feature fusion, used for implementing the above tobacco leaf grading method based on visual feature fusion; as shown in fig. 3, the tobacco leaf grading system based on visual feature fusion comprises:
the first image acquisition module 1 is used for acquiring a first tobacco leaf image;
the second image acquisition module 2 is used for acquiring a second tobacco leaf image under the light source;
the image preprocessing module 3 is connected with the first image acquisition module 1 and is used for preprocessing images and extracting an interested area of the tobacco leaves;
the feature extraction module 4 includes: a first feature extraction unit 41 connected to the image preprocessing module for extracting low-dimensional features; the second feature extraction unit 42 is connected with the second image acquisition module and is used for extracting high-dimensional features based on a convolutional neural network;
Specifically, in the feature extraction module 4, feature extraction is performed in accordance with the national tobacco leaf grading rules and by simulating the manual grading process;
the combined feature construction module 5, connected with the feature extraction module 4 and used for combining the low-dimensional features and the high-dimensional features to obtain combined features;
and the classification module 6, connected with the combined feature construction module 5, in which a classifier is preset, used for importing the combined features and outputting the classification result of the tobacco leaves to be classified according to the combined features.
As a preferred embodiment, the image preprocessing module 3 performs image preprocessing based on OpenCV operators, removing noise and extracting the tobacco leaf region of interest.
As a preferred embodiment, the feature extraction module 4 further includes:
and the dividing unit 43 is connected to the second feature extracting unit 42, and is configured to cluster the high-dimensional features based on a K-means algorithm, and divide the number of elements.
Specifically, the dividing unit 43 divides the CNN high-dimensional feature elements based on the K-means algorithm.
The technical scheme of the invention has the following advantages or beneficial effects: according to the invention, the low-dimensional features and the high-dimensional features are fused, so that the tobacco leaves are classified based on visual feature fusion, and the accuracy and efficiency of the classification result are improved.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made without departing from the spirit and scope of the invention.

Claims (10)

1. A tobacco leaf grading method based on visual feature fusion is characterized by comprising the following steps:
S1, extracting a region of interest of tobacco leaves to be classified from a first tobacco leaf image;
s2, extracting low-dimensional features according to the first tobacco leaf image and the region of interest;
s3, extracting high-dimensional features from a second tobacco leaf image under a light source based on a convolutional neural network;
s4, combining the low-dimensional features and the high-dimensional features to obtain combined features;
and S5, inputting the combined features into a classifier, and outputting the classification result of the tobacco leaves to be classified.
2. The visual feature fusion-based tobacco leaf grading method according to claim 1, wherein in the step S1, the extracting method of the region of interest comprises:
s11, preprocessing the first tobacco leaf image, wherein the preprocessing at least comprises any one or more of graying conversion and median filtering;
s12, obtaining tobacco leaf edge information, and performing morphological conversion to obtain a tobacco leaf edge binary image;
s13, extracting the contour of the binary image of the tobacco leaf edge, filtering the contour, and keeping the contour with the largest contour area;
s14, filling the reserved outline, and creating a tobacco leaf template;
and S15, performing bit-wise AND processing on the first tobacco leaf image and the pixel value of the tobacco leaf template, and extracting to obtain the region of interest.
3. The visual feature fusion-based tobacco leaf grading method according to claim 2, wherein in the step S2, the low-dimensional features at least comprise any one or more combinations of the mean and variance of sampled pixel values of the region of interest in each HSV channel, the tobacco leaf morphological coordinates, and the tobacco leaf damaged-area ratio.
4. The tobacco leaf grading method based on visual feature fusion according to claim 3, wherein the extraction method of the tobacco leaf morphological coordinates comprises:
establishing an auxiliary two-dimensional coordinate system in the reserved contour, wherein the abscissa of the auxiliary two-dimensional coordinate system is the same as the ordinate of a pixel coordinate system, and the ordinate of the auxiliary two-dimensional coordinate system is opposite to the abscissa of the pixel coordinate system;
and extracting end point coordinates of the tobacco leaves in the length direction and the width direction to obtain the tobacco leaf form coordinates.
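A minimal sketch of the coordinate mapping in claim 4, assuming "opposite to" means the sign of the pixel abscissa is reversed (an interpretation, since the claim does not define it further); the endpoint values are hypothetical:

```python
def pixel_to_auxiliary(x_pix, y_pix):
    """Map a pixel coordinate to the auxiliary system of claim 4:
    the auxiliary abscissa equals the pixel ordinate, and the auxiliary
    ordinate is taken opposite to the pixel abscissa (sign reversed)."""
    return (y_pix, -x_pix)

def leaf_morphology_coords(endpoints):
    """Endpoints of the leaf in the length and width directions,
    expressed in the auxiliary system."""
    return [pixel_to_auxiliary(x, y) for x, y in endpoints]

# Hypothetical endpoints in pixel coordinates (x right, y down).
print(leaf_morphology_coords([(10, 40), (250, 40), (130, 5), (130, 90)]))
# [(40, -10), (40, -250), (5, -130), (90, -130)]
```

The four mapped endpoints then serve as the tobacco leaf morphological coordinates of claim 3.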
5. The visual feature fusion-based tobacco leaf grading method according to claim 3, wherein the method for extracting the tobacco leaf damage area ratio comprises:
(formula published as an image in the original document and not reproduced here)
wherein S_i^C represents the contour area of the ith contour; T_Smax represents the maximum threshold corresponding to the contour area; C_max represents the contour with the largest contour area; and C_ratio represents the tobacco leaf damage area ratio.
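Because the formula of claim 5 appears only as an image in the published text, the sketch below encodes one plausible reading of its symbol definitions: contours with area below T_Smax are treated as damage regions, and their summed area is normalised by the area of the largest contour C_max. This is an interpretation, not the patented formula:

```python
def damage_area_ratio(contour_areas, t_smax):
    """One plausible reading of claim 5: treat every retained contour
    whose area is below t_smax as a damage region, and normalise the
    summed damage area by the largest contour's area (the leaf outline)."""
    if not contour_areas:
        return 0.0
    s_max = max(contour_areas)  # area of C_max
    damage = sum(a for a in contour_areas if a < t_smax)
    return damage / s_max

# Hypothetical areas: leaf outline 1000 plus three small damage contours.
print(damage_area_ratio([1000.0, 40.0, 25.0, 15.0], t_smax=100.0))  # 0.08
```

Under this reading, the ratio grows with the total area of small defect contours relative to the whole leaf.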
6. The visual feature fusion-based tobacco leaf grading method according to claim 1, wherein the step S3 further comprises:
clustering the high-dimensional features based on a K-means algorithm to partition the elements;
wherein the high-dimensional features comprise h elements, and h is the clustering parameter of the K-means algorithm.
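The clustering step of claim 6 can be sketched with a minimal K-means implementation; a production system would more likely call sklearn.cluster.KMeans, and the two-dimensional toy features below are invented:

```python
import numpy as np

def kmeans(points, h, iters=20, seed=0):
    """Minimal K-means: partition the feature elements into h clusters."""
    rng = np.random.default_rng(seed)
    points = np.asarray(points, dtype=float)
    # Initialise centres from h distinct random points.
    centres = points[rng.choice(len(points), size=h, replace=False)]
    for _ in range(iters):
        # Assign each element to its nearest centre.
        d = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its members.
        for k in range(h):
            if np.any(labels == k):
                centres[k] = points[labels == k].mean(axis=0)
    return labels, centres

# Two well-separated blobs, clustering parameter h = 2.
feats = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
labels, _ = kmeans(feats, h=2)
print(labels[0] == labels[1], labels[2] == labels[3], labels[0] != labels[2])
# True True True
```

Here h plays the role of the clustering parameter of claim 6: the number of groups into which the high-dimensional feature elements are divided.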
7. The visual feature fusion-based tobacco leaf grading method according to claim 1, wherein in the step S5, the classifier is a LightGBM classifier.
8. A tobacco leaf grading system based on visual feature fusion, for implementing the visual feature fusion-based tobacco leaf grading method according to any one of claims 1 to 7, the system comprising:
the first image acquisition module is used for acquiring a first tobacco leaf image;
the second image acquisition module is used for acquiring a second tobacco leaf image under the light source;
the image preprocessing module is connected with the first image acquisition module and used for preprocessing the image and extracting the region of interest of the tobacco leaves;
a feature extraction module comprising: the first feature extraction unit is connected with the image preprocessing module and used for extracting low-dimensional features; the second feature extraction unit is connected with the second image acquisition module and used for extracting high-dimensional features based on a convolutional neural network;
the combined feature construction module is connected with the feature extraction module and used for combining the low-dimensional features and the high-dimensional features to obtain the combined features;
and the grading module is connected with the combined feature construction module, wherein a classifier is preset in the grading module and used for outputting the grading result of the tobacco leaves to be graded according to the combined features.
9. The visual feature fusion-based tobacco leaf grading system according to claim 8, wherein the image preprocessing module performs image preprocessing based on OpenCV operators.
10. The visual feature fusion-based tobacco leaf grading system according to claim 8, wherein the feature extraction module further comprises:
and the dividing unit is connected with the second feature extraction unit and used for clustering the high-dimensional features based on a K-means algorithm to partition the elements.
CN202210868044.0A 2022-07-21 2022-07-21 Tobacco leaf grading method and system based on visual feature fusion Pending CN115358972A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210868044.0A CN115358972A (en) 2022-07-21 2022-07-21 Tobacco leaf grading method and system based on visual feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210868044.0A CN115358972A (en) 2022-07-21 2022-07-21 Tobacco leaf grading method and system based on visual feature fusion

Publications (1)

Publication Number Publication Date
CN115358972A true CN115358972A (en) 2022-11-18

Family

ID=84031122

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210868044.0A Pending CN115358972A (en) 2022-07-21 2022-07-21 Tobacco leaf grading method and system based on visual feature fusion

Country Status (1)

Country Link
CN (1) CN115358972A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115953384A (en) * 2023-01-10 2023-04-11 杭州首域万物互联科技有限公司 On-line detection and prediction method for tobacco morphological parameters
CN115953384B (en) * 2023-01-10 2024-02-02 杭州首域万物互联科技有限公司 Online detection and prediction method for morphological parameters of tobacco leaves

Similar Documents

Publication Publication Date Title
CN104050471B (en) Natural scene character detection method and system
CN107742274A (en) Image processing method, device, computer-readable recording medium and electronic equipment
CN108960011B (en) Partially-shielded citrus fruit image identification method
CN110309806B (en) Gesture recognition system and method based on video image processing
CN109740721B (en) Wheat ear counting method and device
CN109711268B (en) Face image screening method and device
CN116071763B (en) Teaching book intelligent correction system based on character recognition
CN111160194B (en) Static gesture image recognition method based on multi-feature fusion
CN111967319A (en) Infrared and visible light based in-vivo detection method, device, equipment and storage medium
EP3989161A1 (en) Method and system for leaf age estimation based on morphological features extracted from segmented leaves
CN108711160B (en) Target segmentation method based on HSI (high speed input/output) enhanced model
CN110032946A (en) A kind of aluminium based on machine vision/aluminium blister package tablet identification and localization method
Cai et al. Perception preserving decolorization
CN113781421A (en) Underwater-based target identification method, device and system
CN114998214A (en) Sampling speed control method and system for cable defect detection
CN115358972A (en) Tobacco leaf grading method and system based on visual feature fusion
CN107292898B (en) A kind of license plate shadow Detection and minimizing technology based on HSV
CN112258545A (en) Tobacco leaf image online background processing system and online background processing method
CN114511567A (en) Tongue body and tongue coating image identification and separation method
CN111192213A (en) Image defogging adaptive parameter calculation method, image defogging method and system
CN112465753B (en) Pollen particle detection method and device and electronic equipment
CN107239761A (en) Fruit tree branch pulling effect evaluation method based on skeleton Corner Detection
CN111768456A (en) Feature extraction method based on wood color space
CN111414960A (en) Artificial intelligence image feature extraction system and feature identification method thereof
CN113610187B (en) Wood texture extraction and classification method based on image technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination