CN108416814B - Method and system for quickly positioning and identifying pineapple head - Google Patents

Method and system for quickly positioning and identifying pineapple head

Info

Publication number
CN108416814B
CN108416814B (application number CN201810139803.3A)
Authority
CN
China
Prior art keywords
image
pineapple
head
color
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810139803.3A
Other languages
Chinese (zh)
Other versions
CN108416814A (en)
Inventor
刘长红
钟志鹏
程健翔
黄楠
陈建堂
吴文浩
舒华
彭绍湖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou University
Original Assignee
Guangzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou University filed Critical Guangzhou University
Priority to CN201810139803.3A priority Critical patent/CN108416814B/en
Publication of CN108416814A publication Critical patent/CN108416814A/en
Application granted granted Critical
Publication of CN108416814B publication Critical patent/CN108416814B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
        • G06T7/00 Image analysis
            • G06T7/90 Determination of colour characteristics
            • G06T7/10 Segmentation; Edge detection
                • G06T7/136 involving thresholding
                • G06T7/181 involving edge growing; involving edge linking
                • G06T7/187 involving region growing; involving region merging; involving connected component labelling
            • G06T7/60 Analysis of geometric attributes
                • G06T7/66 of image moments or centre of gravity
        • G06T2207/00 Indexing scheme for image analysis or image enhancement
            • G06T2207/10 Image acquisition modality
                • G06T2207/10024 Color image
            • G06T2207/20 Special algorithmic details
                • G06T2207/20081 Training; Learning
            • G06T2207/30 Subject of image; Context of image processing
                • G06T2207/30181 Earth observation
                    • G06T2207/30188 Vegetation; Agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a system for quickly positioning and identifying a pineapple head. The method specifically comprises the following steps: collecting an RGB image that may contain pineapple information; converting the RGB image into an HSV image, performing threshold segmentation and contour processing on the HSV image, locating the suspected target area with the largest area, and extracting the corresponding image area from the RGB image as the region of interest; generating a color histogram of the region of interest, matching it against preset pineapple head color histograms for different environments, and judging whether the similarity reaches a set threshold value; meanwhile, extracting the pineapple head fruit-eye features in the region of interest, inputting the features into a classifier, judging whether pineapple head fruit eyes exist, and calculating the center coordinates of the fruit eyes; finally, judging whether the collected image contains a pineapple head according to the color similarity and the fruit-eye features obtained in the preceding steps. The invention realizes rapid positioning and recognition of the pineapple head, helps reduce the risk of injury during pineapple picking, and improves labor efficiency.

Description

Method and system for quickly positioning and identifying pineapple head
Technical Field
The invention mainly relates to computer vision identification and positioning, in particular to a method and a system for quickly positioning and identifying a pineapple head.
Background
Fruit is a daily necessity, and how to pick it more efficiently has become a problem to be solved. Pineapple is a fruit with a strongly seasonal harvest: the peak of fruit production lasts only about half a month, so improving pineapple picking efficiency is a necessary direction of development. At present, most pineapples are picked manually, which consumes a large amount of manpower and material resources; moreover, pineapples carry large thorns that can injure fruit growers. Research on automatic pineapple picking is therefore of great significance for reducing labor cost, improving picking quality, stabilizing the pineapple harvest and increasing the fruit sales rate.
Disclosure of Invention
The invention aims to provide a method for quickly positioning and identifying the head of a pineapple that is computationally simple and can quickly identify the pineapple head, thereby solving the problem of automatic pineapple recognition and effectively improving the quality of automatic pineapple picking.
The purpose of the invention can be realized by the following technical scheme:
a method for quickly positioning and identifying the head of a pineapple specifically comprises the following steps:
S1, collecting RGB images possibly containing pineapple information;
S2, converting the RGB image into an HSV image, performing threshold segmentation and image preprocessing on the HSV image to obtain a plurality of suspected target areas, positioning the position of the area with the largest area, and extracting the image area at that position in the RGB image as the region of interest;
S3, generating a color histogram of the region of interest, matching the color histogram with preset pineapple head color histograms in different environments, and judging whether the similarity reaches a preset value;
meanwhile, extracting pineapple head fruit eye features of the region of interest, inputting the features into a classifier, judging whether pineapple head fruit eyes exist or not, and calculating center coordinates of the pineapple eyes;
and S4, judging whether the collected image contains a pineapple head according to the pineapple head color similarity and the fruit-eye features obtained in step S3.
Further, the threshold segmentation specifically includes the steps of:
firstly, setting 3 groups of thresholds according to the maximum color proportion of the detection target in the converted HSV image, namely a hue threshold range [H_down, H_up], a saturation threshold range [S_down, S_up] and a brightness threshold range [V_down, V_up]; then comparing the hue, saturation and brightness values of the HSV image with these 3 groups of threshold ranges respectively, specifically: if a pixel value falls within its threshold range, it is set to 255; if it falls outside the threshold range, it is set to 0; 255 represents a white pixel and 0 represents a black pixel.
Further, when the image is subjected to threshold segmentation in step S2, since most pineapples are planted in the field, the brightness value of the pineapple head captured by the camera fluctuates under different light intensities. Therefore, after the illumination intensity is obtained from the illumination-intensity formula, the brightness threshold needs to be compensated according to that intensity.
Thus, the 3 sets of thresholds [H_down, H_up], [S_down, S_up], [V_down, V_up] are set as follows:
H_up=upper_H
H_down=lower_H
S_up=upper_S
S_down=lower_S
V_up=upper_V+I×b
V_down=lower_V+I×b
wherein the upper-limit and lower-limit color values of hue, saturation and brightness are represented by upper_H, lower_H, upper_S, lower_S, upper_V and lower_V respectively and are looked up from an HSV color comparison table; b is the illumination compensation coefficient, a set value; I is the illumination intensity, calculated by the following formula:
I=hvN/(At)
in the above expression, I × b indicates that the fluctuation range of the upper threshold and the lower threshold is [0, I × b ], and does not indicate a strict increase in the value of I × b.
Further, in the image preprocessing after the threshold segmentation in step S2, the image is first converted into a grayscale image and then the corresponding processing is performed. Converting to a grayscale image reduces the channel depth of the image and increases its processing speed. The conversion formula for converting the thresholded image into a grayscale image is as follows:
Gray=0.3414×H+0.5478×S+0.1108×V
The image after threshold segmentation is a binary, three-channel image with a channel depth of 3; in the converted grayscale image the pixel values are only 0 and 255 and the channel depth is 1, so the operation speed is greatly improved.
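A minimal sketch of this channel reduction is given below, assuming the thresholded image is a 3-channel array whose channels already hold only 0 or 255; the final re-binarisation step is an assumption added here to keep the output strictly two-valued, as the text describes.

    import numpy as np

    def thresholded_hsv_to_gray(thresholded):
        # Gray = 0.3414*H + 0.5478*S + 0.1108*V, applied channel-wise; the weights
        # sum to 1, so an all-white pixel stays 255 and an all-black pixel stays 0.
        h = thresholded[..., 0].astype(np.float64)
        s = thresholded[..., 1].astype(np.float64)
        v = thresholded[..., 2].astype(np.float64)
        gray = 0.3414 * h + 0.5478 * s + 0.1108 * v
        # Keep the result strictly binary (0 or 255), as described in the text.
        return np.where(gray >= 128, 255, 0).astype(np.uint8)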
Further, in the image preprocessing, the corresponding processing after the image conversion specifically includes: filtering processing and edge filling.
The filtering process flattens white points that are discontinuous along the X axis and the Y axis, and is named the XY-axis discontinuous denoising method. Further, the XY-axis discontinuous denoising method specifically includes:
presetting a judgment value and flattening the discontinuous small white points along the X axis and the Y axis of the converted grayscale image respectively: if the run of white points along the X axis or the Y axis is shorter than the judgment value, the small white points on that axis are flattened, i.e. turned black; if the run of white points along the X axis or the Y axis is greater than or equal to the judgment value, the small white points on that axis are not flattened, i.e. they remain white. Through the XY-axis discontinuous denoising method, the redundant color points of non-target objects left after threshold segmentation can be rapidly removed.
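The sketch below is one possible reading of this filter: runs of white pixels shorter than the judgment value are set to black, first along every row (X axis) and then along every column (Y axis). The function name and the default judgment value of 2 (the value chosen later in the embodiment) are assumptions.

    import numpy as np

    def xy_discontinuous_denoise(binary, judge=2):
        def flatten_short_runs(img):
            out = img.copy()
            for r in range(out.shape[0]):
                row = out[r]
                start = None
                for c in range(len(row) + 1):
                    is_white = c < len(row) and row[c] == 255
                    if is_white and start is None:
                        start = c                    # a white run begins
                    elif not is_white and start is not None:
                        if c - start < judge:        # run shorter than the judgment value
                            row[start:c] = 0         # grind it flat (turn black)
                        start = None
            return out

        along_x = flatten_short_runs(binary)         # scan rows: X-axis runs
        along_y = flatten_short_runs(along_x.T).T    # scan columns: Y-axis runs
        return along_y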
After the image is denoised according to the XY axis discontinuous denoising method, the edge of the target image becomes incomplete, and the contour is not convenient to capture, so that the denoised image needs to be subjected to edge filling.
Further, the edge filling specifically includes:
firstly, rough X-axis edge processing is carried out on the filtered image: using the thresholded image, each pixel point is subtracted from its right neighbor and from its left neighbor, and the absolute value is taken as the result; if a white point is subtracted from a white point or a black point from a black point, the result is 0, and when a black point is subtracted from a white point or a white point from a black point, the absolute value obtained is 255; the points remaining after rough edge processing are the edge points of the image;
performing edge filling: firstly, an edge point is selected, and the edge in the filtered image is judged with this edge point as the circle center and a radius of 1; if continuous points exist in the small circle of radius 1, i.e. the number of pixel points with value 255 is greater than or equal to 3, the edge is continuous, and the selected edge point is taken as the center of a circle and a filling-circle radius is set for edge filling; these steps are repeated until all edge points obtained by the rough edge processing have been judged, yielding the edge-filled image. Accurate edge detection is then carried out on the edge-filled image, and the contours are stored completely.
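The following sketch illustrates the rough X-axis edge extraction and the circular edge filling just described. The neighbourhood test, the default fill radius of 3 (the R = 3 used later in the embodiment) and the function names are assumptions drawn from the text, not an authoritative implementation.

    import cv2
    import numpy as np

    def rough_edges_x(binary):
        # Right and left subtraction along the X axis with absolute value:
        # |white - black| = 255 marks an edge point; |white - white| = |black - black| = 0.
        b = binary.astype(np.int16)
        edges = np.zeros_like(b)
        edges[:, :-1] = np.maximum(edges[:, :-1], np.abs(b[:, :-1] - b[:, 1:]))
        edges[:, 1:] = np.maximum(edges[:, 1:], np.abs(b[:, 1:] - b[:, :-1]))
        return edges.astype(np.uint8)

    def fill_edges(binary, edges, fill_radius=3):
        out = binary.copy()
        for y, x in zip(*np.nonzero(edges == 255)):
            # Look at the radius-1 neighbourhood around the edge point.
            patch = binary[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            if np.count_nonzero(patch == 255) >= 3:   # the edge is continuous here
                # Paint a filled circle of the chosen radius around the edge point.
                cv2.circle(out, (int(x), int(y)), fill_radius, 255, -1)
        return out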
The contours of the continuous areas are divided into several independent contours and stored, and a maximum rectangle fit is performed on each contour, i.e. the maximum extents of the contour in length and width are taken as the length and width of the maximum fitted rectangular area, and the maximum fitted area of each contour is located. The fitted region with the largest area is then located, and the image area at that position in the RGB image is extracted as the region of interest.
Further, the step S3 of performing statistics on the pixel color of the region of interest according to the color distribution and statistical algorithm specifically includes:
counting the colors represented by each pixel point in the region of interest, setting a certain color partition interval, and generating a color histogram; the color histogram is matched with the color histograms preset for different environments, and if the similarity reaches the threshold value, the region is indicated to be a pineapple head with that degree of similarity.
Further, the preset color histograms in different environments specifically include:
Color quantity values of different environments in the same color interval are acquired, where Xi denotes the environment value variable of the i-th environment and F(X1, X2, X3, ...) denotes the color quantity value in one color interval for the different environments. By inputting a large amount of data for the same color interval (such as yellow), the weight values and bias values of a neural network are continuously modified, the deviation value is reduced, an approximating function is fitted, and a standard function histogram is generated.
Further, the identifying of the eye features of the pineapple head in the region of interest in step S3 specifically includes:
Pineapple head fruit-eye samples are collected and divided into positive and negative samples, and the positive and negative samples are trained through an OpenCV training classifier, wherein the positive samples are the target samples to be detected and the negative samples are arbitrary other pictures. The picture size and the proportion of positive to negative samples are set, feature-extraction training is carried out on a large amount of pineapple head fruit-eye data, and the classifier parameters are set, such as the maximum background-color deviation, the maximum rotation angle, and the corrected weight and bias values; the positive and negative samples are put in the same directory for training, and the fruit-eye features are obtained after training.
Another objective of the present invention is to provide a system based on a rapid positioning and identification method of pineapple head, which facilitates the research on the automatic picking system of pineapple.
A quick positioning and identifying system for a pineapple head specifically comprises: an acquisition module, a processing module and a support module;
Furthermore, the acquisition module is used for acquiring images of the pineapples and transmitting the acquired images to the processing module;
further, the processing module is used for carrying out color statistics on the image transmitted by the acquisition module, extracting and matching features of the head of the pineapple and outputting information after image processing; the acquisition module is fixed on the support module;
further, the support module specifically includes: parallel plates, a support frame; the parallel plate is placed at a certain height away from the ground through two support frames, the acquisition module is placed at one side of the center of the parallel plate close to the ground, and the processing module is placed at one side of the center of the parallel plate far away from the ground; the supporting module is used for supporting the processing module and providing a certain height and angle for the acquisition module to acquire images.
Further, the embedded processor in the processing module may be replaced with any combination of computers having image processing capabilities.
Furthermore, the mirror surface of the camera of the acquisition module is parallel to the ground; the camera of the acquisition module acquires images at a resolution of 640×480 and a size of 350×350; at this resolution and size, the camera can accurately capture the features and details of the pineapple head and effectively reduce the processing time.
Compared with the prior art, the invention has the following beneficial effects:
the calculation method adopted by the invention is simple and has high calculation speed, can realize quick positioning and accurate identification of the pineapple head, does not need to adopt a high-speed processor in an actual system, and can reduce the cost.
Drawings
FIG. 1 is a flow chart of a method for rapidly locating and identifying a pineapple head according to an embodiment of the present invention;
FIG. 2 is an image after thresholding in an embodiment of the invention;
FIG. 3 is a schematic diagram of the XY-axis discontinuous denoising method in the embodiment of the present invention;
FIG. 4 is a diagram illustrating the determination of the Y-axis in the embodiment of the present invention;
FIG. 5 is a diagram of the embodiment of the present invention after denoising the X-axis and the Y-axis;
FIG. 6 is an image of the gray scale image after being converted and filtered by an XY-axis discontinuous denoising method in the embodiment of the present invention;
FIG. 7 is a schematic diagram of selecting edge pixels according to an embodiment of the present invention;
FIG. 8 is an image of selected edge points before edge filling according to an embodiment of the present invention;
FIG. 9 is an image after edge filling of selected edge points according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating an edge-filled image after filtering processing according to an embodiment of the present invention;
FIG. 11 is a diagram of a maximum fit region and a center coordinate of the maximum fit region obtained by performing maximum fit processing on an entire target image in the embodiment of the present invention;
FIG. 12 is a 24-partition color map for color statistics in an embodiment of the present invention;
fig. 13 is an eye diagram of a pineapple fruit in an embodiment of the invention.
Detailed Description
The technical solution of the present invention is further described with reference to the drawings and examples, but the scope of the present invention is not limited thereto.
Example:
a flow chart of a method for rapidly positioning and identifying the head of a pineapple is shown in fig. 1.
S1, collecting RGB images possibly containing pineapple head portrait information;
Furthermore, the camera used for image acquisition is mounted on a parallel plate at a height of 1.2 m above the ground, attached to the side of the center of the parallel plate that faces the ground.
Furthermore, the camera used for image acquisition captures images at a resolution of 640×480 and a size of 350×350; at this resolution and size, the camera can accurately capture the features and details of the pineapple head while effectively reducing the processing time.
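Read literally, this acquisition step can be sketched as capturing a 640×480 frame and keeping a central 350×350 patch; whether the 350×350 "size" refers to such a crop is an assumption here, as are the camera index and the use of cv2.VideoCapture.

    import cv2

    def grab_acquisition_frame(camera_index=0, crop=350):
        cap = cv2.VideoCapture(camera_index)
        cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
        cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
        ok, frame = cap.read()
        cap.release()
        if not ok:
            return None
        h, w = frame.shape[:2]
        y0, x0 = (h - crop) // 2, (w - crop) // 2
        return frame[y0:y0 + crop, x0:x0 + crop]   # central 350x350 working area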
S2, converting the RGB image into an HSV image, performing threshold segmentation and image preprocessing on the HSV image to obtain a plurality of suspected target areas, positioning the position of the area with the largest area, and extracting the image area of the position in the RGB image to be used as an interested area;
further, the threshold segmentation specifically includes:
firstly, setting 3 groups of thresholds according to the maximum color proportion of the detection target in the converted HSV image, namely a hue threshold range [H_down, H_up], a saturation threshold range [S_down, S_up] and a brightness threshold range [V_down, V_up]; then comparing the hue, saturation and brightness values of the HSV image with these 3 groups of threshold ranges respectively, specifically: if a pixel value falls within its threshold range, it is set to 255; if it falls outside the threshold range, it is set to 0; 255 represents a white pixel and 0 represents a black pixel.
Further, since most pineapples are planted in a field environment, the brightness value of the pineapple head captured by the camera fluctuates to a certain extent under different light intensities. Therefore, after the illumination intensity is obtained from the illumination-intensity formula, the brightness threshold needs to be compensated according to that intensity.
Thus, the 3 sets of thresholds [H_down, H_up], [S_down, S_up], [V_down, V_up] are set as follows:
H_up=upper_H
H_down=lower_H
S_up=upper_S
S_down=lower_S
V_up=upper_V+I×b
V_down=lower_V+I×b
wherein the upper-limit and lower-limit color values of hue, saturation and brightness are represented by upper_H, lower_H, upper_S, lower_S, upper_V and lower_V respectively and are looked up from an HSV color comparison table; b is the illumination compensation coefficient, a set value; I is the illumination intensity, calculated by the following formula:
I=hvN/(At)
where h is the Planck constant, v represents the frequency, A represents the area of the irradiated region, and N represents the number of photons irradiated onto A within the time interval t; the HSV color comparison table is as follows.
TABLE 1 HSV color comparison Table
In the present embodiment, since the images are acquired indoors, the fluctuation value can be set to 0.5% of the illumination intensity;
therefore, the brightness threshold [V_down, V_up] is set by the following formulas:
V_up=upper_V+I×0.005
V_down=lower_V+I×0.005
where upper_V and lower_V are the upper- and lower-limit brightness color values respectively, looked up from the HSV color comparison table; the illumination compensation coefficient is set to 0.005, and I is the illumination intensity.
The typical indoor illumination intensity under ideal conditions is 2000 lx, so with the compensation coefficient of 0.005 the compensation term is I × b = 2000 × 0.005 = 10; in this embodiment the brightness upper threshold is therefore set to 41 + 10 and the brightness lower threshold to 153 + 10, values obtained under normal indoor daytime illumination in ideal conditions. Under different illumination intensities, the values corresponding to the upper and lower limits in the HSV color comparison table also change, so in practice the specific illumination compensation value and the upper- and lower-limit color values are determined according to the actual conditions. From the brightness-threshold setting process above, the brightness threshold fluctuates within the range [0, I × 0.005], so the brightness threshold of the pineapple head in an actual situation can be set, and the brightness value of the image adjusted, according to the HSV color comparison table and the actually observed effect.
Fig. 2 shows the image obtained after thresholding the originally captured image; the largest white area in fig. 2 is the pineapple head, and many small white dots can also be seen in the image, which are noise points from non-target objects.
The filtering process flattens white points that are discontinuous along the X axis and the Y axis, and is named the XY-axis discontinuous denoising method. A schematic diagram of the XY-axis discontinuous denoising method is shown in fig. 3.
Further, the XY-axis discontinuous denoising method specifically includes:
presetting a judgment value and flattening the discontinuous small white points along the X axis and the Y axis of the converted grayscale image respectively: if the run of white points along the X axis or the Y axis is shorter than the judgment value, the small white points on that axis are flattened, i.e. turned black; if the run of white points along the X axis or the Y axis is greater than or equal to the judgment value, the small white points on that axis are not flattened, i.e. they remain white. Through the XY-axis discontinuous denoising method, the redundant color points of non-target objects left after threshold segmentation can be rapidly removed.
As shown in fig. 4, the grayscale image is denoised according to the XY-axis discontinuous denoising method: the judgment value is set to 3; the run of 3 white points along the X axis meets the judgment value, while the run of 2 white points along the Y axis does not. The result of denoising the grayscale image of fig. 4 is shown in fig. 5: the 3 points on the X axis are not flattened, and the 2 points on the Y axis are flattened. In this embodiment, the XY-axis discontinuous denoising method is applied with a judgment value of 2.
After the image is denoised according to the XY-axis discontinuous denoising method, the edge of the target image becomes incomplete, and the contour is not convenient to capture, as shown in fig. 6, so that the denoised image needs to be edge-filled.
Further, the edge filling specifically includes:
firstly, rough X-axis edge processing is carried out on the filtered image: using the thresholded image, each pixel point is subtracted from its right neighbor and from its left neighbor, and the absolute value is taken as the result; if a white point is subtracted from a white point or a black point from a black point, the result is 0, and when a black point is subtracted from a white point or a white point from a black point, the absolute value obtained is 255; the points remaining after rough edge processing are the edge points of the image;
according to the rough edge processing, the edge points in the image are obtained, and according to the obtained edge points, the edge filling is performed on the filtered image, that is, fig. 6 in this embodiment.
Edge filling is performed as follows. An edge point is first selected; the principle of selecting the edge point is shown in fig. 7. The edge in the image is judged with the selected edge point as the circle center and a radius of 1. If there are continuous points within the circle of radius 1, i.e. the number of pixel points with value 255 is greater than or equal to 3, the edge is continuous; the selected edge point is then taken as the center of a circle (the image before the selected edge point is edge-filled is shown in fig. 8), and a filling-circle radius is set for edge filling. In this embodiment the filling-circle radius is taken as R = 3; all pixel points with value 0 inside the circle are set to 255, and the image after the selected edge point has been edge-filled is shown in fig. 9. These steps are repeated until all edge points obtained by the rough edge processing have been judged, yielding the edge-filled image. Fig. 10 shows the image obtained by edge-filling the filtered image of fig. 6 in this embodiment.
And carrying out accurate edge detection on the image after edge filling, and completely storing the outline.
Dividing the contour of the continuous area into a plurality of independent contours for storage, and performing maximum rectangle fitting on each contour, namely taking the maximum value of the length and the width of the contour as the length and the width of the maximum fitting rectangular area, thereby obtaining the maximum fitting area of each contour.
The maximum rectangle fitting method specifically comprises: searching the edge pixel points of a contour and defining the maximum X coordinate as Max.x, the minimum as Min.x, the maximum Y coordinate as Max.y and the minimum as Min.y; the width of the contour is then defined as Wide, the length of the contour as Length, and the center coordinate of the contour as (X, Y). The formulas for the contour width, contour length and contour center coordinate are as follows:
Wide=Max.x-Min.x
Length=Max.y-Min.y
X=(Max.x+Min.x)/2
Y=(Max.y+Min.y)/2
referring to fig. 11, a maximum fitting region and a maximum fitting region center coordinate obtained by performing maximum fitting processing on an entire target image are shown, wherein W in the drawing represents a maximum width, L represents a maximum length, and black at the center represents a contour center pixel;
S3, generating a color histogram of the region of interest, matching the color histogram with preset pineapple head color histograms in different environments, and judging whether the similarity reaches a preset value;
extracting pineapple head fruit eye features of the region of interest, inputting the features into a classifier, judging whether pineapple head fruit eyes exist or not, and calculating center coordinates of the fruit eyes;
further, the performing statistics of pixel colors on the region of interest according to the color distribution and statistical algorithm specifically includes:
counting the colors represented by each pixel point in the region of interest, setting a certain color partition interval, and generating a color histogram; the color histogram is matched with the color histograms preset for different environments, and if the similarity reaches the threshold value, the region is indicated to be a pineapple head with that degree of similarity.
Further, the color partitions can be set to 12, 24, 46, 72, etc.; the more color partitions there are, the higher the accuracy, but the lower the calculation efficiency. Therefore, in this embodiment, in order to realize quick recognition, 24 color partitions are adopted for pineapple head feature recognition. The 24 color partition intervals are shown in fig. 12.
Further, the preset color histograms in different environments specifically include:
Color quantity values of different environments in the same color interval are acquired, where Xi denotes the environment value variable of the i-th environment and F(X1, X2, X3, ...) denotes the color quantity value in one color interval for the different environments. By inputting a large amount of data for the same color interval (such as yellow), the weight values and bias values of a neural network are continuously modified, the deviation value is reduced, and an approximating function is fitted.
In the present embodiment, to reduce the complexity of the calculation, an indoor environment with an illumination intensity of 2000 lx is used as the environment variable. A coordinate system Y = F(X) is then established, the samples are input, data points are generated, and a function is fitted to the data points. After a good fitting function has been obtained for each color interval, a standard color statistical graph for comparison can be generated. The target object is matched against it, and whether it is a pineapple head is determined according to the similarity.
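As an illustration, the sketch below counts the region-of-interest pixels into 24 partitions and compares the result against a prefitted standard histogram for the current environment. Treating the 24 partitions as hue bins and using a 0.8 correlation threshold are assumptions, since the patent does not give the exact partitioning of fig. 12 or a numeric similarity threshold.

    import cv2
    import numpy as np

    N_PARTITIONS = 24   # 24 color partitions, as chosen in this embodiment

    def roi_color_histogram(roi_bgr, bins=N_PARTITIONS):
        hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
        # Count pixels per hue partition and normalise so histograms are comparable.
        hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180])
        return cv2.normalize(hist, hist).flatten()

    def is_pineapple_head(roi_bgr, standard_hist, threshold=0.8):
        # standard_hist: the fitted standard-function histogram for this environment.
        similarity = cv2.compareHist(roi_color_histogram(roi_bgr),
                                     np.asarray(standard_hist, dtype=np.float32),
                                     cv2.HISTCMP_CORREL)
        return similarity >= threshold, similarity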
Further, the identifying of the pineapple head-eye features of the region of interest specifically includes:
Pineapple head fruit-eye samples are collected and divided into positive and negative samples, and the positive and negative samples are trained through an OpenCV training classifier, wherein the positive samples are the target samples to be detected and the negative samples are arbitrary other pictures. The picture size and the proportion of positive to negative samples are set, feature-extraction training is carried out on a large amount of pineapple head fruit-eye data, and the classifier parameters are set, such as the maximum background-color deviation, the maximum rotation angle, and the corrected weight and bias values; the positive and negative samples are put in the same directory for training, and the fruit-eye features are obtained after training. The pineapple head fruit eyes are shown in fig. 13.
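A sketch of how such a trained fruit-eye classifier could be applied to the region of interest follows. The cascade-tool command lines in the comments and the detection parameters are placeholders; only cv2.CascadeClassifier and detectMultiScale are standard OpenCV calls, and the file paths are hypothetical.

    import cv2

    # Offline training with the OpenCV cascade tools (illustrative paths and counts only):
    #   opencv_createsamples -info positives.txt -vec eyes.vec -num 1000 -w 24 -h 24
    #   opencv_traincascade -data cascade/ -vec eyes.vec -bg negatives.txt \
    #                       -numPos 900 -numNeg 2000 -numStages 15 -w 24 -h 24

    def detect_fruit_eyes(roi_bgr, cascade_path="cascade/cascade.xml"):
        classifier = cv2.CascadeClassifier(cascade_path)
        gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
        # Each detection is (x, y, w, h); its centre is taken as a fruit-eye centre coordinate.
        eyes = classifier.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
        centres = [(x + w // 2, y + h // 2) for (x, y, w, h) in eyes]
        return len(centres) > 0, centres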
And S4, judging whether the acquired image contains a pineapple head according to the pineapple head color similarity and the fruit-eye features obtained from the image recognition processing in step S3.
The above description is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any equivalent substitution or modification of the inventive concept or technical solution of the present invention made by a person skilled in the art within the scope of the present disclosure falls within the protection scope of the present invention.

Claims (10)

1. A method for quickly positioning and identifying the head of a pineapple, characterized by comprising the following specific steps:
S1, collecting RGB images possibly containing pineapple information;
S2, converting the RGB image into an HSV image, performing threshold segmentation and image preprocessing on the HSV image to obtain a plurality of suspected target areas, positioning the position of the area with the largest area, and extracting the image area at that position in the RGB image as the region of interest;
the image preprocessing comprises filtering processing and edge filling;
S3, generating a color histogram of the region of interest, matching the color histogram with preset pineapple head color histograms in different environments, and judging whether the similarity reaches a set threshold value;
meanwhile, extracting the features of the pineapple head eyes in the region of interest, inputting the features of the pineapple head eyes into a classifier, judging whether the pineapple head eyes exist or not, and calculating the center coordinates of the pineapple eyes;
and S4, judging whether the acquired image contains a pineapple head according to the pineapple head color similarity and the pineapple head fruit-eye features obtained in step S3.
2. The method for rapidly positioning and identifying the head of the pineapple as claimed in claim 1, wherein: the threshold segmentation specifically includes:
firstly, setting 3 groups of thresholds according to the maximum color proportion of the detection target in the converted HSV image, namely a hue threshold range [H_down, H_up], a saturation threshold range [S_down, S_up] and a brightness threshold range [V_down, V_up]; then comparing the hue, saturation and brightness values of the HSV image with these 3 groups of threshold ranges respectively, specifically: if a pixel value falls within its threshold range, it is set to 255; if it falls outside the threshold range, it is set to 0; 255 represents a white pixel and 0 represents a black pixel.
3. The method for rapidly positioning and identifying the head of the pineapple as claimed in claim 2, wherein: the 3 sets of thresholds [H_down, H_up], [S_down, S_up], [V_down, V_up] are set as follows:
H_up=upper_H
H_down=lower_H
S_up=upper_S
S_down=lower_S
V_up=upper_V+I×b
V_down=lower_V+I×b
wherein the upper-limit and lower-limit color values of hue, saturation and brightness are represented by upper_H, lower_H, upper_S, lower_S, upper_V and lower_V respectively and are looked up from an HSV color comparison table; the brightness value fluctuates under different illumination intensities, so the brightness threshold needs to be compensated according to the illumination intensity; b is the illumination compensation coefficient, a set value, and I is the illumination intensity, calculated by the following formula:
I=hvN/(At)
where h is the Planck constant, v represents the frequency, A represents the area of the illuminated region, and N represents the number of photons falling onto A within the time interval t.
4. The method for rapidly positioning and identifying the head of the pineapple as claimed in claim 1, wherein: in the image preprocessing after the threshold segmentation, in order to reduce the channel depth of the image and improve the processing speed, the HSV image needs to be converted into a grayscale image, and the conversion formula is as follows:
Gray=0.3414×H+0.5478×S+0.1108×V
where Gray is the image Gray level, and H, S, V are the values of the hue, saturation, and brightness of the image, respectively.
5. The method for rapidly positioning and identifying the head of the pineapple as claimed in claim 4, wherein: in the image preprocessing of step S2, the converted grayscale image is subjected to filtering processing, and the method includes:
presetting a judgment value, and respectively grinding discontinuous white points on an X axis and a Y axis of the converted gray scale image, namely: if the continuity of the white point on the X axis or the Y axis is less than the judgment value, the white point on the X axis or the Y axis is ground flat, namely becomes black; if the continuity of the white point on the X axis or the Y axis is larger than or equal to the judgment value, the white point on the X axis or the Y axis is not flattened, namely, the white point is kept as white.
6. The method for rapidly positioning and identifying the head of the pineapple as claimed in claim 1, wherein: in the image preprocessing of step S2, the contour processing is performed on the image after the filtering processing, and the specific steps include:
firstly, rough X-axis edge processing is carried out on the filtered image: each pixel point is subtracted from its right neighbor and from its left neighbor, and the absolute value is taken as the result; if a white point is subtracted from a white point or a black point from a black point, the result is 0, and when a black point is subtracted from a white point or a white point from a black point, the absolute value obtained is 255; the points remaining after rough edge processing are the edge points of the image;
then, performing edge filling on the filtered image: firstly selecting an edge point and judging the edge in the image with the edge point as the circle center and a radius of 1; if continuous points exist in the small circle of radius 1, i.e. the number of pixel points with value 255 is greater than or equal to 3, the edge is continuous, and the selected edge point is taken as the center of a circle and a filling-circle radius is set for edge filling; repeating these steps until all edge points obtained by the rough edge processing have been judged, thereby obtaining the edge-filled image; and carrying out accurate edge detection on the edge-filled image and completely storing the contour.
7. The method for rapidly positioning and identifying the head of the pineapple as claimed in claim 1, wherein: in step S3, the colors represented by each pixel point in the region of interest are counted, a certain color partition interval is set, a color histogram is generated, the color histogram is matched with color histograms preset in different environments, and if the similarity reaches a threshold, it is indicated that the acquired image includes a pineapple head;
the method for identifying the features of the pineapple head and fruit eyes of the region of interest specifically comprises the following steps:
collecting pineapple head fruit-eye samples, dividing them into positive and negative samples, and training the positive and negative samples through an OpenCV (Open Source Computer Vision) training classifier, wherein the positive samples are the target samples to be detected and the negative samples are arbitrary other pictures; setting the picture size and the proportion of positive to negative samples, carrying out feature-extraction training on a large amount of pineapple head fruit-eye data, setting the classifier parameters, putting the positive and negative samples in the same directory for training, and obtaining the fruit-eye features after training.
8. The method for rapidly positioning and identifying the head of the pineapple as claimed in claim 7, wherein: the preset color histograms in different environments specifically include:
collecting color quantity values of different environments in the same color interval, wherein Xi denotes the environment value variable of the i-th environment and F(X1, X2, X3, ...) denotes the color quantity value of one color interval for different environments; by inputting a large amount of data in the same color interval, the weight values and bias values of the neural network are continuously modified, the deviation value is reduced, an approximating function is fitted, and a standard function histogram is generated.
9. A rapid positioning and identification system for implementing the rapid positioning and identification method of the pineapple head as claimed in any one of claims 1 to 8, characterized in that: the system comprises:
the acquisition module is used for acquiring images of the pineapples and transmitting the acquired images to the processing module;
the processing module is used for carrying out color statistics on the image transmitted by the acquisition module, extracting and matching the features of the head of the pineapple and outputting the information after image processing; the acquisition module is fixed on the support module;
the support module comprises parallel plates and a support frame; the parallel plate is placed at a certain height away from the ground through two support frames, the acquisition module is placed at one side of the center of the parallel plate close to the ground, and the processing module is placed at one side of the center of the parallel plate far away from the ground; the supporting module is used for supporting the processing module and providing a certain height and angle for the acquisition module to acquire images.
10. The rapid location and identification system of claim 9, wherein: the camera mirror surface of the acquisition module is parallel to the ground; the camera of the acquisition module acquires images with a resolution of 640×480 and a size of 350×350.
CN201810139803.3A 2018-02-08 2018-02-08 Method and system for quickly positioning and identifying pineapple head Active CN108416814B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810139803.3A CN108416814B (en) 2018-02-08 2018-02-08 Method and system for quickly positioning and identifying pineapple head

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810139803.3A CN108416814B (en) 2018-02-08 2018-02-08 Method and system for quickly positioning and identifying pineapple head

Publications (2)

Publication Number Publication Date
CN108416814A CN108416814A (en) 2018-08-17
CN108416814B (en) 2020-07-31

Family

ID=63128336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810139803.3A Active CN108416814B (en) 2018-02-08 2018-02-08 Method and system for quickly positioning and identifying pineapple head

Country Status (1)

Country Link
CN (1) CN108416814B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109376257A (en) * 2018-10-24 2019-02-22 贵州省机电研究设计院 Tealeaves recognition methods based on image procossing
CN109376746A (en) * 2018-10-25 2019-02-22 黄子骞 A kind of image identification method and system
CN109903275B (en) * 2019-02-13 2021-05-18 湖北工业大学 Fermented grain mildewing area detection method based on self-adaptive multi-scale filtering and histogram comparison
CN110415181B (en) * 2019-06-12 2023-07-14 勤耕仁现代农业科技发展(淮安)有限责任公司 Intelligent identification and grade judgment method for RGB (red, green and blue) images of flue-cured tobacco in open environment
CN112183230A (en) * 2020-09-09 2021-01-05 上海大学 Identification and central point positioning method for pears in natural pear orchard environment
CN113902909A (en) * 2021-10-15 2022-01-07 江阴仟亿日化包装有限公司 Cloud storage type appearance analysis system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201600330U (en) * 2009-09-23 2010-10-06 中国农业大学 System for recognizing and locating mature pineapples
CN105095880A (en) * 2015-08-20 2015-11-25 中国民航大学 LGBP encoding-based finger multi-modal feature fusion method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160203591A1 (en) * 2015-01-09 2016-07-14 Umm Al-Qura University System and process for monitoring the quality of food in a refrigerator

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201600330U (en) * 2009-09-23 2010-10-06 中国农业大学 System for recognizing and locating mature pineapples
CN105095880A (en) * 2015-08-20 2015-11-25 中国民航大学 LGBP encoding-based finger multi-modal feature fusion method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Construction and in-field experiment of low-cost binocular vision platform for pineapple harvesting robot; Li Bin et al.; 《农业工程学报》 (Transactions of the Chinese Society of Agricultural Engineering); 2012-10; vol. 28; pp. 188-192 *
Fruit Classification by Extracting Color Chromaticity, Shape and Texture Features: Towards an Application for Supermarkets; F. García et al.; 《IEEE LATIN AMERICA TRANSACTIONS》; 2016-07-31; vol. 14, no. 7; pp. 3434-3443 *
Simulation and Segmentation Techniques for Crop Maturity Identification of Pineapple Fruit; Muhammad Azmi Ahmed Nawawi et al.; 《Springer Nature Singapore Pte Ltd》; 2017; pp. 3-11 *
Field pineapple fruit recognition based on monocular vision (基于单目视觉的田间菠萝果实识别); Li Bin et al.; 《农业工程学报》 (Transactions of the Chinese Society of Agricultural Engineering); 2010-10; vol. 26, no. 10; pp. 345-349 *

Also Published As

Publication number Publication date
CN108416814A (en) 2018-08-17

Similar Documents

Publication Publication Date Title
CN108416814B (en) Method and system for quickly positioning and identifying pineapple head
CN110389127B (en) System and method for identifying metal ceramic parts and detecting surface defects
CN106651872B (en) Pavement crack identification method and system based on Prewitt operator
US8340420B2 (en) Method for recognizing objects in images
CN109447945B (en) Quick counting method for basic wheat seedlings based on machine vision and graphic processing
Ranjan et al. Detection and classification of leaf disease using artificial neural network
CN109255757B (en) Method for segmenting fruit stem region of grape bunch naturally placed by machine vision
CN111915704A (en) Apple hierarchical identification method based on deep learning
CN109409355B (en) Novel transformer nameplate identification method and device
CN107909081B (en) Method for quickly acquiring and quickly calibrating image data set in deep learning
CN108491788A (en) A kind of intelligent extract method and device for financial statement cell
CN108133216B (en) Nixie tube reading identification method capable of realizing decimal point reading based on machine vision
CN102426649A (en) Simple steel seal digital automatic identification method with high accuracy rate
Masood et al. Plants disease segmentation using image processing
CN103914708A (en) Food variety detection method and system based on machine vision
CN109977899B (en) Training, reasoning and new variety adding method and system for article identification
Patki et al. Cotton leaf disease detection & classification using multi SVM
CN113222959B (en) Fresh jujube wormhole detection method based on hyperspectral image convolutional neural network
CN108460344A (en) Dynamic area intelligent identifying system in screen and intelligent identification Method
CN110807367A (en) Method for dynamically identifying personnel number in motion
CN113744191A (en) Automatic cloud detection method for satellite remote sensing image
CN111665199A (en) Wire and cable color detection and identification method based on machine vision
CN112581452A (en) Industrial accessory surface defect detection method and system, intelligent device and storage medium
CN113610185B (en) Wood color sorting method based on dominant hue identification
CN114581824A (en) Method for identifying abnormal behaviors of sorting center based on video detection technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant