CN110443811B - Full-automatic segmentation method for complex background leaf image

Full-automatic segmentation method for complex background leaf image

Info

Publication number: CN110443811B (granted publication; first published as CN110443811A)
Application number: CN201910683687.6A
Authority: CN (China)
Prior art keywords: image, background, segmentation, point, mark
Legal status: Active
Original language: Chinese (zh)
Inventors: 高理文, 林小桦
Original and current assignee: Guangzhou University of Chinese Medicine
Application filed by Guangzhou University of Chinese Medicine

Classifications

    • G PHYSICS / G06 COMPUTING; CALCULATING OR COUNTING / G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL / G06T7/00 Image analysis / G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/155 Segmentation; Edge detection involving morphological operators
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement / G06T2207/30 Subject of image; Context of image processing / G06T2207/30181 Earth observation / G06T2207/30188 Vegetation; Agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a fully automatic segmentation method for complex-background leaf images. The method computes the gradient directly on the color image, then exploits the fact that the leaf region contains veins and obtains a more accurate foreground marker image by effectively enhancing and extracting the veins. Finally, the image is segmented with the marker-controlled watershed method. The method largely overcomes the difficulty of fully automatic segmentation of complex-background leaf images and promotes the development of complex-background leaf image recognition. Although designed for leaves, the method also offers a useful reference for the fully automatic segmentation of images containing a single target against a complex background in which the target region has vein-like texture.

Description

Full-automatic segmentation method for complex background leaf image
Technical Field
The invention relates to the field of image processing, and in particular to a fully automatic segmentation method for complex-background leaf images.
Background
Medicinal plants are the main source of traditional Chinese medicinal materials and the material basis with which traditional Chinese medicine treats disease. In recent years, however, medicinal plant resources have shrunk markedly because of the deteriorating ecological environment, and strengthening the protection of medicinal plants has become very urgent.
If the geographical distribution of endangered medicinal plants could be surveyed in depth and a geographical information resource library built, it would strongly support the protection, introduction and utilization of wild medicinal plants. However, because ordinary people find it difficult to identify plant species in a complicated field environment, current resource surveys can only sample selected points and remain far from a comprehensive, in-depth investigation. Even so, they already consume enormous manpower and material resources.
If plant leaf images are photographed and then identified by machine, many people with only a basic background can identify the current plant species fairly accurately in the field through a simple mobile phone operation. As with most image recognition problems, segmentation of the leaf image is the first difficulty. Existing methods that accurately segment complex-background images of medicinal plant leaves require manual participation.
Many studies on machine identification of plants from leaf images have been reported. By image acquisition mode they fall into two categories. In the first, the leaf is picked and then photographed or scanned, giving a simple-background leaf image; such images are easy to segment with high accuracy, but the plant is damaged. Most existing research uses this mode. In the second, the leaf is photographed directly on the branch, giving a complex-background leaf image; this causes no damage to the plant, but besides the target leaf the image contains background objects such as branches, soil and other leaves, so segmentation is difficult, segmentation accuracy is low, and the accuracy of subsequent classification is seriously affected.
Because of its non-destructive advantage, machine identification based on complex-background leaf images is important to study, and accurate segmentation of the complex-background leaf image is the first problem to be solved. Although current deep learning classification methods can map images directly to classes, plant leaf image samples are hard to collect in large numbers: only a minority of species are common, most are difficult to find, and some cannot be found at all. In this situation the applicability of deep learning classifiers remains to be verified. Moreover, whichever classification method is used afterwards, accurately segmenting the image to remove the background, i.e. the interference, is beneficial.
There are many image segmentation methods. Some require human intervention. In earlier work we studied this problem in depth and proposed a high-accuracy, manually assisted segmentation method for complex-background leaf images, but the need for human intervention still hurts the user experience. Clearly, a fully automatic method would be more popular.
There are also many automatic segmentation methods, notably:
OTSU is a threshold segmentation method for binarizing images, proposed by the Japanese scholar Otsu in 1979. It selects the optimal segmentation threshold by maximizing the between-class variance. The method is very sensitive to noise and to target size, and only produces a good segmentation on images with obvious contrast between foreground and background.
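For illustration only (not part of the patent text), a minimal Python sketch of Otsu thresholding with OpenCV is shown below; the synthetic test image is an assumption made purely to keep the example self-contained:

    import cv2
    import numpy as np

    # Synthetic single-channel image with two intensity populations (illustrative data only).
    gray = np.concatenate([np.full((100, 100), 60, np.uint8),
                           np.full((100, 100), 180, np.uint8)], axis=1)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)

    # Otsu's method chooses the threshold that maximizes the between-class variance.
    level, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    print("Otsu threshold:", level)  # bw is the resulting binary segmentation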
Mean shift is a hill-climbing algorithm based on kernel density estimation and can be used for clustering, image segmentation, tracking and so on. For image segmentation the task is to find the class label of each pixel, which depends on the cluster it belongs to in feature space; every cluster needs a class center, and mean shift takes the local maximum of the probability density as that center. Like OTSU, mean shift segments well when foreground and background contrast strongly; otherwise the effect is poor.
The GraphCut method relates the image segmentation problem to the min-cut problem on a graph. The essence of graph-theoretic segmentation is to remove specific edges so as to minimize a cost and divide the graph into several sub-graphs, thereby achieving segmentation. The cost contains a region term and a boundary term. GraphCut works well on images whose pixel values differ significantly, but poorly on complex-background images and on images whose foreground and background are similar.
FCN is trained pixel-to-pixel and end-to-end and classifies an image at the pixel level, thereby solving the semantic segmentation problem. If time and memory limits are ignored, it can in theory accept input images of any size. Its drawbacks are that ① the result is not fine enough and is insensitive to details in the image, and ② each pixel is classified on its own, without fully considering the relations between pixels.
Marker-controlled watershed segmentation is another classical method. It is not limited by the shape of the target region and therefore suits the segmentation of leaf images of all shapes; it also copes well with complex backgrounds. As long as the input foreground marker image and background marker image are accurate, it gives a good segmentation result. The difficulty is how to obtain accurate foreground and background marker images; often this can only be done with manual participation.
Disclosure of Invention
The invention provides a fully automatic segmentation method for complex-background leaf images, which achieves fully automatic segmentation of the leaves of traditional Chinese medicinal plants against a complex background.
To solve the above technical problem, the technical solution of the invention is as follows:
A fully automatic segmentation method for a complex-background leaf image comprises the following steps:
S1: performing a preliminary, simple segmentation of the original leaf image with the maximum between-class variance method;
S2: converting the original leaf image into the HSI and Lab color models to obtain component images, detecting a background mark in the maximum between-class variance segmentation result of each component image, and selecting one of them as the optimal background mark according to a preset criterion;
S3: detecting a foreground mark on the original leaf image;
S4: combining the foreground mark and the background mark, and segmenting with the marker-controlled watershed method to obtain the final segmented image (a rough sketch of this pipeline is given below).
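For illustration only, a rough Python skeleton of the S1 to S4 control flow is sketched below; every function is a stub standing in for the procedures detailed in the following sections, and all names are assumptions rather than patent text. The point of the skeleton is the order of the stages: the S1 result is accepted when its checks pass, otherwise S2 to S4 are executed.

    import numpy as np

    def otsu_pre_segmentation(rgb):
        # S1 stub: return a binary mask, or None when the checks in S1.3-S1.5 reject the result.
        return None

    def detect_best_background_mark(rgb):
        # S2 stub: return (background mark, success flag); here the image borders are used.
        mark = np.zeros(rgb.shape[:2], np.uint8)
        mark[[0, -1], :] = 1
        mark[:, [0, -1]] = 1
        return mark, True

    def detect_foreground_mark(rgb, best_bg, bg_ok):
        # S3 stub: return a foreground mark; here a central rectangle is used.
        h, w = rgb.shape[:2]
        mark = np.zeros((h, w), np.uint8)
        mark[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 1
        return mark

    def marker_watershed(rgb, fg_mask, bg_mask, bg_ok):
        # S4 stub: return the final binary segmentation.
        return fg_mask

    def segment_leaf(rgb):
        logic_image = otsu_pre_segmentation(rgb)               # S1: preliminary OTSU segmentation
        if logic_image is not None:
            return logic_image                                 # S1 passed its checks; accept it
        best_bg, bg_ok = detect_best_background_mark(rgb)      # S2: optimal background mark
        fg_mask = detect_foreground_mark(rgb, best_bg, bg_ok)  # S3: vein-based foreground mark
        return marker_watershed(rgb, fg_mask, best_bg, bg_ok)  # S4: marker-controlled watershed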
Preferably, step S1 specifically includes the following steps:
S1.1: reduce the original color RGB leaf image to a preset size, obtaining the image currentImage, specifically:
compute ratio = (1200 × 900) / (number of rows of the original leaf image × number of columns of the original leaf image);
if the ratio is less than 1, shrink the image by the square root of the ratio as the scale factor; if the ratio is not less than 1, do not shrink; this gives the image currentImage. In practice, if other scales are needed, the parameters used to compute the ratio can be adjusted;
S1.2: convert currentImage into the HSI and Lab color model representations, and judge whether the foreground color and the background color differ strongly in the H component image, the S component image, the a component image and the b component image; if so, record the corresponding winning coefficient coef and judgment flag, and output the segmented image BW; specifically:
S1.2.1: segment each component image with the maximum between-class variance method (OTSU) to obtain the corresponding segmented image BW;
S1.2.2: compute frameCof, the proportion of pixels with value "1" among the pixels of the four borders of each segmented image BW;
S1.2.3: compute area, the number of pixels with value "1" in each segmented image BW;
S1.2.4: delete all small regions in each segmented image BW, keeping only the single region with the largest area;
S1.2.5: compute targetArea, the number of pixels with value "1" in the image BW after step S1.2.4;
S1.2.6: take the ratio of targetArea to area as foregroundCoef;
S1.2.7: if frameCof < 0.1 and foregroundCoef > 0.9, the foreground and background colors are considered to differ strongly; set flag to 1 and take (1 − frameCof) × foregroundCoef as the winning coefficient coef; if frameCof ≥ 0.1 or foregroundCoef ≤ 0.9, invert the image BW obtained after S1.2.4, repeat steps S1.2.2 to S1.2.6, and judge again whether frameCof < 0.1 and foregroundCoef > 0.9; if so, set flag to 1 and take (1 − frameCof) × foregroundCoef as the winning coefficient coef, otherwise set flag to 0;
S1.3: if the judgment flag obtained for every component image in step S1.2 is 0, go to step S2; if at least one judgment flag is 1, take the segmented image BW with the largest winning coefficient coef as the image logicImage;
S1.4: detect the number of positions at which the foreground region of the image logicImage extends to the four borders; if there are more than 3 such positions, go to step S2; otherwise go to step S1.5;
Generally, the whole leaf is captured when it is photographed, so the leaf is unlikely to touch the border. Occasionally the petiole is too long or the slender leaf tip extends to the border, which is tolerated. Thus, if the foreground touches the border at more than 3 positions, this logicImage must not be taken as the segmentation result.
S1.5: crop a sub-image logicImageCrop in the middle of the image logicImage, whose length is 0.6 times the length of logicImage and whose width occupies the same proportion, with its four borders parallel to the four borders of logicImage and the two center points coincident; compute logicImageCropArea, the number of pixels with value "1" in the sub-image logicImageCrop; compute logicImageArea, the number of pixels with value "1" in logicImage; compute the ratio of logicImageCropArea to logicImageArea; if the ratio is not greater than 0.5, go to step S2; if it is greater than 0.5, go to step S1.6;
The purpose of this step is to detect whether the foreground region in logicImage is too small and located near the image border; if so, the segmentation result is discarded. When a leaf image is taken, the leaf is usually placed at the center of the image and occupies most of it, so this situation should not arise.
S1.6: perform a morphological closing on the image logicImage;
S1.7: fill the holes in the image logicImage obtained in step S1.6 to obtain the segmentation result.
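For illustration only, the per-component check of S1.2 can be sketched in Python with OpenCV and NumPy as below; the input component is assumed to be a single-channel uint8 image, and the helper names are illustrative assumptions, not identifiers from the patent:

    import cv2
    import numpy as np

    def frame_coef(bw):
        # Proportion of '1' pixels on the four border rows/columns of a binary image (S1.2.2).
        border = np.concatenate([bw[0, :], bw[-1, :], bw[:, 0], bw[:, -1]])
        return float(border.mean())

    def keep_largest_region(bw):
        # Keep only the connected region with the largest area (S1.2.4).
        n, labels, stats, _ = cv2.connectedComponentsWithStats(bw.astype(np.uint8))
        if n <= 1:
            return np.zeros_like(bw, dtype=np.uint8)
        largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
        return (labels == largest).astype(np.uint8)

    def check_component(component):
        # S1.2.1-S1.2.7 for one component image; returns (flag, winning coefficient, BW).
        _, bw = cv2.threshold(component, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        bw = (bw > 0).astype(np.uint8)
        candidate = bw
        for _ in range(2):                         # second pass repeats the test on the inverted image
            fc = frame_coef(candidate)
            area = int(candidate.sum())
            target = keep_largest_region(candidate)
            fg_coef = target.sum() / max(area, 1)
            if fc < 0.1 and fg_coef > 0.9:         # foreground and background differ strongly enough
                return 1, (1 - fc) * fg_coef, target
            candidate = (1 - target).astype(np.uint8)   # S1.2.7: invert the image kept after S1.2.4
        return 0, 0.0, bw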
Preferably, step S2 specifically includes the following steps:
S2.1: set a credible-foreground scale parameter para;
S2.2: detect a background mark on each segmented image BW corresponding to the H component image, the S component image, the a component image and the b component image, specifically:
S2.2.1: in the middle of the segmented image BW, crop a sub-image credibleForeground whose length is (para × 2) times the length of BW and whose width occupies the same proportion, with its four borders parallel to the four borders of BW and its center point coincident with the center point of BW; when a leaf image is taken, the leaf is usually placed at the center of the image and occupies most of it, so the target leaf is large and presumably covers this small central area.
S2.2.2: in the sub-image credibleForeground, compute the proportion of pixels with value "1" as coefOfCredibleForeground;
S2.2.3: if coefOfCredibleForeground lies in the interval [0.2, 0.8], the segmentation accuracy of BW is considered questionable and BW is unsuitable for detecting a background mark; end the detection process and return a detection-failure message;
S2.2.4: if coefOfCredibleForeground is smaller than 0.2, store the segmented image BW as the image backgroundCandidate and store the inverted BW as the image BW; if coefOfCredibleForeground is not smaller than 0.2, store the inverted BW as the image backgroundCandidate;
S2.2.5: compute frameCof, the proportion of pixels with value "1" among the pixels of the four borders of the image BW;
S2.2.6: if frameCof is greater than 0.6, the segmentation accuracy of the image BW is considered questionable; end the detection process and return a detection-failure message;
S2.2.7: apply a mathematical-morphology erosion to the image backgroundCandidate;
S2.2.8: set the pixels on the four borders of the backgroundCandidate obtained in S2.2.7 to "1";
S2.2.9: on the image backgroundCandidate obtained after S2.2.8, keep only the region connected to the top-left corner point of the image, delete the other regions, and record the result as the image background; this gives the required background mark, but holes may still exist in it. The following three steps eliminate the holes and save time in the marker-controlled watershed segmentation.
S2.2.10: invert the image background and store it as the image reverseBackground;
S2.2.11: on the image reverseBackground, keep only the region connected to the center point of the image, delete the other regions, and record the result as the image reverseBackground2;
S2.2.12: store the inverted reverseBackground2 as the image background;
S2.2.13: in the middle of reverseBackground2, crop a sub-image credibleForegroundClean whose length is (para × 2) times the length of the component image and whose width occupies the same proportion, with its four borders parallel to the four borders of the original image and its center point coincident with the center point of the original image;
S2.2.14: in the sub-image credibleForegroundClean, compute the proportion of pixels with value "1" as coefOfCredibleForegroundClean;
S2.2.15: compute backgroundCoef = 1 − (1.01 − coefOfCredibleForegroundClean) × frameCof, and return the image background, backgroundCoef and a detection-success message;
S2.3: from the four background-mark detection results obtained in S2.2, select the image background that was detected successfully and has the largest backgroundCoef as the image bestBackgroundDetect, and set bestBackgroundDetectFlag to true; if all four detection results return failure messages, set bestBackgroundDetectFlag to false;
S2.4: if bestBackgroundDetectFlag is true, correct the bestBackgroundDetect image as follows: erode bestBackgroundDetect, set all pixels on its four borders to "1", keep only the region connected to the top-left corner of the image, and delete the remaining regions.
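For illustration only, a condensed Python sketch of the background-mark detection of S2.2 for a single OTSU-segmented component image is shown below; bw is assumed to be a 0/1 uint8 array, the para value and the erosion kernel size are illustrative assumptions, and the helper names are not taken from the patent:

    import cv2
    import numpy as np

    def central_crop(img, para):
        # Central sub-image whose sides are (para * 2) of the full size (S2.2.1 / S2.2.13).
        h, w = img.shape[:2]
        ch, cw = int(h * para * 2), int(w * para * 2)
        top, left = (h - ch) // 2, (w - cw) // 2
        return img[top:top + ch, left:left + cw]

    def keep_region_containing(bw, seed_rc):
        # Keep only the connected region containing the seed pixel (row, col).
        n, labels = cv2.connectedComponents(bw.astype(np.uint8))
        lab = labels[seed_rc]
        if lab == 0:
            return np.zeros_like(bw, dtype=np.uint8)
        return (labels == lab).astype(np.uint8)

    def detect_background_mark(bw, para=0.125):
        # Sketch of S2.2 for one component image; returns (background mark, backgroundCoef) or (None, 0).
        coef_fg = float(central_crop(bw, para).mean())              # S2.2.2
        if 0.2 <= coef_fg <= 0.8:                                   # S2.2.3: accuracy questionable
            return None, 0.0
        if coef_fg < 0.2:                                           # S2.2.4
            background_candidate, bw = bw.copy(), (1 - bw).astype(np.uint8)
        else:
            background_candidate = (1 - bw).astype(np.uint8)
        frame_cof = float(np.concatenate([bw[0], bw[-1], bw[:, 0], bw[:, -1]]).mean())  # S2.2.5
        if frame_cof > 0.6:                                         # S2.2.6
            return None, 0.0
        background_candidate = cv2.erode(background_candidate, np.ones((5, 5), np.uint8))  # S2.2.7
        background_candidate[[0, -1], :] = 1                        # S2.2.8
        background_candidate[:, [0, -1]] = 1
        background = keep_region_containing(background_candidate, (0, 0))                  # S2.2.9
        center = (bw.shape[0] // 2, bw.shape[1] // 2)
        rev = keep_region_containing((1 - background).astype(np.uint8), center)            # S2.2.10-11
        background = (1 - rev).astype(np.uint8)                     # S2.2.12: hole-free background mark
        coef_fg_clean = float(central_crop(rev, para).mean())       # S2.2.13-14
        background_coef = 1 - (1.01 - coef_fg_clean) * frame_cof    # S2.2.15
        return background, background_coef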
Preferably, step S3 includes the steps of:
S3.1: initialize the foreground mark image foregroundMask and the background mark image backgroundMask;
S3.2: compute the gradient directly on the color RGB image currentImage;
S3.3: enhance the veins;
S3.4: segment the veins;
S3.5: merge the main veins;
S3.6: merge the scattered, fine veins.
Preferably, step S3.1 comprises the steps of:
S3.1.1: set a near-distance criterion coefficient nearDistancePara, and set the near-distance criterion nearDistance to the average of the image length and width multiplied by nearDistancePara, rounded up; set a near-border distance criterion coefficient nearBoundaryPara, set the near-border row-count criterion nearBoundaryRowDistance to the total number of image rows multiplied by nearBoundaryPara, rounded up, and set the near-border column-count criterion nearBoundaryColDistance to the total number of image columns multiplied by nearBoundaryPara, rounded up; set a very-near-border distance criterion coefficient veryNearBoundaryPara, set the very-near-border row-count criterion veryNearBoundaryRowDistance to the total number of image rows multiplied by veryNearBoundaryPara, rounded up, and set the very-near-border column-count criterion veryNearBoundaryColDistance to the total number of image columns multiplied by veryNearBoundaryPara, rounded up; set a dodge-border distance coefficient dodgeBoundaryPara (for example 0.2; its value must be greater than nearBoundaryPara), set the dodge-border row count dodgeBoundaryRowDistance to the total number of image rows multiplied by dodgeBoundaryPara, rounded up, and set the dodge-border column count dodgeBoundaryColDistance to the total number of image columns multiplied by dodgeBoundaryPara, rounded up;
S3.1.2: initialize the foreground mark image foregroundMask, specifically: create a binary image foregroundMask with the same size as the color RGB image currentImage and all pixel values "0", and reset to "1" the pixel values inside a central rectangular area whose length is (para × 2) times the length of foregroundMask and whose width occupies the same proportion, with the four borders of the rectangle parallel to the four borders of foregroundMask and the two center points coincident;
S3.1.3: initialize the background mark image backgroundMask, specifically: create a binary image backgroundMask with the same size as the color RGB image currentImage and all pixel values "0", and reset the pixel values on the four borders of backgroundMask to "1".
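For illustration only, the initialization of S3.1 can be sketched as below; except for dodge_boundary_para = 0.2, which is the example value given in S3.1.1, the default coefficient values are illustrative assumptions:

    import math
    import numpy as np

    def init_markers(shape, para, near_distance_para=0.05, near_boundary_para=0.1,
                     very_near_boundary_para=0.02, dodge_boundary_para=0.2):
        # Sketch of S3.1.1-S3.1.3 for an image of the given (rows, cols) shape.
        rows, cols = shape
        near_distance = math.ceil((rows + cols) / 2 * near_distance_para)
        near_rows = math.ceil(rows * near_boundary_para)
        near_cols = math.ceil(cols * near_boundary_para)
        very_near_rows = math.ceil(rows * very_near_boundary_para)
        very_near_cols = math.ceil(cols * very_near_boundary_para)
        dodge_rows = math.ceil(rows * dodge_boundary_para)
        dodge_cols = math.ceil(cols * dodge_boundary_para)

        foreground_mask = np.zeros((rows, cols), np.uint8)          # S3.1.2: central rectangle
        ch, cw = int(rows * para * 2), int(cols * para * 2)
        top, left = (rows - ch) // 2, (cols - cw) // 2
        foreground_mask[top:top + ch, left:left + cw] = 1

        background_mask = np.zeros((rows, cols), np.uint8)          # S3.1.3: the four borders
        background_mask[[0, -1], :] = 1
        background_mask[:, [0, -1]] = 1
        return (foreground_mask, background_mask, near_distance,
                near_rows, near_cols, very_near_rows, very_near_cols, dodge_rows, dodge_cols)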
Preferably, step S3.2 comprises the following steps:
compute the gradient directly on the color RGB image currentImage to obtain the gradient magnitude map VG and the gradient angle map A, specifically:
S3.2.1: compute the partial derivatives in the x and y directions: let the coordinates of any point on currentImage be (x, y) and its pixel value be (R, G, B), where R, G and B are the red, green and blue component values; find the six partial derivatives ∂R/∂x, ∂G/∂x, ∂B/∂x, ∂R/∂y, ∂G/∂y and ∂B/∂y (the Sobel operator is used when computing the six partial derivatives), and then compute
g_xx = (∂R/∂x)² + (∂G/∂x)² + (∂B/∂x)²,
g_yy = (∂R/∂y)² + (∂G/∂y)² + (∂B/∂y)²,
g_xy = (∂R/∂x)(∂R/∂y) + (∂G/∂x)(∂G/∂y) + (∂B/∂x)(∂B/∂y);
S3.2.2: find
θ1 = (1/2) · arctan( 2·g_xy / (g_xx − g_yy) ),
where arctan is the arctangent function;
S3.2.3: find
θ2 = θ1 + π/2;
S3.2.4: find F(θ1) and F(θ2), where
F(θ) = { (1/2) · [ (g_xx + g_yy) + (g_xx − g_yy)·cos 2θ + 2·g_xy·sin 2θ ] }^(1/2);
S3.2.5: if F(θ1) is greater than or equal to F(θ2), take F(θ1) as the gradient magnitude F(x, y) and θ1 as the gradient angle θ(x, y); otherwise take F(θ2) as the gradient magnitude F(x, y) and θ2 as the gradient angle θ(x, y); store the gradient magnitude map VG and the gradient angle map A.
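For illustration only, the color gradient of S3.2 can be sketched in Python as below, assuming the multi-channel gradient formulas reconstructed above; arctan2 is used in place of arctan for numerical robustness, and the Sobel kernel size is an assumption:

    import cv2
    import numpy as np

    def color_gradient(rgb):
        # Sketch of S3.2: gradient magnitude map VG and gradient angle map A computed on the RGB image.
        rgb = rgb.astype(np.float64)
        gxx = np.zeros(rgb.shape[:2])
        gyy = np.zeros(rgb.shape[:2])
        gxy = np.zeros(rgb.shape[:2])
        for c in range(3):                                    # sum over the R, G, B channels
            dx = cv2.Sobel(rgb[:, :, c], cv2.CV_64F, 1, 0, ksize=3)   # S3.2.1: Sobel partial derivatives
            dy = cv2.Sobel(rgb[:, :, c], cv2.CV_64F, 0, 1, ksize=3)
            gxx += dx * dx
            gyy += dy * dy
            gxy += dx * dy
        theta1 = 0.5 * np.arctan2(2 * gxy, gxx - gyy)         # S3.2.2
        theta2 = theta1 + np.pi / 2                           # S3.2.3

        def f(theta):                                         # S3.2.4
            return np.sqrt(np.maximum(
                0.5 * ((gxx + gyy) + (gxx - gyy) * np.cos(2 * theta) + 2 * gxy * np.sin(2 * theta)), 0))

        f1, f2 = f(theta1), f(theta2)
        vg = np.where(f1 >= f2, f1, f2)                       # S3.2.5: keep the larger response
        a = np.where(f1 >= f2, theta1, theta2)
        return vg, a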
Preferably, step S3.3 comprises the following steps:
S3.3.1: if bestBackgroundDetectFlag is true, examine the value of every point in bestBackgroundDetect; if the value is "1", reset the value of the point at the same position in the gradient magnitude map VG to "0"; this step prevents pixels of the background region from being mixed back into the foreground after enhancement;
S3.3.2: over the gradient magnitude map VG, find the threshold high that cuts off the 1% of points with the largest pixel values and the threshold low that cuts off the 1% of points with the smallest pixel values; reset all points with pixel values greater than high to high, and all points with pixel values less than low to low; this removes points with extreme pixel values from the gradient magnitude map, which benefits the subsequent OTSU segmentation;
S3.3.3: in the gradient angle map A, take a small-radius disc around each point as its neighborhood and compute the local standard deviation, giving the standard-deviation image stdOfA; the smaller the standard deviation in the local range, the more consistent the angles of the gradient vectors near that point;
S3.3.4: for each point on the gradient magnitude map VG with pixel value α, reset α to α / (stdOfA + 0.1); this enhances the veins in VG.
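For illustration only, S3.3 can be sketched as below; a square window approximates the small disc neighborhood of S3.3.3, and the window radius is an illustrative assumption:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def enhance_veins(vg, angle, best_background=None, radius=2):
        # Sketch of S3.3.1-S3.3.4 on the gradient magnitude map vg and gradient angle map angle.
        vg = vg.copy()
        if best_background is not None:                          # S3.3.1: zero out the known background
            vg[best_background == 1] = 0
        high, low = np.percentile(vg, 99), np.percentile(vg, 1)  # S3.3.2: clip the extreme 1% tails
        vg = np.clip(vg, low, high)
        size = 2 * radius + 1                                    # S3.3.3: local std of the gradient angle
        mean = uniform_filter(angle, size)
        mean_sq = uniform_filter(angle * angle, size)
        std_of_a = np.sqrt(np.maximum(mean_sq - mean * mean, 0))
        return vg / (std_of_a + 0.1)                             # S3.3.4: enhance where angles agree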
Preferably, step S3.4 comprises the following steps:
S3.4.1: at the center of the gradient magnitude map VG, crop a sub-image VGCrop whose length is (para × 2) times the length of VG and whose width occupies the same proportion, with its four borders parallel to the four borders of VG and its center point coincident with the center point of VG;
S3.4.2: compute the maximum between-class variance segmentation threshold level of VGCrop;
S3.4.3: using level as the threshold, threshold the gradient magnitude map VG to obtain OtsuBW; the purpose is to find the veins;
S3.4.4: delete the connected regions in OtsuBW whose areas are too small;
S3.4.5: dilate OtsuBW in order to reconnect broken veins;
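For illustration only, S3.4 can be sketched as below; the para, min_area and dilation radius values are illustrative assumptions (the example in the detailed description deletes regions with area 10 or less):

    import cv2
    import numpy as np
    from skimage.morphology import remove_small_objects

    def segment_veins(vg, para=0.125, min_area=10, dilate_radius=2):
        # Sketch of S3.4.1-S3.4.5 on the enhanced gradient magnitude map vg.
        h, w = vg.shape
        ch, cw = int(h * para * 2), int(w * para * 2)
        top, left = (h - ch) // 2, (w - cw) // 2
        vg8 = cv2.normalize(vg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        level, _ = cv2.threshold(vg8[top:top + ch, left:left + cw], 0, 255,
                                 cv2.THRESH_BINARY + cv2.THRESH_OTSU)     # S3.4.1-S3.4.2: OTSU on VGCrop
        otsu_bw = vg8 > level                                             # S3.4.3: threshold the whole map
        otsu_bw = remove_small_objects(otsu_bw, min_size=min_area)        # S3.4.4: drop tiny regions
        kernel = np.ones((2 * dilate_radius + 1, 2 * dilate_radius + 1), np.uint8)
        return cv2.dilate(otsu_bw.astype(np.uint8), kernel)               # S3.4.5: reconnect broken veins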
Preferably, step S3.5 comprises the following steps:
S3.5.1: copy foregroundMask as the backup foregroundMaskBackup;
S3.5.2: store the logical OR of foregroundMask and OtsuBW as foregroundMask;
S3.5.3: on foregroundMask, keep only the region connected to the center point of the image and delete the other regions (see the sketch after these steps);
S3.5.4: copy foregroundMask as the backup foregroundMaskForDel;
S3.5.5: detect whether foregroundMask has any point with value "1" that falls within the first nearBoundaryRowDistance rows, the last nearBoundaryRowDistance rows, the first nearBoundaryColDistance columns or the last nearBoundaryColDistance columns; if not, the newly merged foreground region is considered far from the image border and is very likely the main veins passing through the central region of the image, so go to S3.6;
S3.5.6: apply a small-scale erosion to foregroundMask, keep only the region connected to the center point of the image, and delete the other regions;
S3.5.7: again detect whether foregroundMask has any point with value "1" that falls within the first veryNearBoundaryRowDistance rows, the last veryNearBoundaryRowDistance rows, the first veryNearBoundaryColDistance columns or the last veryNearBoundaryColDistance columns; if so, the foreground region is still very close to the image border, meaning the previous erosion has not achieved its trimming purpose and another erosion is needed; if not, go to S3.5.11;
S3.5.8: apply a small-scale erosion to foregroundMask, keep only the region connected to the center point of the image, and delete the other regions;
S3.5.9: again detect whether foregroundMask has any point with value "1" that falls within the first veryNearBoundaryRowDistance rows, the last veryNearBoundaryRowDistance rows, the first veryNearBoundaryColDistance columns or the last veryNearBoundaryColDistance columns; if so, the foreground region is still very close to the image border, meaning the previous erosion has still not achieved its purpose and special treatment is needed; if not, go to S3.5.11;
S3.5.10: reset to "0" all pixels of foregroundMask within the first dodgeBoundaryRowDistance rows, the last dodgeBoundaryRowDistance rows, the first dodgeBoundaryColDistance columns and the last dodgeBoundaryColDistance columns;
S3.5.11: store the logical OR of foregroundMask and foregroundMaskBackup as foregroundMask, completing the merging of the main veins;
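For illustration only, the core merge of S3.5.1 to S3.5.3, together with the keep-the-region-connected-to-the-center operation used repeatedly in S3.5 and S3.6, can be sketched as below; the boundary-trimming steps S3.5.5 to S3.5.11 are omitted from this sketch and the function names are assumptions:

    import cv2
    import numpy as np

    def keep_center_region(bw):
        # Keep only the connected region that contains the image center point.
        n, labels = cv2.connectedComponents(bw.astype(np.uint8))
        center = (bw.shape[0] // 2, bw.shape[1] // 2)
        lab = labels[center]
        if lab == 0:
            return np.zeros_like(bw, dtype=np.uint8)
        return (labels == lab).astype(np.uint8)

    def merge_main_veins(foreground_mask, otsu_bw):
        # Sketch of S3.5.1-S3.5.3: OR the vein map into the foreground mark and keep the
        # part connected to the image center; returns (merged mask, backup of the old mask).
        backup = foreground_mask.copy()                                      # S3.5.1
        merged = np.logical_or(foreground_mask, otsu_bw).astype(np.uint8)    # S3.5.2: logical OR
        merged = keep_center_region(merged)                                  # S3.5.3
        return merged, backup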
Preferably, step S3.6 comprises the following steps:
S3.6.1: in OtsuBW, delete the foreground regions recorded in foregroundMaskForDel and store the result as the scattered-vein candidate image candidates; the deletion is: for any point in OtsuBW, if the point at the same position in foregroundMaskForDel has value "1", reset that point in OtsuBW to "0";
S3.6.2: scan the whole image candidates and find the distance DN between each region and the center point of the image; if DN is less than nearDistance, mark the region and record the row number row and column number col of the point of the region nearest to the image center; at the same time, judge whether the horizontal distance between each region and the four image borders is less than nearBoundaryRowDistance and whether the vertical distance is less than nearBoundaryColDistance; if so, mark the region as too close to the border;
S3.6.3: copy candidates as the backup avoidRegions;
S3.6.4: delete from candidates all regions marked as too close to the border;
S3.6.5: for each region of candidates whose distance from the image center point is less than nearDistance, draw a line segment towards the image center point, starting from the point of the region nearest to the center (the row and column numbers recorded in S3.6.2), while avoiding other regions, defined as follows: let (x, y) be the coordinates of any point on the line segment; if avoidRegions(x, y) is "1" and the point is not the starting point of the segment, cancel the line segment; in addition, during drawing, if foregroundMask(x, y) is detected to be "1", the region is considered connected and the task is complete (see the sketch after this list);
S3.6.6: store the logical OR of foregroundMask and candidates as foregroundMask;
S3.6.7: on foregroundMask, keep only the region connected to the center point of the image.
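For illustration only, the connecting-line rule of S3.6.5 can be sketched as below; the simple linear point walk is an assumption standing in for whatever line-drawing routine the implementation actually uses:

    import numpy as np

    def draw_line_towards_center(foreground_mask, avoid_regions, start_rc):
        # Walk from start_rc towards the image center; cancel the whole segment if it enters
        # avoid_regions, and stop as soon as an existing foreground pixel is reached (S3.6.5).
        rows, cols = foreground_mask.shape
        center = np.array([rows // 2, cols // 2], dtype=float)
        start = np.array(start_rc, dtype=float)
        n_steps = int(np.ceil(np.abs(center - start).max())) + 1
        drawn = []
        for t in np.linspace(0.0, 1.0, n_steps):
            r, c = np.rint(start + t * (center - start)).astype(int)
            if avoid_regions[r, c] and (r, c) != tuple(start_rc):
                return foreground_mask                     # crossed another region: cancel the segment
            if foreground_mask[r, c]:
                break                                      # already connected: task complete
            drawn.append((r, c))
        out = foreground_mask.copy()
        for r, c in drawn:
            out[r, c] = 1
        return out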
Preferably, step S4 comprises the following steps:
S4.1: if bestBackgroundDetectFlag is true, slightly dilate bestBackgroundDetect into bestBackgroundDetectFat, then delete from foregroundMask the regions that overlap bestBackgroundDetectFat, keep for foregroundMask only the region connected to the center point of the image and delete the other regions, and finally set backgroundMask equal to bestBackgroundDetect;
S4.2: perform marker-controlled watershed segmentation using the logical OR of backgroundMask and foregroundMask as the marker to obtain outPutImage;
S4.3: take the region of outPutImage whose label number is 2 as the foreground and the rest as the background, obtaining the final binary segmentation result logicImage.
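For illustration only, S4.2 and S4.3 can be sketched with the marker-controlled watershed from scikit-image as below; the patent ORs the two mark images into a single marker image before the watershed, while this sketch uses the two-label marker formulation and assumes the gradient magnitude map VG as the relief, so that label 2 is the foreground exactly as in S4.3:

    import numpy as np
    from skimage.segmentation import watershed

    def marker_watershed(vg, foreground_mask, background_mask):
        # Sketch of S4.2-S4.3: background seeds get label 1, foreground seeds get label 2,
        # then every pixel of the watershed output carries one of those labels.
        markers = np.zeros(vg.shape, dtype=np.int32)
        markers[background_mask == 1] = 1
        markers[foreground_mask == 1] = 2     # foreground seeds take precedence where the marks overlap
        out_put_image = watershed(vg, markers)
        return (out_put_image == 2).astype(np.uint8)   # S4.3: label 2 is the foreground (logicImage)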
Compared with the prior art, the technical solution of the invention has the following beneficial effects:
The method computes the gradient directly on the color image, then exploits the fact that the leaf region contains veins and obtains a more accurate foreground mark map by effectively enhancing and extracting the veins; finally, the image is segmented with the marker-controlled watershed method. The invention overcomes the difficulty of fully automatic segmentation of complex-background leaf images and promotes the development of complex-background leaf image recognition. The method also has a certain reference value for the fully automatic segmentation of images containing a single target against a complex background in which the target region has vein-like texture.
Drawings
Fig. 1 is a schematic flow chart of the invention.
Fig. 2 is a schematic diagram of the preliminary segmentation result of S1, where (a) is the original leaf image, (b) is the segmentation result of the S component map, (c) is the segmentation result of the b component map, (d) is the final segmentation result after correction, (e) is the H component map, (f) is the S component map, (g) is the a component map, (h) is the b component map, (i) is the H component map after OTSU, (j) is the S component map after OTSU, (k) is the a component map after OTSU, and (l) is the b component map after OTSU.
Fig. 3 is a schematic diagram of the result of a Lonicera confusa (Sweet.) DC. leaf image after S1.2, where (a) is the original image, (b) is the b component map, (c) is the b component map after OTSU, and (d) is the result output by S1.2.
Fig. 4 is a schematic diagram of the result of the Clematis chinensis Osbeck leaf image after S1.2, wherein (a) is the original image, (b) is the H component image, (c) is the H component image after OTSU, and (d) is the result output by S1.2.
Fig. 5 is a diagram illustrating the results obtained when the optimal background mark is detected, where (a) is the leaf original image, (b) is the optimal background mark, (c) is the foreground mark, (d) is the final segmentation result, (e) is the H component map, (f) is the S component map, (g) is the a component map, (h) is the b component map, (i) is the H component map segmentation result, (j) is the S component map segmentation result, (k) is the a component map segmentation result, (l) is the b component map segmentation result, (m) is the background detected from the H component map, (n) is the background detected from the S component map, (o) is the background detected from the a component map, and (p) is the background detected from the b component map.
Fig. 6 shows the example leaf (a Polygonum species), in which (a) is the leaf original image, (b) is the gradient magnitude map, (c) is the gradient angle map, (d) is the gradient map with the background removed, (e) is the gradient map with extreme values removed, (f) is the local standard deviation of the gradient angle map, (g) is the enhanced gradient map, (h) is the central rectangular region, (i) is the segmentation result of the gradient map, (j) is the result after deleting small regions, (k) is the dilated result, (l) is the initial foreground with the main veins added, (m) is the scattered-vein candidate map, (n) is the result after deleting regions near the border, (o) shows the lines drawn towards the center point, and (p) is the result after the scattered veins are completely merged.
Fig. 7 is a diagram showing the result of merging the main veins, where (a) is the original image, (b) is the just-merged main veins, (c) is the merged main veins after the first small-scale trimming, and (d) is the final segmentation result.
Fig. 8 is a schematic diagram of segmentation results of other plant leaves, where (a), (e), (i), and (m) are original images of different plant leaves, respectively, (b) is a corresponding background label map of (a), (c) is a corresponding foreground label map of (a), (d) is a corresponding segmentation result of (a), (f) is a corresponding background label map of (e), (g) is a corresponding foreground label map of (e), (h) is a corresponding segmentation result of (e), (j) is a corresponding background label map of (i), (k) is a corresponding foreground label map of (i), (l) is a corresponding segmentation result of (i), (n) is a corresponding background label map of (m), (o) is a corresponding foreground label map of (m), and (p) is a corresponding segmentation result of (m).
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
This embodiment discloses a fully automatic segmentation method for a complex-background leaf image, as shown in Fig. 1, comprising the following steps:
S1: performing a preliminary, simple segmentation of the original leaf image with the maximum between-class variance method;
S2: converting the original leaf image into the HSI and Lab color models to obtain component images, detecting a background mark in the maximum between-class variance segmentation result of each component image, and selecting one of them as the optimal background mark according to a preset criterion;
S3: detecting a foreground mark on the original leaf image;
S4: combining the foreground mark and the background mark, and segmenting with the marker-controlled watershed method to obtain the final segmented image.
Step S1 specifically includes the following steps:
S1.1: reduce the original color RGB leaf image to a preset size, obtaining the image currentImage, specifically:
compute ratio = (1200 × 900) / (number of rows of the original leaf image × number of columns of the original leaf image);
if the ratio is less than 1, shrink the image by the square root of the ratio as the scale factor; if the ratio is not less than 1, do not shrink; this gives the image currentImage. In practice, if other scales are needed, the parameters used to compute the ratio can be adjusted;
S1.2: convert currentImage into the HSI and Lab color model representations, and judge whether the foreground color and the background color differ strongly in the H component image, the S component image, the a component image and the b component image; if so, record the corresponding winning coefficient coef and judgment flag, and output the segmented image BW; specifically:
S1.2.1: segment each component image with the maximum between-class variance method (OTSU) to obtain the corresponding segmented image BW;
S1.2.2: compute frameCof, the proportion of pixels with value "1" among the pixels of the four borders of each segmented image BW;
S1.2.3: compute area, the number of pixels with value "1" in each segmented image BW;
S1.2.4: delete all small regions in each segmented image BW, keeping only the single region with the largest area;
S1.2.5: compute targetArea, the number of pixels with value "1" in the image BW after step S1.2.4;
S1.2.6: take the ratio of targetArea to area as foregroundCoef;
S1.2.7: if frameCof < 0.1 and foregroundCoef > 0.9, the foreground and background colors are considered to differ strongly; set flag to 1 and take (1 − frameCof) × foregroundCoef as the winning coefficient coef; if frameCof ≥ 0.1 or foregroundCoef ≤ 0.9, invert the image BW obtained after S1.2.4, repeat steps S1.2.2 to S1.2.6, and judge again whether frameCof < 0.1 and foregroundCoef > 0.9; if so, set flag to 1 and take (1 − frameCof) × foregroundCoef as the winning coefficient coef, otherwise set flag to 0;
In Fig. 2, the flag of the S component image segmentation result is 1 with coef = 0.9536, and the output BW is shown in Fig. 2(b); the flag of the b component image segmentation result is 1 with coef = 0.8722, and the output BW is shown in Fig. 2(c). The flags of the remaining component images are 0. Therefore the segmented image BW of the S component image, which has flag 1 and the largest coef, i.e. Fig. 2(b), is selected.
S1.3: if the judgment flag obtained for every component image in step S1.2 is 0, go to step S2; if at least one judgment flag is 1, take the segmented image BW with the largest winning coefficient coef as the image logicImage;
S1.4: detect the number of positions at which the foreground region of the image logicImage extends to the four borders; if there are more than 3 such positions, go to step S2; otherwise go to step S1.5;
Generally, the whole leaf is captured when it is photographed, so the leaf is unlikely to touch the border. Occasionally the petiole is too long or the slender leaf tip extends to the border, which is tolerated. Thus, if the foreground touches the border at more than 3 positions, this logicImage must not be taken as the segmentation result.
For example, for the Lonicera confusa (Sweet.) DC. leaf image in Fig. 3(a), whose b component map is Fig. 3(b), OTSU yields the binary map in Fig. 3(c); after selecting the maximum foreground region, it is judged that frameCof < 0.1 and foregroundCoef > 0.9 are satisfied (here frameCof = 0.0970 and foregroundCoef = 0.9985), and the segmentation result in Fig. 3(d) is output. This segmentation result is then selected as logicImage. However, the result is obviously connected to the border at several positions, and the segmentation is poor. This problem is detected and the segmentation result is discarded.
S1.5: crop a sub-image logicImageCrop in the middle of the image logicImage, whose length is 0.6 times the length of logicImage and whose width occupies the same proportion, with its four borders parallel to the four borders of logicImage and the two center points coincident; compute logicImageCropArea, the number of pixels with value "1" in the sub-image logicImageCrop; compute logicImageArea, the number of pixels with value "1" in logicImage; compute the ratio of logicImageCropArea to logicImageArea; if the ratio is not greater than 0.5, go to step S2; if it is greater than 0.5, go to step S1.6;
The purpose of this step is to detect whether the foreground region in logicImage is too small and located near the image border; if so, the segmentation result is discarded. When a leaf image is taken, the leaf is usually placed at the center of the image and occupies most of it, so this situation should not arise.
For example, for the Clematis chinensis Osbeck leaf image in Fig. 4(a), whose H component map is Fig. 4(b), OTSU yields the binary map in Fig. 4(c); after selecting the maximum foreground region, it is judged that frameCof < 0.1 and foregroundCoef > 0.9 are satisfied (here frameCof = 0.0817 and foregroundCoef = 0.9993), and the segmentation result in Fig. 4(d) is output. This segmentation result is then selected as logicImage, but the result obtained is clearly completely wrong. Fortunately, its foreground region is detected to be too small and located near the image border, and it is discarded.
S1.6: perform a morphological closing on the image logicImage;
S1.7: fill the holes in the image logicImage obtained in step S1.6 to obtain the segmentation result.
The logicImage of Fig. 2(b) passes both the S1.4 and S1.5 tests, and after the S1.6 and S1.7 corrections the result in Fig. 2(d) is obtained.
Preferably, step S2 specifically includes the following steps:
S2.1: set a credible-foreground scale parameter para;
S2.2: detect a background mark on each segmented image BW corresponding to the H component image, the S component image, the a component image and the b component image, specifically:
S2.2.1: in the middle of the segmented image BW, crop a sub-image credibleForeground whose length is (para × 2) times the length of BW and whose width occupies the same proportion, with its four borders parallel to the four borders of BW and its center point coincident with the center point of BW; when a leaf image is taken, the leaf is usually placed at the center of the image and occupies most of it, so the target leaf is large and presumably covers this small central area.
S2.2.2: in the sub-image credibleForeground, compute the proportion of pixels with value "1" as coefOfCredibleForeground;
S2.2.3: if coefOfCredibleForeground lies in the interval [0.2, 0.8], the segmentation accuracy of BW is considered questionable and BW is unsuitable for detecting a background mark; end the detection process and return a detection-failure message;
S2.2.4: if coefOfCredibleForeground is smaller than 0.2, store the segmented image BW as the image backgroundCandidate and store the inverted BW as the image BW; if coefOfCredibleForeground is not smaller than 0.2, store the inverted BW as the image backgroundCandidate;
S2.2.5: compute frameCof, the proportion of pixels with value "1" among the pixels of the four borders of the image BW;
S2.2.6: if frameCof is greater than 0.6, the segmentation accuracy of the image BW is considered questionable; end the detection process and return a detection-failure message;
S2.2.7: apply a mathematical-morphology erosion to the image backgroundCandidate;
S2.2.8: set the pixels on the four borders of the backgroundCandidate obtained in S2.2.7 to "1";
S2.2.9: on the image backgroundCandidate obtained after S2.2.8, keep only the region connected to the top-left corner point of the image, delete the other regions, and record the result as the image background; this gives the required background mark, but holes may still exist in it. The following three steps eliminate the holes and save time in the marker-controlled watershed segmentation.
S2.2.10: invert the image background and store it as the image reverseBackground;
S2.2.11: on the image reverseBackground, keep only the region connected to the center point of the image, delete the other regions, and record the result as the image reverseBackground2;
S2.2.12: store the inverted reverseBackground2 as the image background;
S2.2.13: in the middle of reverseBackground2, crop a sub-image credibleForegroundClean whose length is (para × 2) times the length of the component image and whose width occupies the same proportion, with its four borders parallel to the four borders of the original image and its center point coincident with the center point of the original image;
S2.2.14: in the sub-image credibleForegroundClean, compute the proportion of pixels with value "1" as coefOfCredibleForegroundClean;
S2.2.15: compute backgroundCoef = 1 − (1.01 − coefOfCredibleForegroundClean) × frameCof, and return the image background, backgroundCoef and a detection-success message;
S2.3: from the four background-mark detection results obtained in S2.2, select the image background that was detected successfully and has the largest backgroundCoef as the image bestBackgroundDetect, and set bestBackgroundDetectFlag to true; if all four detection results return failure messages, set bestBackgroundDetectFlag to false;
S2.4: if bestBackgroundDetectFlag is true, correct the bestBackgroundDetect image as follows: erode bestBackgroundDetect, set all pixels on its four borders to "1", keep only the region connected to the top-left corner of the image, and delete the remaining regions.
As shown in Fig. 5, Fig. 5(a) is the original image of the example leaf (a Polygonum species). Figs. 5(e) to 5(h) are, in order, the H component map, the S component map, the a component map and the b component map. Figs. 5(i) to 5(l) are the binary maps obtained by segmenting the four component maps with OTSU. Figs. 5(m) to 5(p) are the four background marks detected after S2.2 on the four binary maps. The corresponding backgroundCoef values were measured as 0.9942, 0.9972, 0.9964 and 0.9986, respectively; that is, the background mark detected on the binary map of the b component map (Fig. 5(p)) has the largest backgroundCoef. This background mark is selected as the optimal background mark in S2.3 and corrected in S2.4, giving the bestBackgroundDetect shown in Fig. 5(b). Next, the foreground mark in Fig. 5(c) is detected. Finally, combining the two mark images gives the segmentation result shown in Fig. 5(d).
Step S3 comprises the following steps:
S3.1: initialize the foreground mark image foregroundMask and the background mark image backgroundMask;
S3.2: compute the gradient directly on the color RGB image currentImage;
S3.3: enhance the veins;
S3.4: segment the veins;
S3.5: merge the main veins;
S3.6: merge the scattered, fine veins.
Step S3.1 comprises the following steps:
S3.1.1: set a near-distance criterion coefficient nearDistancePara, and set the near-distance criterion nearDistance to the average of the image length and width multiplied by nearDistancePara, rounded up; set a near-border distance criterion coefficient nearBoundaryPara, set the near-border row-count criterion nearBoundaryRowDistance to the total number of image rows multiplied by nearBoundaryPara, rounded up, and set the near-border column-count criterion nearBoundaryColDistance to the total number of image columns multiplied by nearBoundaryPara, rounded up; set a very-near-border distance criterion coefficient veryNearBoundaryPara, set the very-near-border row-count criterion veryNearBoundaryRowDistance to the total number of image rows multiplied by veryNearBoundaryPara, rounded up, and set the very-near-border column-count criterion veryNearBoundaryColDistance to the total number of image columns multiplied by veryNearBoundaryPara, rounded up; set a dodge-border distance coefficient dodgeBoundaryPara (for example 0.2; its value must be greater than nearBoundaryPara), set the dodge-border row count dodgeBoundaryRowDistance to the total number of image rows multiplied by dodgeBoundaryPara, rounded up, and set the dodge-border column count dodgeBoundaryColDistance to the total number of image columns multiplied by dodgeBoundaryPara, rounded up;
S3.1.2: initialize the foreground mark image foregroundMask, specifically: create a binary image foregroundMask with the same size as the color RGB image currentImage and all pixel values "0", and reset to "1" the pixel values inside a central rectangular area whose length is (para × 2) times the length of foregroundMask and whose width occupies the same proportion, with the four borders of the rectangle parallel to the four borders of foregroundMask and the two center points coincident;
S3.1.3: initialize the background mark image backgroundMask, specifically: create a binary image backgroundMask with the same size as the color RGB image currentImage and all pixel values "0", and reset the pixel values on the four borders of backgroundMask to "1".
Step S3.2 comprises the following steps:
S3.2.1: compute the partial derivatives in the x and y directions: let the coordinates of any point on currentImage be (x, y) and its pixel value be (R, G, B), where R, G and B are the red, green and blue component values; find the six partial derivatives ∂R/∂x, ∂G/∂x, ∂B/∂x, ∂R/∂y, ∂G/∂y and ∂B/∂y (the Sobel operator is used when computing the six partial derivatives), and then compute
g_xx = (∂R/∂x)² + (∂G/∂x)² + (∂B/∂x)²,
g_yy = (∂R/∂y)² + (∂G/∂y)² + (∂B/∂y)²,
g_xy = (∂R/∂x)(∂R/∂y) + (∂G/∂x)(∂G/∂y) + (∂B/∂x)(∂B/∂y);
S3.2.2: find
θ1 = (1/2) · arctan( 2·g_xy / (g_xx − g_yy) ),
where arctan is the arctangent function;
S3.2.3: find
θ2 = θ1 + π/2;
S3.2.4: find F(θ1) and F(θ2), where
F(θ) = { (1/2) · [ (g_xx + g_yy) + (g_xx − g_yy)·cos 2θ + 2·g_xy·sin 2θ ] }^(1/2);
S3.2.5: if F(θ1) is greater than or equal to F(θ2), take F(θ1) as the gradient magnitude F(x, y) and θ1 as the gradient angle θ(x, y); otherwise take F(θ2) as the gradient magnitude F(x, y) and θ2 as the gradient angle θ(x, y); store the gradient magnitude map VG and the gradient angle map A.
Fig. 6(a) is the color image of the example leaf (a Polygonum species); the gradient is computed directly on it, giving the gradient magnitude map and gradient angle map of Fig. 6(b) and Fig. 6(c). As can be seen from Fig. 6(b), although somewhat dim, the leaf veins are relatively clear, and the background area outside the leaf has low brightness.
Step S3.3 comprises the following steps:
s3.3.1: if the bestbackgroundetectFlat is true, detecting the value of any point in the bestbackgroundetect, if the value is '1', resetting the point value at the same position in the gradient amplitude graph VG to be '0'; this step eliminates the possibility that some pixel points in the background area are mixed into the foreground again after enhancement, as shown in fig. 6 (d);
s3.3.2: in the statistical gradient magnitude graph VG, dividing a boundary threshold value high of a portion of points with a maximum pixel value of 1%, and dividing a boundary threshold value low of a portion of points with a minimum pixel value of 1%; resetting all the points with the pixel values larger than high to be high; resetting all points with pixel values less than low to be low; the step removes the point of extreme pixel value in the gradient amplitude image, which is beneficial to the subsequent OTSU segmentation, and the result is shown in FIG. 6(e), and the image is softer.
S3.3.3: in the gradient angle graph A, for each point, a disc with a small radius as a small scale is used as a neighborhood range, and a standard deviation is calculated to a standard deviation image stdOfA; the smaller the standard deviation in the local range, the higher the uniformity of the angle of the gradient vector in the vicinity of the point, and as shown in fig. 6(f), the middle dark line corresponds to the main vein.
S3.3.4: for the pixel value α at each point of the gradient amplitude map VG, reset α = α/(stdOfA + 0.1). This enhances the veins in VG; Fig. 6(g) shows the result after the veins are enhanced. A sketch of steps S3.3.2–S3.3.4 is given below.
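A minimal sketch of the vein enhancement in S3.3.1–S3.3.4 follows. The disc radius of 5 for the angle-uniformity neighborhood is an assumption (the text only says "a small radius"), and percentile clipping is used to realize the 1% boundary thresholds.

```python
import numpy as np
from scipy.ndimage import generic_filter

def enhance_veins(VG, A, best_background=None, disc_radius=5):
    """Sketch of S3.3: suppress known background, clip extremes, boost uniform-angle ridges."""
    VG = VG.astype(np.float64).copy()
    if best_background is not None:              # S3.3.1: zero out detected background
        VG[best_background == 1] = 0.0
    low, high = np.percentile(VG, [1, 99])       # S3.3.2: 1% boundary thresholds
    VG = np.clip(VG, low, high)

    # S3.3.3: local standard deviation of the gradient angle in a disc neighborhood
    # (generic_filter is slow but keeps the sketch simple).
    yy, xx = np.mgrid[-disc_radius:disc_radius + 1, -disc_radius:disc_radius + 1]
    footprint = (xx * xx + yy * yy) <= disc_radius * disc_radius
    std_of_a = generic_filter(A.astype(np.float64), np.std, footprint=footprint)

    return VG / (std_of_a + 0.1)                 # S3.3.4: vein enhancement
```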
S3.4 comprises the following steps:
s3.4.1: at the center of the gradient amplitude map VG, cut out a sub-image VGcrop whose length is (para × 2) times the length of VG and whose width occupies the same proportion; the four borders of the sub-image are parallel to the four borders of VG, and its center point coincides with the center point of VG, as shown in Fig. 6(h);
s3.4.2: compute the maximum inter-class variance (OTSU) segmentation threshold level of VGcrop;
s3.4.3: taking level as the threshold, perform threshold segmentation on the gradient amplitude map VG to obtain OtsuBW; the purpose of this is to find the veins, as shown in Fig. 6(i);
S3.4.4: delete the connected regions with too small an area in OtsuBW, i.e. regions whose area is 10 pixels or fewer, as shown in Fig. 6(j);
s3.4.5: dilate OtsuBW in order to reconnect broken veins, as shown in Fig. 6(k). A sketch of S3.4 is given below.
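A minimal sketch of S3.4 using scikit-image follows. The Otsu threshold is computed on the central crop and applied to the whole map; the radius-2 disc used for the dilation is an assumption, since the patent does not state the structuring element for this step.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects, binary_dilation, disk

def segment_veins(VG, para=0.1):
    """Sketch of S3.4: threshold the enhanced gradient map to get a vein mask OtsuBW."""
    h, w = VG.shape
    rh, rw = int(round(h * para * 2)), int(round(w * para * 2))
    r0, c0 = (h - rh) // 2, (w - rw) // 2
    vg_crop = VG[r0:r0 + rh, c0:c0 + rw]                    # S3.4.1: central crop

    level = threshold_otsu(vg_crop)                         # S3.4.2: Otsu threshold from the crop
    otsu_bw = VG > level                                    # S3.4.3: apply it to the whole map

    otsu_bw = remove_small_objects(otsu_bw, min_size=11)    # S3.4.4: drop areas of 10 px or fewer
    otsu_bw = binary_dilation(otsu_bw, disk(2))             # S3.4.5: reconnect broken veins
    return otsu_bw
```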
step S3.5 comprises the following steps:
s3.5.1: copy foregroundMask as a backup foregroundMaskBackup;
s3.5.2: compute foregroundMask OR OtsuBW and store the result as foregroundMask;
s3.5.3: on foregroundMask, select and keep the area connected with the center point of the image, and delete the other areas; the result is shown in Fig. 6(l);
S3.5.4: copy foregroundMask as a backup foregroundMaskForDel;
s3.5.5: detect whether there is any point with value "1" on foregroundMask that falls within the first nearBoundaryRowDistance rows, or the last nearBoundaryRowDistance rows, or the first nearBoundaryColDistance columns, or the last nearBoundaryColDistance columns; if not, the newly merged foreground region is considered far from the borders of the image and is likely to be the main vein passing through the central region of the image, so turn to S3.6. As in Fig. 6(l), there is no foreground near the boundary, so in this example we turn directly to S3.6; Fig. 6(l) is thus also the result after the main vein has been merged in.
S3.5.6: perform a small-scale erosion on foregroundMask, select and keep the area connected with the center point of the image, and delete the other areas. The purpose of this step is: when the main vein is too long and extends near the borders of the image, it may wrongly connect regions outside the leaf, so it must be clipped. As shown in Fig. 7(b), the main vein extends to the vicinity of the image boundary (in fact it reaches the four borders of the image), so clipping is clearly needed; in that example S3.5.6 is executed after S3.5.5. After the clipping, the result of Fig. 7(c) is obtained, in which the previously merged main vein is substantially preserved; the final segmentation result of Fig. 7(d) is obtained later.
S3.5.7: further detect whether there is any point with value "1" on foregroundMask that falls within the first veryNearBoundaryRowDistance rows, or the last veryNearBoundaryRowDistance rows, or the first veryNearBoundaryColDistance columns, or the last veryNearBoundaryColDistance columns; if so, the foreground region is still very close to the image border, meaning that the previous erosion has not achieved the purpose of clipping and erosion is needed again; if not, turn to S3.5.11;
s3.5.8: erode foregroundMask using a disc of radius 3 as the structuring element, select and keep the area connected with the center point of the image, and delete the other areas;
s3.5.9: again detect whether there is any point with value "1" on foregroundMask that falls within the first veryNearBoundaryRowDistance rows, or the last veryNearBoundaryRowDistance rows, or the first veryNearBoundaryColDistance columns, or the last veryNearBoundaryColDistance columns; if so, the foreground region is still very close to the image border, meaning that the previous erosion operations have not yet achieved their purpose and special treatment is needed; if not, turn to S3.5.11;
s3.5.10: reset to "0" all points of foregroundMask in the first dodgeBoundaryRowDistance rows, the last dodgeBoundaryRowDistance rows, the first dodgeBoundaryColDistance columns and the last dodgeBoundaryColDistance columns;
s3.5.11: compute foregroundMask OR foregroundMaskBackup and store the result as foregroundMask, completing the merging of the main vein. A condensed sketch of S3.5 is given below.
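The control flow of S3.5 can be condensed as in the sketch below; keep_center_region and near_border are illustrative helpers, and the radius-2 disc for the first, "small-scale" erosion is an assumption (only the radius 3 of S3.5.8 is given in the text).

```python
import numpy as np
from skimage.measure import label
from skimage.morphology import binary_erosion, disk

def keep_center_region(mask):
    """Keep only the connected region containing the image center point."""
    lab = label(mask, connectivity=2)
    center_label = lab[lab.shape[0] // 2, lab.shape[1] // 2]
    if center_label == 0:
        return np.zeros_like(mask, dtype=bool)
    return lab == center_label

def near_border(mask, row_dist, col_dist):
    """True if any foreground point lies within the given distance of an image border."""
    return bool(mask[:row_dist].any() or mask[-row_dist:].any()
                or mask[:, :col_dist].any() or mask[:, -col_dist:].any())

def merge_main_vein(fg, otsu_bw, near_rc, very_near_rc, dodge_rc):
    """Sketch of S3.5: merge the main vein; returns (foregroundMask, foregroundMaskForDel)."""
    fg = fg.astype(bool)
    fg_backup = fg.copy()                                         # S3.5.1
    fg = keep_center_region(fg | otsu_bw.astype(bool))            # S3.5.2-S3.5.3
    fg_for_del = fg.copy()                                        # S3.5.4
    if near_border(fg, *near_rc):                                 # S3.5.5: vein reaches the border?
        fg = keep_center_region(binary_erosion(fg, disk(2)))      # S3.5.6: small-scale clip
        if near_border(fg, *very_near_rc):                        # S3.5.7
            fg = keep_center_region(binary_erosion(fg, disk(3)))  # S3.5.8
            if near_border(fg, *very_near_rc):                    # S3.5.9
                r, c = dodge_rc                                   # S3.5.10: clear border bands
                fg[:r] = False; fg[-r:] = False
                fg[:, :c] = False; fg[:, -c:] = False
    return fg | fg_backup, fg_for_del                             # S3.5.11
```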
Step S3.6 comprises the following steps:
S3.6.1: in OtsuBW, delete the foreground regions recorded in foregroundMaskForDel and store the result as the fragmentary-vein candidate image candidates; the deletion is done as follows: for any point in OtsuBW, if the value of the point at the same position in foregroundMaskForDel is "1", reset the point in OtsuBW to "0", as shown in Fig. 6(m);
S3.6.2: scanning the whole image on candidates to find out the distance DN between each region and the central point of the image; if DN is less than the near distance, marking, recording the row number row and the column number col of the point nearest to the central point of the image in the area, simultaneously judging whether the horizontal distance between each area and the four borders of the image is less than the near BoundarRowdistance, and whether the vertical distance is less than the near Coldistance, if so, marking that the area is too close to the borders;
s3.6.3: copy candidates as a backup named avoidRegions;
s3.6.4: delete the regions of candidates marked as too close to the border, as shown in Fig. 6(n);
s3.6.5: for each region of candidates whose distance from the image center point is less than nearDistance, draw, in an avoiding manner, a line segment from the point of the region nearest to the image center point (whose row and column numbers were obtained in S3.6.2) toward the image center point. The avoidance rule is: let the coordinates of any point of the line segment be (x, y); if avoidRegions(x, y) is "1" and the point is not the starting point of the line segment, the line segment is cancelled. In addition, during the drawing, if foregroundMask(x, y) is detected to be "1", the region is considered connected to foregroundMask, the task is completed, and no further drawing is needed, so as to save time. The resulting candidates image is shown in Fig. 6(o);
s3.6.6: compute foregroundMask OR candidates and store the result as foregroundMask;
s3.6.7: in foregroundMask, select and keep the area connected with the center point of the image. A sketch of the line-drawing merge of S3.6 follows.
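A rough sketch of the scattered-vein merge in S3.6 follows; the straight-line rasterization with np.linspace and the bounding-box test for "too close to the border" are simplifications of the text, not the patent's exact procedure.

```python
import numpy as np
from skimage.measure import label, regionprops

def merge_scattered_veins(fg, otsu_bw, fg_for_del, near_distance, near_rc):
    """Sketch of S3.6: connect nearby vein fragments to the foreground marker."""
    fg = fg.astype(bool).copy()
    h, w = fg.shape
    center = np.array([h // 2, w // 2])
    candidates = otsu_bw.astype(bool) & ~fg_for_del.astype(bool)   # S3.6.1
    avoid_regions = candidates.copy()                              # S3.6.3
    lab = label(candidates, connectivity=2)

    for region in regionprops(lab):
        coords = region.coords                                     # (row, col) points
        dists = np.linalg.norm(coords - center, axis=1)
        if dists.min() >= near_distance:                           # S3.6.2: too far from the center
            continue
        r0, c0, r1, c1 = region.bbox
        if r0 < near_rc[0] or h - r1 < near_rc[0] or c0 < near_rc[1] or w - c1 < near_rc[1]:
            candidates[lab == region.label] = False                # S3.6.4: too close to a border
            continue
        start = coords[dists.argmin()]                             # S3.6.5: draw toward the center
        n = int(np.ceil(dists.min())) + 1
        line = np.linspace(start, center, n).round().astype(int)
        drawn, ok = [], True
        for r, c in line[1:]:
            if avoid_regions[r, c]:      # runs into another candidate region: cancel the segment
                ok = False
                break
            drawn.append((r, c))
            if fg[r, c]:                 # already reaches the foreground marker: stop early
                break
        if ok:
            for r, c in drawn:
                candidates[r, c] = True

    fg = fg | candidates                                           # S3.6.6
    # S3.6.7 then keeps only the region connected to the image center
    # (see keep_center_region in the S3.5 sketch above).
    return fg
```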
S4 includes the steps of:
s4.1: if bestBackgroundDetectFlag is true, slightly dilate bestBackgroundDetect into bestBackgroundDetectFat using a disc structuring element of radius 3, then delete from foregroundMask the regions overlapping bestBackgroundDetectFat, keep in foregroundMask only the area connected with the center point of the image and delete the other areas, and finally set backgroundMask equal to bestBackgroundDetect;
s4.2: perform marker-controlled watershed segmentation using the result of backgroundMask OR foregroundMask as the marker, obtaining outPutImage; an example of backgroundMask is shown in Fig. 5(b), and an example of foregroundMask is shown in Fig. 5(c).
S4.3: take the area labeled "2" in outPutImage as the foreground and the rest as the background to obtain the final binary segmentation result logicImage, as shown in Fig. 5(d). A sketch of this marker-controlled watershed step follows.
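A minimal sketch of the marker-controlled watershed of S4.2–S4.3 with scikit-image follows; flooding the gradient amplitude map VG with a marker image labeled 1 (background) and 2 (foreground) is an illustrative reading of the step, not the patented implementation.

```python
import numpy as np
from skimage.segmentation import watershed

def marker_watershed(VG, foreground_mask, background_mask):
    """Sketch of S4.2-S4.3: marker-controlled watershed on the gradient amplitude map."""
    markers = np.zeros(VG.shape, dtype=np.int32)
    markers[background_mask.astype(bool)] = 1          # background marker
    markers[foreground_mask.astype(bool)] = 2          # foreground marker
    out_put_image = watershed(VG, markers, watershed_line=True)  # "0" lines, "1"/"2" regions
    logic_image = out_put_image == 2                    # S4.3: label "2" is the leaf foreground
    return logic_image
```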
For further examples, please refer to Fig. 8. Fig. 8 shows, for 4 additional complex background leaf images, the original image, the foreground mask (the foregroundMask of S4.2), the background mask (the backgroundMask of S4.2) and the segmentation result.
To evaluate the segmentation results, 5 well-known observation criteria were used. TP (true positive) denotes the number of originally positive samples (points in the leaf region) classified as positive; TN (true negative) denotes the number of originally negative samples (background points) classified as negative; FP (false positive) denotes the number of originally negative samples classified as positive; FN (false negative) denotes the number of originally positive samples classified as negative.
[The formulas of the five criteria, expressed in terms of TP, TN, FP and FN, appear only as images in the original and are not reproduced here.]
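Since the five formulas appear only as images in the source, the sketch below computes the confusion-matrix counts and several commonly used criteria of this kind (accuracy, precision, recall/sensitivity, specificity, F1) purely as an illustration, not as the patent's definitive list.

```python
import numpy as np

def evaluation_criteria(pred, truth):
    """Confusion-matrix counts and common segmentation criteria for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    eps = 1e-12  # guard against empty classes
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn + eps),
        "precision": tp / (tp + fp + eps),
        "recall": tp / (tp + fn + eps),       # sensitivity
        "specificity": tn / (tn + fp + eps),
        "f1": 2 * tp / (2 * tp + fp + fn + eps),
    }
```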
Leaf images of 88 kinds of plants were collected, 100 to 115 images for each kind. For the plant species names and drug names, reference is made to the literature.
In order to verify the effectiveness of the segmentation algorithm, 5 images were randomly selected from each of the 88 kinds of leaf images, and standard segmentation results were manually determined one by one as references, thus constituting Database 1. 8 of these images are shown in Fig. 1, with the segmentation result of the algorithm attached to the original image.
Later, in order to train a deep learning algorithm, another 5 images were randomly selected from each of the 88 kinds of leaf images after removing the images already selected for Database 1, and standard segmentation results were manually determined one by one. This forms a training set corresponding to the test set Database 1, denoted Database 0.
In order to compare directly with existing mainstream algorithms, the traditional methods OTSU, MeanShift and GraphCut were used to run segmentation tests on the 440 images of Database 1. The results are recorded in lines 1 to 3 of Table 1.
A deep learning segmentation method, FCN, was then further adopted. imagenet-vgg-verydeep-16 was selected as the embedded deep network, the FCN was trained for 50 epochs with the previously introduced Database 0 as the training set, and Database 1 was then segmented, giving a good result. The result is recorded in line 4 of Table 1.
Then the proposed method was applied to segment the 440 complex background leaf images of Database 1. The resulting segmentation results are recorded in line 5 of Table 1 (highlighted in bold).
It should be added that for MeanShift the images were compressed to 200 × 150 for processing, because MeanShift runs far too long on high-resolution images (more than one hour for an image of resolution 400 × 300), while for the FCN the images were compressed to 400 × 300, because it cannot accept high-resolution images for training (memory and time constraints). The other algorithms were tested at the default image resolution of 1200 × 900; at this resolution, details of the image such as the fuzz or spurs at the leaf edge are well preserved.
As can be seen from the comparison of lines 1 to 5 of Table 1, the proposed algorithm has significant advantages over the three conventional methods, and it also performs better than the FCN.
TABLE 1
AVERAGE RESULT BASED ON DATABASE 1
[The body of Table 1 appears only as an image in the original and is not reproduced here.]
The same or similar reference numerals correspond to the same or similar parts;
the terms describing positional relationships in the drawings are for illustrative purposes only and are not to be construed as limiting the patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (9)

1. A full-automatic segmentation method for a complex background leaf image is characterized by comprising the following steps:
s1: carrying out preposed simple segmentation on the original blade image by using a maximum inter-class variance method;
s2: respectively converting the original blade images into representation forms of HIS and Lab models to obtain component images, detecting background marks in a maximum inter-class variance method segmentation result of each component image, and selecting one background mark as an optimal background mark according to a preset standard;
s3: detecting foreground marks on an original leaf image;
s4: arranging the foreground mark and the background mark, and segmenting by using a mark watershed segmentation method to obtain a final segmentation image;
step S1 specifically includes the following steps:
s1.1: reducing the original blade image of the color RGB to a preset size to obtain an image currentImage, which specifically comprises the following steps:
calculating ratio (1200 × 900)/(number of rows of original blade image × number of columns of original blade image);
if the ratio is less than 1, reducing the image by taking the square root of the ratio as the scaling factor; if the ratio is not less than 1, not reducing; obtaining the image currentImage;
s1.2: respectively converting the currentImage into representation forms of HIS and Lab models, judging whether the foreground color and the background color in the H component image, the S component image, the a component image and the b component image have overlarge difference, if so, recording the corresponding win coefficient coef and the judgment mark flat, and outputting a segmentation image BW; the method specifically comprises the following steps:
s1.2.1: dividing each component image by adopting a maximum inter-class variance method to obtain a corresponding divided image BW;
s1.2.2: calculating the ratio frameCof of pixels with the value of 1 in the pixels of the four frames of each divided image BW;
s1.2.3: calculating the number area of pixels with the median value of '1' in each segmentation image BW;
s1.2.4: deleting all small-area regions in each segmentation image BW, and reserving the unique region with the largest area;
s1.2.5: calculating the number targetArea of pixels having a value of "1" in the image BW after the step S1.2.4;
s1.2.6: taking the ratio of targetArea to area as foregroundCoef;
s1.2.7: if frameCof < 0.1 and foregroundCoef > 0.9, it is considered that there is an excessive difference between the foreground and background colors; set flat to 1 and take (1-frameCof) × foregroundCoef as the win coefficient coef; if frameCof ≥ 0.1 or foregroundCoef ≤ 0.9, invert the image BW obtained after S1.2.4, repeat steps S1.2.2 to S1.2.6, and then judge again whether frameCof < 0.1 and foregroundCoef > 0.9; if so, set flat to 1 and take (1-frameCof) × foregroundCoef as the win coefficient coef, otherwise set flat to 0;
s1.3: if the judgment flag flat obtained from each component image in step S1.2 is 0, go to step S2; if more than one judgment mark flat is 1, the segmentation image BW corresponding to the maximum win coefficient coef is taken as the image logicImage;
s1.4: detecting the number of the positions where the foreground area of the image logicImage extends to the four-side frame, and if the position exceeds 3, entering the step S2; if the position does not exceed 3, the step S1.5 is carried out;
s1.5: cutting a sub-image logicImageCrop at the middle of the image logicImage, whose length is 0.6 times the length of logicImage and whose width occupies the same proportion; the four borders of logicImageCrop are parallel to the four borders of logicImage, and their center points coincide; calculating the number logicImageCropArea of pixel points with value "1" in logicImageCrop; calculating the number logicImageArea of pixel points with value "1" in logicImage; calculating the ratio of logicImageCropArea to logicImageArea; if the ratio is not more than 0.5, entering step S2; if it is more than 0.5, entering step S1.6;
s1.6: performing closed operation on the image logicImage;
s1.7: and (4) filling holes in the image logicImage obtained in the step (S1.6) to obtain a segmentation result.
2. The method for fully automatically segmenting the complex background leaf image according to claim 1, wherein the step S2 specifically comprises the following steps:
s2.1: setting a credible foreground scale parameter para;
s2.2: detecting a background mark on each maximum inter-class variance method segmentation image BW corresponding to the H component image, the S component image, the a component image and the b component image respectively, specifically comprising:
s2.2.1: cutting a sub-image credibleForeground at the middle of the segmentation image BW, whose length is (para × 2) times the length of the component image and whose width occupies the same proportion; the four borders of the sub-image credibleForeground are parallel to the four borders of the segmentation image BW, and its center point coincides with the center point of the segmentation image BW;
s2.2.2: calculating, in the sub-image credibleForeground, the proportion coefOfCredibleForeground of pixels with value "1";
s2.2.3: if coefOfCredibleForeground is in the interval [0.2, 0.8], the segmentation accuracy of the segmentation image BW is considered unsuitable for detecting the background mark in it, the detection process is ended, and a detection failure message is returned;
s2.2.4: if coefOfCredibleForeground is smaller than 0.2, storing the segmentation image BW as the image backgroundCandidate, and storing the result of inverting the segmentation image BW as the image BW; if coefOfCredibleForeground is not less than 0.2, storing the result of inverting the segmentation image BW as the image backgroundCandidate;
s2.2.5: calculating the ratio frameCof of pixels with value "1" among the pixels of the four borders of the image BW;
s2.2.6: if frameCof is more than 0.6, the segmentation accuracy of the image BW is regarded as questionable, the detection process is ended, and a detection failure message is returned;
s2.2.7: carrying out a mathematical morphology erosion operation on the image backgroundCandidate;
s2.2.8: setting the pixels of the four borders of the backgroundCandidate obtained after S2.2.7 to "1";
s2.2.9: on the image backgroundCandidate after S2.2.8, selecting and keeping the area connected with the point at the upper left corner of the image, deleting the other areas, and recording the result as the image background, thereby obtaining the required background mark;
s2.2.10: inverting the image background and storing it as the image revertBackground;
s2.2.11: on the image revertBackground, selecting and keeping the area connected with the center point of the image, deleting the other areas, and recording the result as the image revertBackground2;
s2.2.12: storing the result of inverting the image revertBackground2 as the image background;
s2.2.13: cutting a sub-image credibleForegroundClean at the middle of revertBackground2, whose length is (para × 2) times the length of the component image and whose width occupies the same proportion; the four borders of the sub-image are parallel to the four borders of the original image, and its center point coincides with the center point of the original image;
s2.2.14: calculating, in the sub-image credibleForegroundClean, the proportion coefOfCredibleForegroundClean of pixels with value "1";
s2.2.15: calculating backgroundCoef = 1 - (1.01 - coefOfCredibleForegroundClean) × frameCof, and returning the image background, backgroundCoef and a detection success message;
s2.3: from the four background mark detection results obtained in S2.2, selecting the image background whose detection succeeded and whose backgroundCoef is the largest as the image bestBackgroundDetect, and registering bestBackgroundDetectFlag as true; if all four background mark detection results return failure messages, registering bestBackgroundDetectFlag as false;
s2.4: if bestBackgroundDetectFlag is true, correcting the image bestBackgroundDetect, the specific process being: eroding the image bestBackgroundDetect, setting all pixel points on its four borders to "1", selecting and keeping the area connected with the upper left corner of the image, and deleting the remaining areas.
3. The method for fully automatically segmenting the complex background blade image according to claim 2, wherein the step S3 comprises the following steps:
s3.1: performing initialization settings to obtain an initialized foreground mark image foregroundMask and an initialized background mark image backgroundMask;
s3.2: directly solving the gradient of the currentImage of the color RGB image;
s3.3: enhancement of veins;
s3.4: dividing veins;
s3.5: the main veins are merged;
s3.6: the scattered and fine veins are incorporated.
4. The method for fully automatically segmenting the complex background leaf image according to claim 3, wherein the step S3.1 comprises the following steps:
s3.1.1: setting a near-distance criterion coefficient nearDistancePara, setting the near-distance criterion nearDistance as the average of the length and the width of the image multiplied by nearDistancePara, rounded up; setting a near-border distance criterion coefficient nearBoundaryPara, setting the near-border row-number criterion nearBoundaryRowDistance as the total number of rows of the image multiplied by nearBoundaryPara, rounded up, and setting the near-border column-number criterion nearBoundaryColDistance as the total number of columns of the image multiplied by nearBoundaryPara, rounded up; setting a very-near-border distance criterion coefficient veryNearBoundaryPara, setting the very-near-border row-number criterion veryNearBoundaryRowDistance as the total number of rows of the image multiplied by veryNearBoundaryPara, rounded up, and setting the very-near-border column-number criterion veryNearBoundaryColDistance as the total number of columns of the image multiplied by veryNearBoundaryPara, rounded up; setting an avoid-border distance coefficient dodgeBoundaryPara, whose value must be larger than nearBoundaryPara, setting the avoid-border row number dodgeBoundaryRowDistance as the total number of rows of the image multiplied by dodgeBoundaryPara, rounded up, and setting the avoid-border column number dodgeBoundaryColDistance as the total number of columns of the image multiplied by dodgeBoundaryPara, rounded up;
s3.1.2: initializing the foreground mark image foregroundMask, specifically: creating a binary image foregroundMask of the same size as the color RGB image currentImage with all pixel values 0, and resetting to "1" the pixel values inside a rectangular area at the middle of foregroundMask whose length is (para × 2) times the length of foregroundMask and whose width occupies the same proportion; the four borders of the rectangle are parallel to the four borders of foregroundMask, and their center points coincide;
s3.1.3: initializing the background mark image backgroundMask, specifically: newly creating a binary image backgroundMask of the same size as the color RGB image currentImage with all pixel values 0, and resetting the pixel values on the four borders of backgroundMask to "1".
5. The method of fully automatically segmenting the complex background leaf image according to claim 4, wherein the step S3.2 comprises the following steps:
directly solving the gradient of the currentImage of the color RGB image to obtain a gradient amplitude image VG and a gradient angle image A, wherein the specific process comprises the following steps:
s3.2.1: computing the partial derivatives in the x and y directions: let the coordinates of any point on currentImage be (x, y) and its pixel value be (R, G, B), where R, G and B respectively represent the component values of red, green and blue; computing the six partial derivatives ∂R/∂x, ∂G/∂x, ∂B/∂x, ∂R/∂y, ∂G/∂y and ∂B/∂y, a sobel operator being used when computing the six partial derivatives, and from them computing
g_xx = (∂R/∂x)² + (∂G/∂x)² + (∂B/∂x)²,
g_yy = (∂R/∂y)² + (∂G/∂y)² + (∂B/∂y)²,
g_xy = (∂R/∂x)(∂R/∂y) + (∂G/∂x)(∂G/∂y) + (∂B/∂x)(∂B/∂y);
s3.2.2: computing F(θ) = sqrt{ ½[(g_xx + g_yy) + (g_xx − g_yy)cos 2θ + 2 g_xy sin 2θ] };
S3.2.3: computing θ1 = ½ arctan(2 g_xy / (g_xx − g_yy)) and θ2 = θ1 + π/2, wherein arctan is an arctangent function;
s3.2.4: computing F(θ1) and F(θ2);
s3.2.5: if F(θ1) is greater than or equal to F(θ2), taking F(θ1) as the gradient amplitude F(x, y) and θ1 as the gradient angle θ(x, y); otherwise taking F(θ2) as the gradient amplitude F(x, y) and θ2 as the gradient angle θ(x, y); the gradient amplitude map VG and the gradient angle map A are respectively stored.
6. The method of fully automatically segmenting the complex background leaf image according to claim 5, wherein the step S3.3 comprises the following steps:
s3.3.1: if bestBackgroundDetectFlag is true, detecting the value of every point in bestBackgroundDetect, and if the value is "1", resetting the value of the point at the same position in the gradient amplitude map VG to "0";
s3.3.2: from the statistics of the gradient amplitude map VG, determining the threshold high that separates the 1% of points with the largest pixel values and the threshold low that separates the 1% of points with the smallest pixel values; resetting all points with pixel values greater than high to high, and all points with pixel values less than low to low;
s3.3.3: in the gradient angle map A, for each point, taking a small-radius disc as the neighborhood range and computing the standard deviation within it, obtaining the standard-deviation image stdOfA;
s3.3.4: for the pixel value α at each point of the gradient amplitude map VG, resetting α = α/(stdOfA + 0.1).
7. The method for fully automatically segmenting the complex background leaf image according to claim 6, wherein S3.4 comprises the following steps:
s3.4.1: cutting a sub-image VGcrop at the center of the gradient amplitude map VG, whose length is (para × 2) times the length of VG and whose width occupies the same proportion, the four borders of the sub-image being parallel to the four borders of VG and its center point coinciding with the center point of VG;
s3.4.2: solving the maximum inter-class variance method segmentation threshold level of VGcrop;
s3.4.3: taking level as the threshold and performing threshold segmentation on the gradient amplitude map VG to obtain OtsuBW;
s3.4.4: deleting the connected regions of too small an area in OtsuBW;
s3.4.5: dilating OtsuBW.
8. The method of fully automatically segmenting the complex background leaf image according to claim 7, wherein the step S3.5 comprises the following steps:
s3.5.1: copying foregroundMask as a backup foregroundMaskBackup;
s3.5.2: computing foregroundMask OR OtsuBW and storing the result as foregroundMask;
s3.5.3: on foregroundMask, selecting and keeping the area connected with the center point of the image and deleting the other areas;
s3.5.4: copying foregroundMask as a backup foregroundMaskForDel;
s3.5.5: detecting whether there is a point with value "1" on foregroundMask that falls within the first nearBoundaryRowDistance rows, or the last nearBoundaryRowDistance rows, or the first nearBoundaryColDistance columns, or the last nearBoundaryColDistance columns; if not, considering that the newly merged foreground region is far from the borders of the image and is likely to be the main vein passing through the central region of the image, and turning to S3.6;
s3.5.6: performing small-scale erosion on foregroundMask, selecting and keeping the area connected with the center point of the image, and deleting the other areas;
s3.5.7: further detecting whether there is a point with value "1" on foregroundMask that falls within the first veryNearBoundaryRowDistance rows, or the last veryNearBoundaryRowDistance rows, or the first veryNearBoundaryColDistance columns, or the last veryNearBoundaryColDistance columns; if so, it indicates that the image foreground region is very close to the image border, meaning that the previous erosion has not achieved the purpose of clipping and erosion is needed again; if not, turning to S3.5.11;
s3.5.8: performing small-scale erosion on foregroundMask, selecting and keeping the area connected with the center point of the image, and deleting the other areas;
s3.5.9: further detecting whether there is a point with value "1" on foregroundMask that falls within the first veryNearBoundaryRowDistance rows, or the last veryNearBoundaryRowDistance rows, or the first veryNearBoundaryColDistance columns, or the last veryNearBoundaryColDistance columns; if so, it indicates that the image foreground region is still very close to the image border, meaning that the previous erosion operations have not yet achieved their purpose and special treatment is needed; if not, turning to S3.5.11;
s3.5.10: resetting to "0" all points of foregroundMask in the first dodgeBoundaryRowDistance rows, the last dodgeBoundaryRowDistance rows, the first dodgeBoundaryColDistance columns and the last dodgeBoundaryColDistance columns;
s3.5.11: computing foregroundMask OR foregroundMaskBackup and storing the result as foregroundMask, completing the merging of the main vein;
step S3.6 comprises the following steps:
s3.6.1: in OtsuBW, deleting the foreground regions in foregroundMaskForDel and storing the result as the fragmentary vein candidate image candidates, the deletion being: for any point in OtsuBW, if the value of the point at the same position in foregroundMaskForDel is "1", resetting the point in OtsuBW to "0";
s3.6.2: scanning the whole candidates image and finding the distance DN between each region and the center point of the image; if DN is less than nearDistance, marking the region and recording the row number row and the column number col of the point of the region nearest to the image center point; at the same time, judging whether the horizontal distance between each region and the four borders of the image is less than nearBoundaryRowDistance and whether the vertical distance is less than nearBoundaryColDistance, and if so, marking that the region is too close to the border;
s3.6.3: copying candidates as a backup named avoidRegions;
s3.6.4: deleting all regions of candidates marked as too close to the border;
s3.6.5: for each region of candidates whose distance from the image center point is less than nearDistance, drawing, in an avoiding manner, a line segment from the point of the region nearest to the image center point, whose row and column numbers were obtained in S3.6.2, toward the image center point; the avoidance is defined as follows: setting the coordinates of any point of the line segment as (x, y), if avoidRegions(x, y) is "1" and the point is not the starting point of the line segment, cancelling the line segment; in addition, during the line drawing, if foregroundMask(x, y) is detected to be "1", foregroundMask is considered to be connected and the task is completed;
s3.6.6: computing foregroundMask OR candidates and storing the result as foregroundMask;
s3.6.7: selecting and keeping, in foregroundMask, the area connected with the center point of the image.
9. The method for fully automatically segmenting the complex background blade image according to the claim 8, wherein the S4 comprises the following steps:
s4.1: if bestBackgroundDetectFlag is true, slightly dilating bestBackgroundDetect into bestBackgroundDetectFat, then deleting from foregroundMask the regions overlapping bestBackgroundDetectFat, keeping for foregroundMask the area connected with the center point of the image and deleting the other areas, and finally setting backgroundMask equal to bestBackgroundDetect;
s4.2: taking the result of backgroundMask OR foregroundMask as the marker and performing marker-controlled watershed segmentation to obtain outPutImage, wherein outPutImage is the result of the marker-controlled watershed segmentation, the areas labeled "0" in outPutImage are the watershed lines, the areas labeled "1" are the background area, and the areas labeled "2" are the foreground area;
s4.3: taking the area labeled "2" in outPutImage as the foreground and the rest as the background to obtain the final binary segmentation result logicImage.
CN201910683687.6A 2019-07-26 2019-07-26 Full-automatic segmentation method for complex background leaf image Active CN110443811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910683687.6A CN110443811B (en) 2019-07-26 2019-07-26 Full-automatic segmentation method for complex background leaf image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910683687.6A CN110443811B (en) 2019-07-26 2019-07-26 Full-automatic segmentation method for complex background leaf image

Publications (2)

Publication Number Publication Date
CN110443811A CN110443811A (en) 2019-11-12
CN110443811B true CN110443811B (en) 2020-06-26

Family

ID=68431864

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910683687.6A Active CN110443811B (en) 2019-07-26 2019-07-26 Full-automatic segmentation method for complex background leaf image

Country Status (1)

Country Link
CN (1) CN110443811B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113223041B (en) * 2021-06-25 2024-01-12 上海添音生物科技有限公司 Method, system and storage medium for automatically extracting target area in image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127735A (en) * 2016-06-14 2016-11-16 中国农业大学 A kind of facilities vegetable edge clear class blade face scab dividing method and device
CN106683098A (en) * 2016-11-15 2017-05-17 北京农业信息技术研究中心 Segmentation method of overlapping leaf images

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102012200930A1 (en) * 2012-01-23 2015-08-20 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for detecting a plant against a background
CN103473550B (en) * 2013-09-23 2016-04-13 广州中医药大学 Based on the leaf image dividing method of Lab space and local dynamic threshold
CN104050670B (en) * 2014-06-24 2016-08-17 广州中医药大学 In conjunction with simple mutual and the complex background leaf image dividing method in labelling watershed
CN104598908B (en) * 2014-09-26 2017-11-28 浙江理工大学 A kind of crops leaf diseases recognition methods
CN104850822B (en) * 2015-03-18 2018-02-06 浙江大学 Leaf identification method under simple background based on multi-feature fusion
CN106296662B (en) * 2016-07-28 2019-07-02 北京农业信息技术研究中心 Maize leaf image partition method and device under field conditions
CN106910197B (en) * 2017-01-13 2019-05-28 广州中医药大学 A kind of dividing method of the complex background leaf image in single goal region
CN108564589A (en) * 2018-03-26 2018-09-21 江苏大学 A kind of plant leaf blade dividing method based on the full convolutional neural networks of improvement
CN109359653B (en) * 2018-09-12 2020-07-07 中国农业科学院农业信息研究所 Cotton leaf adhesion lesion image segmentation method and system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127735A (en) * 2016-06-14 2016-11-16 中国农业大学 A kind of facilities vegetable edge clear class blade face scab dividing method and device
CN106683098A (en) * 2016-11-15 2017-05-17 北京农业信息技术研究中心 Segmentation method of overlapping leaf images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Segmentation of Defected Regions in Leaves using K-Means and OTSU's Method; P. Divya et al.; 2018 4th International Conference on Electrical Energy Systems (ICEES); 2018-08-23; full text *
Cotton leaf segmentation method under complex background in cotton fields; Gao Pan et al.; Xinjiang Agricultural Sciences; 2018-12-31; Vol. 55, No. 12; full text *

Also Published As

Publication number Publication date
CN110443811A (en) 2019-11-12

Similar Documents

Publication Publication Date Title
CN107909138B (en) Android platform-based circle-like particle counting method
JP4172941B2 (en) Land parcel data creation method and apparatus
CN104751187B (en) Meter reading automatic distinguishing method for image
Malambo et al. Automated detection and measurement of individual sorghum panicles using density-based clustering of terrestrial lidar data
Gao et al. Fully automatic segmentation method for medicinal plant leaf images in complex background
Gao et al. A method for accurately segmenting images of medicinal plant leaves with complex backgrounds
CN103295013A (en) Pared area based single-image shadow detection method
CN109993750A (en) A kind of segmentation recognition method and system, terminal and readable storage medium storing program for executing of hand jnjuries
CN111259925B (en) K-means clustering and width mutation algorithm-based field wheat spike counting method
US20170178341A1 (en) Single Parameter Segmentation of Images
CN110309808A (en) A kind of adaptive smog root node detection method under a wide range of scale space
CN109145906B (en) Target object image determination method, device, equipment and storage medium
CN107992856A (en) High score remote sensing building effects detection method under City scenarios
CN104598907A (en) Stroke width figure based method for extracting Chinese character data from image
JP4747122B2 (en) Specific area automatic extraction system, specific area automatic extraction method, and program
CN106485252A (en) Dot matrix target image Feature point recognition method is tested in image registration
CN112164030A (en) Method and device for quickly detecting rice panicle grains, computer equipment and storage medium
CN115761270A (en) Color card detection method and device, electronic equipment and storage medium
CN110443811B (en) Full-automatic segmentation method for complex background leaf image
CN110288616A (en) A method of based on dividing shape and RPCA to divide hard exudate in eye fundus image
Kerle et al. Reviving legacy population maps with object-oriented image processing techniques
CN115719355A (en) Extensible farmland boundary normalization and simplification method, system, equipment and terminal
CN115511815A (en) Cervical fluid-based cell segmentation method and system based on watershed
CN111401275B (en) Information processing method and device for identifying grassland edge
Abraham et al. Unsupervised building extraction from high resolution satellite images irrespective of rooftop structures

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant