CN110148146A - Plant leaf segmentation method and system using synthetic data - Google Patents

Plant leaf segmentation method and system using synthetic data

Info

Publication number
CN110148146A
CN110148146A (application CN201910439140.1A)
Authority
CN
China
Prior art keywords
blade
vertex
leaf
image
grid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910439140.1A
Other languages
Chinese (zh)
Other versions
CN110148146B (en)
Inventor
刘骥
林艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University
Priority to CN201910439140.1A
Publication of CN110148146A
Application granted
Publication of CN110148146B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 — Manipulating 3D models or images for computer graphics
    • G06T19/20 — Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/10 — Segmentation; Edge detection
    • G06T7/12 — Edge-based segmentation
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10004 — Still image; Photographic image
    • G06T2207/10012 — Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a plant leaf segmentation method and system using synthetic data. The method comprises: constructing, from a single leaf image, multiple three-dimensional leaf models with different poses and different colours; projecting the three-dimensional leaf models onto a two-dimensional plane to generate two-dimensional leaf image data, and compositing that data with different background images to obtain a training set; training a deep learning model on the training set to obtain a leaf segmentation model; and inputting an image to be segmented that contains a leaf into the leaf segmentation model, which outputs the segmented leaf image. Segmentation data are generated from three-dimensional leaf models: from one leaf image, leaf models of different shapes and colours are reconstructed and composited with different backgrounds, automatically generating a large number of training pictures and training labels that form the training set. This reduces the labour of manually annotating images, allows leaf images to be segmented automatically under natural-background conditions, and gives good segmentation results.

Description

Plant leaf segmentation method and system using synthetic data
Technical field
The present invention relates to the field of image segmentation, and in particular to a plant leaf segmentation method and system using synthetic data.
Background art
In the study of plants, the recognition, detection and segmentation of plant leaves are important computer-vision tasks. However, plant leaves in natural conditions usually sit against complex backgrounds, and the leaves themselves vary widely in colour and have complex textures, so recognising, detecting and segmenting them is very difficult. For a machine to recognise leaves effectively, the prerequisite is that the leaf can be extracted from its natural background; only once the leaf has been separated from the background can subsequent plant analysis continue, for example judging from the segmented leaf whether the plant is growing healthily or is affected by pests and diseases. A leaf segmentation algorithm is, in essence, an ordinary image segmentation method: it partitions the regions of an image.
The principle of image segmentation is to divide an image into multiple parts according to differences in its feature information, parts with identical features belonging together; the feature information may be colour, shape and so on. Current image segmentation algorithms roughly comprise the traditional methods: region division using thresholds, image edge-detection algorithms, region-based methods, segmentation using graph theory, and methods based on energy functionals. Traditional image segmentation methods can achieve good results when the background is fairly simple or a single leaf is imaged, but on images with complex backgrounds their performance degrades sharply. Moreover, many traditional methods require features to be extracted and parameters to be set manually and cannot segment fully automatically, which clearly fails to meet practical needs.
In recent years, improvements in the infrastructure and algorithms associated with big data and cloud computing have driven the rapid development of deep learning, and many researchers now tackle image segmentation, including leaf segmentation, with deep-learning algorithms, which currently achieve the best results on public data sets. Deep-learning approaches, however, require training data. The publicly available data sets are all annotated by hand, which consumes a great deal of manpower and money, and manual annotation is also prone to errors at object edges. The shortage of data therefore seriously hinders the progress of deep-learning methods in image segmentation, and to date there is no segmentation data set dedicated to plant leaves.
In summary, traditional leaf segmentation methods require some parameters to be entered manually and either cannot segment automatically or segment poorly, while deep-learning segmentation methods can segment leaves fully automatically but need large amounts of training data.
Summary of the invention
The present invention aims to solve at least the above technical problems in the prior art, and in particular proposes a plant leaf segmentation method and system using synthetic data.
To achieve the above object of the invention, according to a first aspect of the invention, there is provided a plant leaf segmentation method using synthetic data, comprising:
Step S1: constructing, from a leaf image, multiple three-dimensional leaf models with different poses and different colours;
projecting the three-dimensional leaf models onto a two-dimensional plane to generate two-dimensional leaf image data, and compositing the two-dimensional leaf image data with different background images to obtain a training set, the training set comprising multiple training samples and sample labels corresponding to the training samples;
Step S2: training a deep learning model on the training set to obtain a leaf segmentation model;
Step S3: inputting an image to be segmented that contains a leaf into the leaf segmentation model, which outputs the segmented leaf image.
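The synthesis in steps S1–S3 hinges on one idea: because the leaf is rendered, its alpha mask is the segmentation label for free. Below is a minimal sketch of the compositing that builds one (training sample, label) pair; the nested-list image representation and the function name are illustrative assumptions, not from the patent.

```python
# Sketch: a rendered leaf (RGBA) is composited onto a background; the alpha
# channel directly yields the per-pixel training label (1 = leaf, 0 = background),
# so no manual annotation is needed.

def composite(leaf_rgba, background_rgb):
    """Alpha-composite a rendered leaf over a background; return (sample, label)."""
    sample, label = [], []
    for leaf_row, bg_row in zip(leaf_rgba, background_rgb):
        s_row, l_row = [], []
        for (r, g, b, a), bg in zip(leaf_row, bg_row):
            s_row.append(tuple(int(a * c + (1 - a) * bc) for c, bc in zip((r, g, b), bg)))
            l_row.append(1 if a > 0.5 else 0)  # training label: leaf vs. background
        sample.append(s_row)
        label.append(l_row)
    return sample, label
```

Pairing each composited sample with its alpha-derived label is what lets the method generate arbitrarily many annotated training images from one leaf.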
The above technical solution has the following beneficial effects. To address the shortage of training data for deep-learning leaf research, segmentation data are generated from three-dimensional leaf models: from a single leaf image, leaf models of different shapes and colours are reconstructed and composited with different backgrounds, automatically generating a large number of training pictures and training labels that form a training set. A segmentation network is trained on the synthesised leaf data and labels to obtain a leaf segmentation model, which reduces the labour of manually annotating images and solves the current shortage of training data for deep-learning segmentation methods; the leaf segmentation model can automatically segment leaf images under natural-background conditions, with good segmentation results.
To achieve the above object of the invention, according to a second aspect of the invention, there is provided a plant leaf segmentation system comprising a processor and an image-providing unit. The processor obtains an image to be segmented that contains a leaf from the image-providing unit, and segments the leaf image out of it using the plant leaf segmentation method using synthetic data according to the present invention.
In addition to the beneficial effects of the plant leaf segmentation method using synthetic data according to the present invention, the above technical solution has the beneficial effect of segmenting the leaf image out of the image to be segmented fully automatically and accurately.
As can be seen, traditional leaf segmentation methods require some parameters to be entered manually and either cannot segment automatically or segment poorly, while deep-learning segmentation methods can segment leaves fully automatically but need large amounts of training data. To address this problem, synthetic data are used herein to train the leaf segmentation network.
Description of the drawings
Fig. 1 is a flow chart of the plant leaf segmentation method using synthetic data in a preferred embodiment of the invention;
Fig. 2 is a schematic diagram of obtaining leaf contour segments from grid cells in a preferred embodiment of the invention;
Fig. 3 shows the leaf contour shape before and after contour compression in a preferred embodiment of the invention;
Fig. 4 illustrates the gridding process of the leaf lamina in a preferred embodiment of the invention, in which Fig. 4(a) shows the sampling points on the leaf skeleton and leaf contour, Fig. 4(b) shows the preliminary gridding result of the lamina, and Fig. 4(c) shows the result of further subdividing the lamina mesh;
Fig. 5 shows the movement of an operating point in a preferred embodiment of the invention, in which Fig. 5(a) illustrates the principle of the movement and Fig. 5(b) illustrates the movement process;
Fig. 6 is a schematic diagram of the deformed planar leaf mesh model in a preferred embodiment of the invention;
Fig. 7 shows two-dimensional image data of different sizes and viewing angles in a preferred embodiment of the invention;
Fig. 8 shows a sample and its sample label in a preferred embodiment of the invention;
Fig. 9 is a schematic diagram of atrous spatial pyramid pooling in a preferred embodiment of the invention;
Fig. 10 shows a single iteration of CRF-RNN in a preferred embodiment of the invention.
Specific embodiment
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numbers throughout denote identical or similar elements or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary, intended only to explain the present invention, and are not to be construed as limiting it.
In the description of the present invention, it should be understood that terms indicating orientation or positional relationships, such as "longitudinal", "transverse", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner" and "outer", are based on the orientations or positional relationships shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they should therefore not be understood as limiting the present invention.
In the description of the present invention, unless otherwise specified and limited, it should be noted that the terms "installation", "connected" and "connection" are to be understood broadly: the connection may, for example, be mechanical or electrical, may be internal to two elements, may be direct, or may be indirect through an intermediary; those of ordinary skill in the art can understand the specific meanings of these terms according to the specific circumstances.
The invention discloses a plant leaf segmentation method using synthetic data. In a preferred embodiment, the flow chart of the method is shown in Fig. 1, and the method specifically comprises:
Step S1: constructing, from a leaf image, multiple three-dimensional leaf models with different poses and different colours;
projecting the three-dimensional leaf models onto a two-dimensional plane to generate two-dimensional leaf image data, and compositing the two-dimensional leaf image data with different background images to obtain a training set, the training set comprising multiple training samples and sample labels corresponding to the training samples;
Step S2: training a deep learning model on the training set to obtain a leaf segmentation model;
Step S3: inputting an image to be segmented that contains a leaf into the leaf segmentation model, which outputs the segmented leaf image.
In this embodiment, the leaf image is a picture containing the plant leaf to be segmented. Preferably, the leaf in this picture is as flat and complete as possible, and the background is relatively clean. The leaf image may be captured with a camera or obtained by scanning.
In a preferred embodiment, in step S1, the step of constructing, from the leaf image, multiple three-dimensional leaf models with different poses and different colours specifically comprises:
Step S11: obtaining the leaf contour in the leaf image. Preferably, step S11 specifically comprises:
Step S111: converting each pixel of the leaf image to grey scale according to the following formula, to obtain a greyscale image:
gray = 0.3*R + 0.59*G + 0.11*B, where gray is the grey value of the pixel, and R, G and B are respectively the pixel's R-channel, G-channel and B-channel values in the leaf image;
Step S112: setting a grey threshold and applying the following test to every pixel of the greyscale image, to obtain a binary image:
if the grey value of a pixel is greater than the grey threshold, the pixel is considered a leaf pixel, i.e. a pixel belonging to the leaf in the image; if the grey value of a pixel is less than or equal to the grey threshold, the pixel is considered a background pixel.
Converting the greyscale image to a binary image makes the leaf contour clearer.
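Steps S111 and S112 can be sketched directly; the grey-scale formula and the greater-than test are from the text, while the nested-list image representation and the function names are illustrative assumptions.

```python
# Sketch of steps S111-S112: weighted greyscale conversion
# gray = 0.3R + 0.59G + 0.11B, followed by global thresholding into a
# binary image (1 = leaf pixel, 0 = background pixel).

def to_gray(rgb_image):
    return [[0.3 * r + 0.59 * g + 0.11 * b for (r, g, b) in row] for row in rgb_image]

def binarize(gray_image, threshold):
    # gray > threshold -> leaf pixel (1); otherwise background (0)
    return [[1 if g > threshold else 0 for g in row] for row in gray_image]
```

The threshold itself is user-chosen per step S112; the patent does not fix a value.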
Step S113: extracting the leaf contour from the binary image. A specific procedure for accurate contour extraction is as follows:
A movable grid cell is placed on the binary image. The cell has four vertices, each storing the binary value of the underlying pixel of the binary image; the colour of a cell vertex corresponds to that binary value: if the pixel's binary value is 1 the vertex is black, and if it is 0 the vertex is white.
Starting from the leftmost point of the leaf region of the binary image, the cell is moved counter-clockwise along the leaf edge until it returns to the starting point. In each cell, connecting the midpoints of the two cell edges adjacent to the one or two black vertices yields a leaf contour segment; all the leaf contour segments together constitute the leaf contour, and the segment endpoints are the vertices of the leaf contour.
Fig. 2 shows, for one application scenario, a schematic diagram of obtaining leaf contour segments from grid cells; the sixteen possible cases are enumerated in Fig. 2. When none of the four cell vertices is black, no contour segment is generated; when a cell has black vertices, connecting the midpoints of the two edges adjacent to the one or two black vertices gives the contour segment. In cases 8 and 16, however, two contour segments would be generated, which must be corrected.
Therefore, when a cell has exactly two black vertices located at diagonally opposite corners, generating two contour segments in one cell is avoided by changing the cell size or by ignoring one of the black vertices.
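The per-cell rule of step S113 is a marching-squares-style lookup and can be sketched as follows; the cell representation, function name, and coordinate convention are illustrative assumptions.

```python
# Sketch of the per-cell contour rule: a cell's four corners hold binary
# values; each cell edge whose endpoints differ contributes its midpoint, and
# the two midpoints form the contour segment. The ambiguous diagonal cases
# (two black corners on a diagonal) are resolved as the patent suggests,
# by ignoring one of the black corners.

# corners in order: top-left, top-right, bottom-right, bottom-left
EDGES = [((0, 1), (0.5, 0.0)),   # top edge midpoint
         ((1, 2), (1.0, 0.5)),   # right edge midpoint
         ((2, 3), (0.5, 1.0)),   # bottom edge midpoint
         ((3, 0), (0.0, 0.5))]   # left edge midpoint

def cell_segment(corners):
    """Return the contour segment (pair of edge midpoints) for one cell, or None."""
    c = list(corners)
    if c in ([1, 0, 1, 0], [0, 1, 0, 1]):  # ambiguous diagonal: drop one corner
        c[c.index(1)] = 0
    points = [mid for (i, j), mid in EDGES if c[i] != c[j]]
    return (points[0], points[1]) if points else None
```

All-white and all-black cells produce no segment, matching the "no black vertex / four black vertices" cases of Fig. 2.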
Step S12 extracts vein as blade framework;Blade framework includes second level and/or three pole veins;Vein is in plant In blade construction, entire blade face is support, skeleton is played a part of, keeps the stability of plant leaf blade;;Vein has as branch The same hierarchical structure, master pulse is most thick, and lateral vein is secondly, beneath also tiny train of thought.Generally manually, along leaf The vein of piece selects several points in master pulse network, then connects these points, just obtained blade framework.
Step S13: the region enclosed by the leaf contour and the leaf skeleton constitutes the lamina; the lamina is gridded to obtain the planar leaf mesh model. Preferably, step S13 specifically comprises:
Step S131: compressing the vertices of the leaf contour. Preferably, step S131 specifically comprises:
Step S1311: let the vertex set of the leaf contour before compression be N1 = {n1_0, n1_1, n1_2, ..., n1_{m-1}}, where n1_p and n1_q are the vertices with indices p and q in N1, 0 ≤ p ≤ m-1, 0 ≤ q ≤ m-1, and m is the number of vertices in N1; let V be the vertex set of the leaf contour after compression. Initialise p = 0 and q = 2, and add vertex n1_0 to V.
Step S1312: judge whether q = m-1 holds. If it does, the contour-vertex compression ends. If not, connect vertices n1_p and n1_q to form the segment l_pq, compute the Euclidean distance to l_pq of every vertex n1_r satisfying p < r < q, r a positive integer, and denote the maximum of these distances D_rmax.
Step S1313: if D_rmax < T, set q = q + 1 and return to step S1312;
if D_rmax ≥ T, add the vertex n1_r attaining the maximum to V, set p = q - 1, q = q + 1, and return to step S1312.
T is the preset maximum permitted error between the contour before and after the change, 0 < T < 50; preferably, T = 30.
Using this contour-compression method reduces redundant nodes while leaving the plant leaf contour essentially unaffected. The larger T is, the more redundant points are removed. Fig. 3 shows the leaf contour shape for different values of T: from left to right, before compression, and after compression with T = 10, T = 20 and T = 30. It can be seen that even though the number of extracted feature points drops sharply, the contour changes little. This compression method effectively removes redundant vertices and avoids excessive subsequent computation, while guaranteeing that the contour shape is not distorted.
Step S132: choosing multiple sampling points on the leaf skeleton and the leaf contour, and connecting the sampling points of the contour and of the skeleton in sequence to form multiple polygons. Preferably, as shown in Fig. 4(a), a series of sampling points is chosen manually on the leaf skeleton and the leaf contour, and the skeleton and contour sampling points are connected in sequence to form multiple polygons.
Step S133: dividing each polygon into at least two triangles or quadrilaterals, and further subdividing the mesh formed by all or some of the triangles or quadrilaterals, to obtain the planar leaf mesh model. Step S133 specifically comprises:
Step S1331: dividing each polygon into at least two triangles with the Delaunay triangulation algorithm, giving the preliminary mesh division result shown in Fig. 4(b).
Step S1332: marking the leaf contour and the leaf skeleton as boundaries.
Step S1333: further subdividing the mesh formed by all or some of the triangles. For example, only the leaf edge (where curvature is relatively high) and insufficiently flat parts of the leaf may be subdivided, rather than the whole leaf, to reduce computation. The Loop subdivision method is preferred but not required; the following method may also be used, which specifically comprises:
Step A: for a vertex v of any leaf triangular mesh of step S1331, let N be the set of all triangle vertices in the neighbourhood of v, including v itself, and subdivide with respect to each vertex of N, specifically:
let v0 and v1 be two vertices of N, v0 ≠ v1; if the edge v0v1 is not a shared edge, no new vertex is inserted;
if the edge v0v1 is a shared edge, obtain the angle between the normals of the planes of the two triangles sharing v0v1. If the angle is greater than or equal to a first threshold, no new vertex is inserted on v0v1; if the angle is less than the first threshold, a new vertex v_new is inserted on v0v1, its position computed as a weighted average of v0, v1 and the opposite vertices (in the standard Loop edge rule, v_new = 3/8·(v0 + v1) + 1/8·(v2 + v3)),
where v_new is the position of the new vertex, and v2 and v3 are the positions of the vertices opposite the edge v0v1 in the two triangles sharing it; the first threshold is a preset value, with a range of 15 to 60 degrees, preferably 30 degrees.
Step B: for any vertex v of step S1331, let N be the set of all triangle vertices in the neighbourhood of v, including v itself, and compute the flatness S_v of v. If S_v < λ, the position of v needs no adjustment; if S_v ≥ λ, the position of v is adjusted to a weighted average of itself and its neighbours,
where v' is the adjusted position of v; v_j is the j-th vertex of N; β is the position-adjustment coefficient; k is the number of vertices in N; the flatness S_v of v is computed from the normal vector of v and the normals N_j of the k-1 triangles in the neighbourhood of v, N_j being the normal of the j-th of those triangles; and λ is the flatness threshold, 0 < λ < 1, preferably λ = 0.7.
Step S1334: generating a new mesh from the new vertices inserted in step S1333, the adjusted vertices and the original vertices.
Step S1335: executing steps S1333 and S1334 in a loop s times to further subdivide the mesh, s being a positive integer, preferably s = 3. The choice of s depends on the desired degree of subdivision, which can be judged from the maximum mesh-cell size.
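If the edge-split rule of step S1333 is indeed the standard Loop subdivision edge rule (a plausible reading, since the step names Loop subdivision and describes exactly the two opposite vertices v2 and v3 that the rule uses), the new edge vertex can be computed as below; treating it as Loop's rule is an assumption.

```python
# Standard Loop subdivision edge rule: the new vertex on an interior (shared)
# edge v0v1 is a weighted average of the edge endpoints and the two vertices
# v2, v3 opposite the edge in the adjacent triangles.

def loop_edge_point(v0, v1, v2, v3):
    """New edge vertex: 3/8 * (v0 + v1) + 1/8 * (v2 + v3)."""
    return tuple(0.375 * (a + b) + 0.125 * (c + d)
                 for a, b, c, d in zip(v0, v1, v2, v3))
```

For a symmetric pair of adjacent triangles the new vertex lands on the edge midpoint, which matches the intuition that flat regions subdivide without changing shape.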
Step S14: applying texture mapping to the planar leaf mesh model.
In the planar leaf mesh model obtained so far, the leaf has only a rough shape, with no colour and no relief. To obtain realistic leaf information, the leaf must be processed further, and the most important way of giving the leaf realism is texture mapping.
Texture mapping of the leaf maps leaf texture pixels (also known as UV coordinates) onto pixels in three-dimensional space; it can be thought of as pasting a picture onto a three-dimensional object. Pasting a texture onto a three-dimensional object normally needs fairly complex algorithms, because the texture deforms. Fortunately, the planar leaf mesh model obtained so far still lies in a two-dimensional plane, and the initial leaf image provides the leaf's texture material, so the texture can be pasted directly onto the leaf mesh. The texture mapping proceeds as follows:
The leaf texture is extracted from the leaf image and pasted onto the planar leaf mesh model. For any mesh vertex v = {x, y, z} of the planar leaf mesh model, the texture coordinates {X_texture, Y_texture, z} are
X_texture = (x − Xmin) / (Xmax − Xmin), Y_texture = (y − Ymin) / (Ymax − Ymin),
where Xmin = min{x1, x2, ..., xn}, Xmax = max{x1, x2, ..., xn}, Ymin = min{y1, y2, ..., yn}, Ymax = max{y1, y2, ..., yn}; {x, y, z} are the Euclidean coordinates of vertex v in the planar leaf mesh model; x1, x2, ..., xn are the x-axis coordinates of all mesh vertices of the planar leaf mesh model, and y1, y2, ..., yn the corresponding y-axis coordinates; and n is the number of mesh vertices in the model. Preferably, to prevent noise that may exist beyond the leaf edge from being mapped into the leaf mesh model during texture mapping, the pixels outside the leaf are set transparent using the texture's transparency channel.
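Under the natural reading of the Xmin/Xmax/Ymin/Ymax definitions in step S14, each planar mesh vertex is normalised into the [0, 1] × [0, 1] UV square spanned by the mesh's bounding box, so the leaf texture maps directly onto the planar grid. A sketch, with an illustrative function name:

```python
# Sketch: bounding-box normalisation of planar mesh vertices into UV space.

def uv_coords(vertices):
    """Map planar mesh vertices (x, y, z) to (u, v, z) texture coordinates."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    return [((x - x_min) / (x_max - x_min),
             (y - y_min) / (y_max - y_min), z) for (x, y, z) in vertices]
```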
Step S15: choosing at least one operating point on the leaf contour and/or leaf skeleton of the texture-mapped planar leaf mesh model; moving the operating points in different ways so that the leaf contour and leaf skeleton undergo different deformations; obtaining the coordinates of the operating points before and after deformation; and deforming the texture-mapped planar leaf mesh model with the Laplacian deformation algorithm, to obtain three-dimensional leaf mesh models with multiple different poses. Step S15 specifically comprises:
Step S151: choosing at least one operating point on the leaf contour and/or leaf skeleton. As shown in Fig. 5(a), let the operating points be Q_w, W in number, with w the index of an operating point, 1 ≤ w ≤ W. The root node of Q_w is Q_{w-1}; the vectors Q_wQ_{w-1} and Q_{w-1}Q_N span a plane P, with Q_{w-1}Q_N as the Z axis; the vector Q_{w-1}Q_M is perpendicular to plane P; and the operating point Q_w moves by rotating about the vector Q_{w-1}Q_M. Let the rotation angle be θ, θ ≈ t; the position of the operating point Q_w after the movement is
Q(t) = k(t) * (Q_w + t*(Q_N − Q_w)),
where k(t) = |Q_w| / |Q_w + t*(Q_N − Q_w)| and 0 ≤ t ≤ 1. By setting different parameters t, the operating points can be moved in different ways, causing different deformations of the leaf contour and leaf skeleton, and the coordinates of the contour vertices and of the skeleton feature points after each deformation are obtained. Fig. 5(b) gives an example of deforming the leaf contour or leaf skeleton; in practice, multiple different θ can be set, so that leaves of different shapes are obtained.
Step S152: let the mesh vertex set of the planar leaf mesh model be V = {v1, v2, ..., vn}, 1 ≤ i ≤ n, with v_i the coordinates of the i-th vertex of V; let N be the set of vertices adjacent to v_i, with v_j the coordinates of the j-th vertex of N. The Laplacian coordinate of vertex v_i is
δ_i = v_i − Σ_{j∈N} w_ij v_j,
where w_ij is the weight of the edge (v_i, v_j), proportional to cot α + cot β and normalised so that Σ_{j∈N} w_ij = 1, α and β being the two angles opposite the edge (v_i, v_j) in the two triangles adjacent to it. The Laplacian coordinate can be regarded as a piece of information carried by the vertex, representing an approximation of its mean normal curvature; since it is the deviation of the point from the weighted average of all its adjacent points, it expresses the local detail at the point.
Laplacian coordinates can also be written in matrix form. Based on the formula above, the matrix expression is
L(x', y', z') = L × V(x, y, z),
where L(x', y', z') is the n × 3 matrix of Laplacian coordinates corresponding to the mesh vertex coordinates of the planar leaf mesh model; V(x, y, z) is the n × 3 matrix of Euclidean coordinates of the mesh vertices; and L is the n × n Laplacian matrix, which can be expressed piecewise as
L_ij = 1 if i = j; L_ij = −w_ij if (v_i, v_j) ∈ E′; L_ij = 0 otherwise,
where E′ is the edge set over the vertices v_i and v_j; the rank of the matrix L is n − 1.
An error function is constructed so that the mean-square error of the features before and after deformation is minimal:
E(V) = Σ_i ||δ_i − L(v_i)||² + Σ_{Q_w ∈ Handles} k_w ||v_w − Q_w||²,
where Q_w is the w-th operating point, Handles is the set of all operating points, and k_w is the weight of operating point Q_w, 0 < k_w < 5. The coordinates V that minimise the error function are found by solving the stacked system
A V = B, with A = [L; H] and B = [δ; h_{W×3}],
where H is a W × n sparse matrix, each row of which contains a single non-zero element h_ii = 1, the corresponding v_i being an operating point; and h_{W×3} is the matrix formed by the deformed coordinates of all the operating points, each multiplied by the corresponding operating-point weight. The system is solved by least squares:
AᵀA V = AᵀB,
giving the coordinates V, i.e. the deformed coordinates of the vertices of the planar leaf mesh model, Aᵀ being the transpose of A. Step S152 is repeated until the deformed coordinates of all the points of the planar leaf mesh model have been obtained, yielding three-dimensional leaf mesh models with multiple different poses. The Laplacian deformation algorithm keeps information such as the leaf's colour and texture as unchanged as possible, so that detail is not lost. Fig. 6 shows the result of deforming a leaf mesh.
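The differential coordinate δ_i = v_i − Σ w_ij v_j that the deformation solver tries to preserve can be sketched with uniform weights w_ij = 1/k in place of the patent's normalised cotangent weights; the uniform weighting and the function name are illustrative simplifications.

```python
# Sketch: Laplacian (differential) coordinate of a vertex with uniform
# neighbour weights. delta encodes the vertex's offset from the average of
# its neighbours, i.e. the local surface detail preserved under deformation.

def laplacian_coord(v, neighbours):
    k = len(neighbours)
    centroid = [sum(n[d] for n in neighbours) / k for d in range(len(v))]
    return tuple(vd - cd for vd, cd in zip(v, centroid))
```

For a vertex lifted above the plane of its neighbours, δ points along the surface normal, which is why it approximates the mean-curvature normal mentioned in the text.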
Step S16 carries out coloring treatment to the blade three-dimensional grid model of different postures and obtains multiple and different postures, difference The leaf three-dimensional model of color.
It after obtaining the three-dimensional grid model of blade, needs to paint to it, to obtain the three-dimensional leaf of color multiplicity Piece model.Eyes see the color of plant leaf blade, are because mirror-reflection has occurred on blade, overflows when light is shone on blade Reflection and transmission.So having understood the optical characteristics of blade, the illumination model of blade could be modeled, to generate blade Multiple color model enrich training sample data.
Obtain the bidirectional reflectance distribution function BRDF and two-way transmission distribution letter of the blade three-dimensional grid model of different postures Number BTDF simultaneously generates irradiance rate texture;Irradiance rate texture is handled using Gaussian Blur algorithm, processing result is obtained, will locate Reason result combined bidirectional Reflectance Distribution Function BRDF and two-way transmission distribution function BTDF obtains the blade three dimensional network of different postures The coloring models of lattice model;Adjust illumination and blade ginseng at random in the coloring models of the blade three-dimensional grid model of different postures Number obtains the multiple and different postures of blade and different colours leaf three-dimensional models.
Step S16 specifically comprises:
Step S161, obtaining the reflection component fr of the shading model of the plant leaf, the sum of a diffuse term and a specular term:
fr = frdiff + frspec;
wherein frdiff is the diffuse reflection component and frspec is the specular reflection component;
The calculation formula of the diffuse reflection component frdiff is:
frdiff = kd·Drgb·η(x), wherein kd is an adjustment parameter for the diffuse reflectance and light intensity, set by the user; η(x) denotes the normalised diffuse texture; Drgb is the RGB vector of the leaf BRDF, determined by the chlorophyll, carotene and structural parameters of the leaf;
The calculation formula of the specular reflection component frspec is:
wherein F is the Fresnel coefficient, the visible reflection effect differing as the angle between the viewing direction and the leaf changes; Dbeckmann is the Beckmann distribution term; G is the occlusion term; θi and θv denote the incidence angle and reflection angle of the light, respectively;
Step S162, obtaining the transmission component ft of the shading model of the plant leaf:
ft = kt·T′rgb·γ(x)·e−h;
wherein T′rgb = (0.9g, g, 0.2g), g denoting the green channel of T′rgb; kt is a parameter adjusting the diffuse reflectance and light intensity, set by the user; γ(x) is the normalised transmission texture; h denotes the thickness of the leaf;
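As an illustrative sketch (not part of the patent text), the two components above can be evaluated directly; the function names and sample parameter values are assumptions, only the formulas frdiff = kd·Drgb·η(x) and ft = kt·T′rgb·γ(x)·e−h come from the text:

```python
import numpy as np

def diffuse_component(k_d, D_rgb, eta_x):
    """fr_diff = k_d * D_rgb * eta(x)  (step S161, diffuse term)."""
    return k_d * np.asarray(D_rgb, dtype=float) * eta_x

def transmission_component(k_t, g, gamma_x, h):
    """ft = k_t * T'_rgb * gamma(x) * exp(-h), with T'_rgb = (0.9g, g, 0.2g)."""
    T_rgb = np.array([0.9 * g, g, 0.2 * g])
    return k_t * T_rgb * gamma_x * np.exp(-h)
```

With unit parameters and zero thickness, the transmission component reduces to T′rgb itself.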
Step S163, generating an irradiance texture from the reflection component fr and transmission component ft of the leaf; processing the generated irradiance texture with a Gaussian blur algorithm to obtain the subsurface scattering component of the leaf; and combining the subsurface scattering component with the reflection component fr and the transmission component ft to obtain the shading model. In the convolution of the Gaussian blur algorithm, the present invention uses seven convolutions, and the kernel coefficients are respectively {0.006, 0.061, 0.242, 0.383, 0.242, 0.061, 0.006}.
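The blur of step S163 can be sketched as follows; the separable two-pass structure, the edge padding and the image shape are assumptions, while the seven kernel coefficients are those listed above:

```python
import numpy as np

# 7-tap kernel from step S163 (the coefficients sum to ~1, preserving brightness)
KERNEL = np.array([0.006, 0.061, 0.242, 0.383, 0.242, 0.061, 0.006])

def gaussian_blur(texture):
    """Apply the 1-D kernel along rows, then columns (separable blur)."""
    pad = len(KERNEL) // 2
    out = np.apply_along_axis(
        lambda row: np.convolve(np.pad(row, pad, mode="edge"), KERNEL, "valid"),
        axis=1, arr=np.asarray(texture, dtype=float))
    out = np.apply_along_axis(
        lambda col: np.convolve(np.pad(col, pad, mode="edge"), KERNEL, "valid"),
        axis=0, arr=out)
    return out
```

Because the taps sum to approximately 1, a constant irradiance texture passes through nearly unchanged.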
Step S164, randomly adjusting the colour and intensity of the light and the transmittance of the leaf in the shading model to obtain multiple colour models, and superimposing these colour models on different leaf 3D mesh models to obtain leaf 3D models of multiple different postures and colours. The parameters of the illumination and the leaf, such as the colour and intensity of the light and the transmittance of the leaf, are adjusted at random within a certain range; in this way, leaf 3D models of multiple different colours can be obtained from the same leaf model.
Step S1 further comprises: projecting the leaf 3D models onto a two-dimensional plane to generate leaf 2D image data, and fusing the leaf 2D image data with different background images to obtain a training set, the training set comprising multiple training samples and sample labels corresponding to the training samples.
Since leaf 3D models cannot be trained on directly in the segmentation network of a deep learning model, segmentation images and sample labels are required. A sample label has the same size as the corresponding sample image in the training set and is black and white, the white region being the leaf area, representing the position of the leaf in the corresponding sample image. The 3D model of the leaf is first projected onto a two-dimensional plane, but the data so generated contain only the leaf with no background information and are too uniform, so multiple complex backgrounds must be fused in. Herein the Laplacian pyramid algorithm is used to fuse the backgrounds with the leaf 2D image data, which solves the problem of the leaf edge looking very abrupt when the leaf is pasted directly into a background image. Since the position of the leaf is known during fusion, the sample label can be generated at the same time; the whole data-generation process is fully automatic and requires no manual involvement.
Projecting the 3D model of the leaf onto a two-dimensional plane is in fact a transformation from the world coordinate system to the pixel coordinate system, with the conversion formula:
Zc·[u, v, 1]ᵀ = K·[R t]·[Xw, Yw, Zw, 1]ᵀ;
wherein the pixel coordinate system is the coordinate system describing pixel positions in the 2D image, denoted O0-uv, with axes u and v; the image coordinate system gives the physical position of a pixel in the image plane, usually in millimetres, and is defined as O1-xy, its x axis parallel to the u axis of the pixel coordinate system and its y axis parallel to the v axis; the camera coordinate system gives the 3D coordinates of a point in space relative to the camera optical centre Oc. Assume the camera coordinate system is Oc-XcYcZc with shooting direction Zc, so that the shooting direction is perpendicular to the pixel coordinate plane and intersects it at O1; the Xc axis has the same direction as the u axis of the image-plane coordinate system, and the Yc axis the same direction as the v axis. The world coordinate system, Ow-XwYwZw, expresses the coordinate position of each vertex of the leaf mesh in the three-dimensional world, its position being relative to other objects. fx and fy are the focal lengths of the camera; K is a 3 × 3 matrix, the camera intrinsic matrix, whose elements fx, fy, u0 and v0 are intrinsic parameters: each camera has its own intrinsics, which depend on the camera itself. R and t are the extrinsic parameters, R being the rotation parameter and t the translation parameter, equivalent to the pose of the camera relative to the world coordinate system.
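A minimal sketch of this world-to-pixel projection, assuming the standard pinhole form with intrinsics K and extrinsics R, t as defined above; the function name and the test values are illustrative:

```python
import numpy as np

def project_to_pixels(X_w, K, R, t):
    """Project world-coordinate vertices (N, 3) to pixel coordinates (N, 2)."""
    X_c = X_w @ R.T + t              # world -> camera: X_c = R X_w + t
    x = X_c[:, 0] / X_c[:, 2]        # perspective division by depth Z_c
    y = X_c[:, 1] / X_c[:, 2]
    u = K[0, 0] * x + K[0, 2]        # u = fx * x + u0
    v = K[1, 1] * y + K[1, 2]        # v = fy * y + v0
    return np.stack([u, v], axis=1)
```

Varying R and t here corresponds to the extrinsic adjustment described below, by which thousands of 2D views can be rendered from one leaf model.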
For one leaf 3D model, thousands of 2D images can be generated by adjusting the camera extrinsics, i.e. the rotation and translation parameters. As shown in Fig. 7, the 3D leaf model generates 2D image data of different sizes and inconsistent viewing angles.
The background of the leaf 2D image data is a solid colour and cannot be used directly to build segmentation data. In order to synthesise the segmentation data set, i.e. the samples of the training set, background pictures are selected at random and fused in. Since the pixel coordinates of the leaf in the 2D image data are known, the sample labels can be made directly. The detailed fusion process is as follows:
1. Build Gaussian pyramids of the plant-leaf 2D image data and of the background to be fused, then build their B-level Laplacian pyramids.
2. Set a leaf-shaped mask of the same size as the leaf 2D image data, indicating the position to be fused.
3. Merge the Laplacian pyramids of the leaf 2D image data and the background picture according to the mask, obtaining a new pyramid.
4. Reconstruct from the result of step 3 in the same way as for a Laplacian pyramid, except that each level is upsampled and then merged with the next level's image, obtaining the leaf 2D image with synthesised background (i.e. the sample) and the sample label, as shown in Fig. 8, where the first and second rows are sample images and the third and fourth rows are the corresponding sample labels.
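The four fusion steps above can be sketched with a toy Laplacian pyramid blend; the nearest-neighbour down/upsampling pair and the default of 3 levels are simplifying assumptions (real implementations filter before resampling):

```python
import numpy as np

def downsample(img):
    return img[::2, ::2]

def upsample(img, shape):
    # nearest-neighbour 2x upsampling, cropped to the target shape
    return np.kron(img, np.ones((2, 2)))[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    pyr, cur = [], np.asarray(img, dtype=float)
    for _ in range(levels - 1):
        down = downsample(cur)
        pyr.append(cur - upsample(down, cur.shape))  # detail level
        cur = down
    pyr.append(cur)                                  # coarsest level
    return pyr

def blend(leaf, background, mask, levels=3):
    """Steps 1-4: per-level masked merge of the two pyramids, then rebuild."""
    lp_a = laplacian_pyramid(leaf, levels)
    lp_b = laplacian_pyramid(background, levels)
    gp_m = [np.asarray(mask, dtype=float)]
    for _ in range(levels - 1):
        gp_m.append(downsample(gp_m[-1]))
    fused = [m * a + (1 - m) * b for a, b, m in zip(lp_a, lp_b, gp_m)]
    out = fused[-1]
    for lap in reversed(fused[:-1]):                 # step 4: upsample + merge
        out = upsample(out, lap.shape) + lap
    return out
```

With an all-ones mask the pipeline reconstructs the leaf image exactly, which is a convenient sanity check on the pyramid pair.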
Step S2, training a deep learning model with the training set to obtain a leaf segmentation model;
Step S3, inputting an image to be segmented that contains a leaf into the leaf segmentation model, which outputs the segmented leaf image.
The deep learning model in the present invention is based on the fully convolutional network (FCN) model with the following improvements: dilated convolution is used to address the poor segmentation caused by the shrinking feature maps in the FCN model; a parallel structure of dilated convolutional layers (ASPP) is used to address the multi-scale problem of objects; and conditional random field (CRF) techniques are used because the segmented edge information is not accurate enough. Finally a DeepLab-ASPP network model combining FCN with ASPP and CRF is obtained, whose segmentation quality reaches an unprecedented level.
In a fully convolutional network, after multiple convolution and pooling operations the image becomes smaller and smaller and its resolution lower and lower; to obtain a segmentation map of the same size as the original image, the image is upsampled by deconvolution, deconvolution being the opposite operation of convolution in the forward or backward pass.
On top of deconvolution upsampling, a skip structure is used, i.e. the results of earlier network layers are fused in to obtain a finer segmentation result. Take the VGG-16 network: there are 5 pooling operations in total, and after each pooling the image becomes half its size. After the 3rd pooling the image is 1/8 of the original, and this result is saved; after the 4th pooling it becomes 1/16, and after the last pooling 1/32; the result at this point is called the heatmap. It then passes through 3 convolutional layers. If the 1/32-size heatmap were directly upsampled 32 times, the result would not be fine enough, because some fine details are lost during pooling. Therefore the saved 1/8 feature map, the 2x upsampling of the 1/16 feature map and the 4x upsampling of the 1/32 heatmap are fused together, giving a 1/8-resolution segmentation result.
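The skip-structure fusion described above can be sketched with toy feature maps; nearest-neighbour upsampling stands in for learned deconvolution, and the shapes and function names are illustrative:

```python
import numpy as np

def upsample2x(f):
    # nearest-neighbour 2x upsampling, standing in for a deconvolution layer
    return np.kron(f, np.ones((2, 2)))

def skip_fuse(f8, f16, f32):
    """Fuse the saved 1/8 map with the upsampled 1/16 map and 1/32 heatmap,
    then upsample the fused 1/8 result 8x back to input resolution."""
    fused16 = f16 + upsample2x(f32)          # 2x the heatmap, add the 1/16 map
    fused8 = f8 + upsample2x(fused16)        # 2x again, add the saved 1/8 map
    return np.kron(fused8, np.ones((8, 8)))  # final 8x back to full size
```

The point of the sketch is only the bookkeeping: two stages of 2x fusion followed by one 8x upsample reproduce the 32x total factor of the heatmap.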
The later pooling layers in the fully convolutional network are removed and replaced with dilated convolution; replacing at least one convolutional layer by a dilated convolutional layer avoids the feature loss brought by downsampling. The calculation formula of dilated convolution, expressing the two-dimensional output feature y in terms of the input feature x, is:
y[i] = Σk x[i + r·k]·w[k];
wherein r is the sampling rate controlling the input features, which can be understood as inserting zeros between every two input features before convolving again; w is the convolution kernel, whose size may be 3×3 or 5×5, and k indexes its positions.
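A one-dimensional sketch of the dilated-convolution formula y[i] = Σk x[i + r·k]·w[k]; the 1-D restriction and the valid-only output range are assumptions made for brevity:

```python
import numpy as np

def dilated_conv1d(x, w, r):
    """y[i] = sum_k x[i + r*k] * w[k], at positions where the kernel fits."""
    k = len(w)
    span = r * (k - 1)                      # receptive field grows with rate r
    return np.array([sum(w[j] * x[i + r * j] for j in range(k))
                     for i in range(len(x) - span)])
```

With r = 1 this reduces to an ordinary convolution; a larger r samples the input more sparsely, enlarging the receptive field without pooling.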
For the multi-scale problem of the target leaves, that is, leaves of different sizes appearing in the image, multiple parallel dilated convolutional layers with different sampling rates are used to solve multi-scale segmentation. This is equivalent to applying several different convolutions to the original image and obtaining more context information at multiple scales. This parallel structure of dilated convolutional layers is called the ASPP module. Fig. 9 shows the general flow of the atrous spatial pyramid: 4 parallel dilated convolutions sample the features of the previous layer simultaneously, each branch with a different rate r so as to extract different features. In general, the smaller r is, the more local features are extracted; the larger r is, the more global information is extracted.
A fully convolutional network can segment plant leaves, but the segmentation result is imperfect at the leaf edges. The present invention uses a conditional random field (CRF) to take the relationships between pixels into account, improving the FCN's disregard of neighbouring-pixel information and further processing the segmentation result.
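The ASPP idea, several parallel dilated convolutions with different rates r whose outputs are fused, can be sketched in 1-D; the summation fusion, the 'same' padding and the rate set are assumptions:

```python
import numpy as np

def dilated_conv1d_same(x, w, r):
    """Dilated convolution with zero padding so the output length matches x."""
    pad = r * (len(w) - 1) // 2
    xp = np.pad(np.asarray(x, dtype=float), pad)
    return np.array([sum(w[j] * xp[i + r * j] for j in range(len(w)))
                     for i in range(len(x))])

def aspp(x, w, rates=(1, 2, 4, 8)):
    """Four parallel dilated branches over the same feature, fused by summation;
    small rates capture local detail, large rates capture global context."""
    return sum(dilated_conv1d_same(x, w, r) for r in rates)
```

Each branch sees the same input at a different effective scale, which is the multi-scale behaviour the module is meant to provide.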
Let the random variables X = {X1, X2, ..., XN} denote the data set and Y = {Y1, Y2, ..., YN} the labels, N being the size of the data set. In leaf segmentation, Xi is the pixel value of the i-th pixel and Yi the label value of the i-th pixel, taking values in L = {l1, l2, ..., lk}. For a graph G = (V, E) and Y = (yi), where G is an undirected probabilistic graph, V denotes the random variables, E denotes the dependencies between them and yi is the random variable of the i-th pixel: if, given X, the conditional probability of Y obeys P(yi | X, yj, j ≠ i) = P(yi | X, yj, j ~ i), where j ~ i denotes an undirected edge from j to i in G (the Markov property), then (X, Y) constitutes a conditional random field P(Y | X), which can be expressed as a Gibbs distribution:
wherein Z(X) is the normalising term and the potential function of each clique c on the graph G is ψc(yc | X). For a segmentation labelling result y ∈ L, the Gibbs energy can be expressed by the following equation:
E(y | X) = Σc ψc(yc | X);
P(y | X) is the posterior probability; to obtain the best segmentation result, the best segmentation labelling y* maximises the objective P(y | X), so the Gibbs energy is minimised, that is:
y* = argmin E(y | X);
The essence of constructing the CRF model is the weighted sum of all potential functions; here only first-order and second-order potentials are considered, with the corresponding Gibbs energy function:
E(y | X) = Σi ψi(yi | X) + Σi&lt;j ψij(yi, yj | X);
Writing ψc(yc) for ψc(yc | X), the above expression can be converted into:
E(y) = Σi ψi(yi) + Σi&lt;j ψij(yi, yj);
ψi(yi) is the first-order (unary) potential, expressing the dissimilarity between the i-th pixel, whose value is Xi, and its class; it relates only to the pixel itself. Combined with the FCN output, ψi(yi) can be computed as:
ψi(yi) = −ln(PFCN(yi = lk)); PFCN(yi = lk) is the probability that the prediction for the i-th pixel is lk when the image is segmented with the FCN.
The FCN model, however, does not account for the consistency of the label distribution or the smoothness of the segmentation edges. To further exploit the information between pixels of the image, if the pixel values Xi, Xj of neighbouring pixels are close, they are considered likely to belong to the same class, so a second-order (pairwise) potential is introduced here to take the information between pixels into account. The second-order potential can be defined as a weighted sum of Gaussian kernels:
wherein μ(yi, yj) is a penalty term: the higher the similarity of the two features, the larger the penalty. For an image, the closer two pixels are and the more similar their pixel values, the higher the penalty if their classes differ; μ is 1 when the labels of the i-th and j-th pixels differ and 0 otherwise. w(m) is the weight of the corresponding Gaussian kernel. Each km uses a Gaussian kernel k(m)(fi, fj) = exp(−½ (fi − fj)ᵀ Λ(m) (fi − fj)), Λ(m) being the precision matrix of the Gaussian kernel; the feature vectors fi, fj can be composed of the coordinates and RGB values of the pixels, and further simplification gives a kernel of the form exp(−|pi − pj|²/(2λα²) − |Ii − Ij|²/(2λβ²)):
wherein pi is the position of the i-th pixel and pj of the j-th; Ii is the colour of the i-th pixel and Ij of the j-th; λα and λβ control the sensitivity to the distances between pixels on the image in position and colour. To reduce the computational complexity, the mean-field approximation is used, approximating P(Y | X) by a factorised distribution Q(X); the KL divergence D(Q‖P), equivalent to the difference of the information entropies of the two distributions, is minimised, and combining this with the method of Lagrange multipliers gives:
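A sketch of the pairwise-term ingredients above: the Potts penalty μ and the appearance (bilateral) Gaussian kernel controlled by λα and λβ; treating the fused kernel as this single bilateral term is a simplification, and the function names are illustrative:

```python
import numpy as np

def bilateral_kernel(p_i, p_j, I_i, I_j, lam_alpha, lam_beta):
    """Appearance kernel: exp(-|p_i-p_j|^2/(2*la^2) - |I_i-I_j|^2/(2*lb^2))."""
    pos = np.sum((np.asarray(p_i) - np.asarray(p_j)) ** 2) / (2 * lam_alpha ** 2)
    col = np.sum((np.asarray(I_i) - np.asarray(I_j)) ** 2) / (2 * lam_beta ** 2)
    return np.exp(-pos - col)

def potts(y_i, y_j):
    """mu(y_i, y_j): 1 when the two labels differ, 0 otherwise."""
    return 1.0 if y_i != y_j else 0.0
```

Nearby pixels with similar colours give a kernel value near 1, so assigning them different labels incurs a large penalty, exactly the smoothness behaviour the pairwise term encodes.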
It can be seen from the above formula that the computation of the posterior P(Y | X) is transformed into computing the posteriors Qi(Xi), and f(i) is exactly a bilateral filtering operation on the i-th pixel.
As shown in Fig. 10, the mean-field algorithm of the CRF is converted into ordinary neural-network layers, all of whose parameters are obtained by training. The parameter I is the input image; combined with the FCN output result U, the result Qout produced by the later stages serves as the next input Qin, iterating continuously until convergence.
Initialisation stage:
In the first iteration, the output Q, the probability that the i-th pixel belongs to the l-th class, is calculated as:
Qi(l) = (1/Zi)·exp(Ui(l));
wherein U is the FCN output, Ui(l) denoting the FCN output for the i-th pixel and label l; Zi is the normalising constant. This layer can be regarded as the Softmax layer of a convolutional neural network (CNN).
The probability calculation stage:
This stage is the first stage in each iteration, applying Gaussian filtering to the Q values. The filter coefficients are derived from image features such as pixel coordinates and RGB values; the superscripts on the kernels serve only to distinguish them and have no other meaning. Since the CRF may be fully connected, computing the Gaussian convolution kernels directly would be too costly, so the Gaussian filtering of the Q values, i.e. convolution with Gaussian kernels in feature space, is computed approximately according to the following expression:
The weight adjusting stage:
This stage performs a weighted summation over the probability-computation stage, with equation:
wherein M is the number of Gaussian kernels.
The class switching stage:
This layer is the class-switching layer and can be regarded as a 1×1 convolutional layer:
Probability integration stage:
This layer combines the FCN output with the class-switching output, integrating the probabilities of assigning each label to the i-th pixel; it can be expressed by the mathematical equation:
The normalization stage:
The input is normalised to facilitate the next iteration:
wherein Qi is the normalised output and the input is the result of the probability-integration stage.
The mean-field approximation can be realised by repeating the network layers above: the input of each iteration of the network layers is the output of the previous iteration together with the original form of the unary potential. This amounts to treating mean-field iterative inference as a recurrent neural network (RNN); the iterative algorithm can be expressed as follows:
H2(t) = fθ(U, H2(t−1), I), 0 &lt; t ≤ T;
The formulas above are the mean-field iterative algorithm of the CRF in a recurrent neural network, t being the iteration number of the recurrent network and U the result of the fully convolutional network. Experiments show that when T is about 10 the mean-field iteration has essentially converged and the segmentation quality is best. Y(t) is the output of the t-th iteration.
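A toy mean-field loop matching the staged description above (initial softmax, Gaussian message passing, unary combination, normalisation); representing the pairwise filtering by a fixed (N, N) kernel matrix and the sign convention for the message are simplifying assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mean_field(unary, pairwise, T=10):
    """unary: (N, L) scores from the FCN (U); pairwise: (N, N) Gaussian
    kernel weights between pixels (diagonal assumed zero).
    Each iteration: message passing, unary combination, normalisation."""
    Q = softmax(unary)                 # initialisation: the Softmax layer
    for _ in range(T):
        msg = pairwise @ Q             # Gaussian filtering / message passing
        Q = softmax(unary + msg)       # agreeing neighbours reinforce a label
    return Q
```

With T around 10 the loop mirrors the converged behaviour reported above; with a zero pairwise kernel it reduces to the FCN softmax alone.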
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example" or "some examples" means that a particular feature, structure, material or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example; moreover, the particular features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described, those skilled in the art will understand that various changes, modifications, substitutions and variations may be made to these embodiments without departing from the principle and purpose of the invention; the scope of the invention is defined by the claims and their equivalents.

Claims (9)

1. A plant leaf segmentation method using synthetic data, characterised by comprising:
Step S1, constructing, based on multiple leaf images, leaf 3D models of different postures and different colours;
projecting the leaf 3D models onto a two-dimensional plane to generate leaf 2D image data, and fusing the leaf 2D image data with different background images to obtain a training set, the training set comprising multiple training samples and sample labels corresponding to the training samples;
Step S2, training a deep learning model with the training set to obtain a leaf segmentation model;
Step S3, inputting an image to be segmented that contains a leaf into the leaf segmentation model, which outputs the segmented leaf image.
2. The plant leaf segmentation method using synthetic data of claim 1, characterised in that in step S1, the step of constructing, based on a leaf image, leaf 3D models of the leaf image in multiple different postures and colours comprises:
Step S11, obtaining the leaf contour in the leaf image;
Step S12, extracting the veins as the leaf skeleton, the leaf skeleton comprising second-order and/or third-order veins;
Step S13, taking the region enclosed by the leaf contour and the leaf skeleton as the leaf-face part and meshing it to obtain a leaf planar mesh model;
Step S14, performing texture-mapping processing on the leaf planar mesh model, specifically:
extracting the leaf texture from the leaf image and attaching it to the leaf planar mesh model, where for any vertex v = {x, y, z} on the mesh of the leaf planar mesh model the texture coordinates {Xtexture, Ytexture, z} are:
Xtexture = (x − Xmin)/(Xmax − Xmin), Ytexture = (y − Ymin)/(Ymax − Ymin);
wherein Xmin = min{x1, x2, ..., xn}, Xmax = max{x1, x2, ..., xn}, Ymin = min{y1, y2, ..., yn}, Ymax = max{y1, y2, ..., yn}; {x, y, z} denotes the Euclidean coordinates of vertex v in the leaf planar mesh model; x1, x2, ..., xn denote the x-axis coordinates of all vertices on the mesh of the leaf planar mesh model and y1, y2, ..., yn the y-axis coordinates; n denotes the number of vertices on the mesh of the leaf planar mesh model;
Step S15, choosing at least one operating point on the leaf contour and/or leaf skeleton of the texture-mapped leaf planar mesh model, applying different movements to the operating point so that the leaf contour and leaf skeleton undergo different deformations, obtaining the coordinates of the operating point before and after deformation, and deforming the texture-mapped leaf planar mesh model with the Laplacian deformation algorithm to obtain leaf 3D mesh models of multiple different postures;
Step S16, colouring the leaf 3D mesh models of different postures to obtain leaf 3D models of multiple different postures and different colours.
3. The plant leaf segmentation method using synthetic data of claim 2, characterised in that step S11 comprises:
Step S111, performing greyscale processing on each pixel of the leaf image according to the following formula to obtain a grey image:
gray = 0.3*R + 0.59*G + 0.11*B, wherein gray is the grey value of the pixel and R, G, B are respectively its R-channel, G-channel and B-channel values in the leaf image;
Step S112, setting a grey threshold and making the following judgement for all pixels of the grey image to obtain a binary image:
if the grey value of a pixel is greater than the grey threshold, the pixel is considered a leaf pixel; if the grey value is less than or equal to the grey threshold, the pixel is considered a background pixel;
Step S113, extracting the leaf contour from the binary image, specifically:
setting a movable grid on the binary image, the colour of a grid vertex corresponding to the binary value of the pixel at that position: if the pixel binary value is 1 the vertex is black, and if it is 0 the vertex is white;
moving anticlockwise along the leaf edge starting from the leftmost side of the leaf portion of the binary image until the starting point is reached again, and in each grid cell connecting the midpoints of the two sides adjacent to the one or two black vertices to obtain leaf contour segments; all the leaf contour segments constitute the leaf contour, the endpoints of the segments being the vertices of the leaf contour;
when the black vertices of a grid cell are two in number and lie at diagonal corners of the cell, generating two leaf contour segments in one cell is avoided by changing the grid size or ignoring one of the black vertices.
4. The plant leaf segmentation method using synthetic data of claim 2, characterised in that step S13 comprises:
Step S131, compressing the vertices of the leaf contour;
Step S132, choosing multiple sampling points from the leaf skeleton and the leaf contour, and connecting the sampling points of the leaf contour and of the leaf skeleton in order to form multiple polygons;
Step S133, dividing each polygon into at least two triangles or quadrilaterals, and further subdividing the mesh formed by all or some of the triangles or quadrilaterals to obtain the leaf planar mesh model.
5. The plant leaf segmentation method using synthetic data of claim 4, characterised in that step S133 comprises:
Step S1331, dividing the polygon into at least two triangles by the Delaunay triangulation algorithm;
Step S1332, marking the leaf contour and the leaf skeleton as boundaries;
Step S1333, further subdividing the mesh formed by all or some of the triangles, specifically comprising:
Step A: for a vertex v of any leaf triangular mesh in step S1331, let the set N be the set of all triangle vertices comprising vertex v and the vertices in the neighbourhood of v, and subdivide with respect to each vertex in N, specifically:
if vertices v0, v1 are two vertices in the vertex set N with v0 ≠ v1 and the edge v0v1 is not a shared edge, no new vertex is inserted;
if the edge v0v1 is a shared edge, obtain the angle between the normals of the planes of the two triangles sharing the edge v0v1; if the angle is greater than or equal to a first threshold, no new vertex is inserted on the edge v0v1; if the angle is less than the first threshold, a new vertex vnew is inserted on the edge v0v1, its position coordinates calculated as follows:
wherein vnew denotes the position coordinates of the new vertex; v2 and v3 respectively denote the position coordinates of the vertices opposite the edge v0v1 in the two triangles sharing it; the first threshold is a preset value in the range of 15 to 60 degrees;
Step B: for any vertex v in step S1331, let the set N be the set of all triangle vertices comprising vertex v and the vertices in its neighbourhood, and calculate the flatness Sv of vertex v; if Sv &lt; λ the position coordinates of v need not be adjusted, and if Sv ≥ λ they are adjusted by the following formula:
wherein v' denotes the adjusted position coordinates of vertex v; vj denotes the j-th vertex in the vertex set N; β is the position adjustment coefficient and k is the number of vertices in the vertex set N; the flatness of vertex v is computed from the normal vector of v, Nj being the normal vector of the j-th of the k−1 triangles in the neighbourhood of v; λ is the flatness threshold, 0 &lt; λ &lt; 1;
Step S1334, generating a new mesh based on the new vertices inserted in step S1333, the adjusted vertices and the original vertices;
Step S1335, executing steps S1333 and S1334 in a loop s times to realise the further subdivision of the mesh, s being a positive integer.
6. The plant leaf segmentation method using synthetic data of claim 4, characterised in that step S131 comprises:
Step S1311, letting the vertex set on the leaf contour before vertex compression be N1, N1 = {n10, n11, n12, ..., n1m−1}, with n1p and n1q the two vertices of sequence numbers p and q in N1, 0 ≤ p ≤ m−1, 0 ≤ q ≤ m−1, m being the number of vertices of N1; letting the set V be the vertex set of the leaf contour after compression; setting p = 0, q = 2 and adding vertex n10 to the vertex set V;
Step S1312, judging whether q = m−1 holds; if so, the leaf-contour vertex compression ends; if not, connecting vertices n1p and n1q to form the segment lpq, computing the Euclidean distance to lpq of every vertex n1r satisfying p &lt; r &lt; q, r a positive integer, and taking the maximum of these distances, denoted Drmax;
Step S1313, if Drmax &lt; T, setting q = q + 1 and returning to step S1312;
if Drmax ≥ T, adding vertex n1r to the vertex set V, setting p = q − 1, q = q + 1 and returning to step S1312;
T denotes the preset maximum permissible error between the leaf contour before and after the change, 0 &lt; T &lt; 50.
7. The plant leaf segmentation method using synthetic data of claim 2, characterised in that step S15 specifically comprises:
Step S151, choosing at least one operating point on the leaf contour and/or leaf skeleton; let the operating points be Qw, the number of operating points be W and w denote the sequence number of an operating point, 1 ≤ w ≤ W; the root node of Qw is Qw−1; the vectors QwQw−1 and Qw−1QN form a plane P, the vector Qw−1QN being the Z axis; let the vector Qw−1QM be perpendicular to the plane P; the movement of the operating point Qw is a rotation about the vector Qw−1QM; letting the rotation angle be θ, θ ≈ t, the position coordinates of Qw after movement are:
Q(t) = k(t) * (Qw + t*(QN − Qw));
wherein k(t) = |Qw| / |Qw + t*(QN − Qw)|, 0 ≤ t ≤ 1; by setting different parameters t, different movements of the operating point can be realised, making the leaf contour and leaf skeleton undergo different deformations, and the coordinates of the leaf-contour vertices and leaf-skeleton feature points after the different deformations are obtained;
Step S152, letting the vertex set of the mesh in the leaf planar mesh model be V = {v1, v2, ..., vn}, 1 ≤ i ≤ n, vi denoting the coordinates of the i-th vertex in V; letting the set N be the set of vertices adjacent to vertex vi, vj denoting the coordinates of the j-th vertex in N; the Laplacian coordinates of vertex vi are:
wherein wij is the weight of the edge (vi, vj) with Σj∈N wij = 1, wij = cot α + cot β, α and β being the two angles opposite the edge (vi, vj) in the two triangles adjacent to it;
based on the above formula, the matrix expression of the Laplacian coordinates is obtained:
L(x', y', z') = L × V(x, y, z);
wherein L(x', y', z') are the n × 3 Laplacian coordinates corresponding to the mesh vertex coordinates in the leaf planar mesh model; V(x, y, z) denotes the Euclidean coordinates of the mesh vertices in the leaf planar mesh model, an n × 3 matrix; L is the n × n Laplacian matrix, specifically:
E' is the edge set over vertices vi and vj, and the rank of the matrix L is n − 1;
constructing the error function, the error function being:
Qw is the w-th operating point, Handles is the set of all operating points and kw is the weight of the operating point Qw, 0 &lt; kw &lt; 5; the error function value is minimised by solving for V in the following formula:
H is a W × n sparse matrix, each row containing a single nonzero element hii = 1, the corresponding vi being an operating point; hW×3 is the matrix formed by the products of the deformed coordinates of all the operating points with their respective operating-point weights;
solving by least squares:
ATAV = ATb gives the coordinates V, i.e. the deformed coordinates of the vertices in the leaf planar mesh model, AT being the transpose of matrix A; step S152 is repeated until the deformed coordinates of all points in the leaf planar mesh model are obtained, giving leaf 3D mesh models of multiple different postures.
8. utilizing the plant leaf blade dividing method of generated data as claimed in claim 2, which is characterized in that the step S16 Include:
Step S161 obtains the reflecting component fr of the coloring models of plant leaf blade:
Wherein, frdiffFor diffusing reflection component;frspecFor specular components;
The diffuse reflection component fr_diff is calculated as:
fr_diff = k_d · D_rgb · η(x);
wherein k_d is a user-set parameter adjusting the diffuse reflectance and light intensity; η(x) is the normalized diffuse texture; D_rgb is the RGB vector of the bidirectional reflectance distribution function (BRDF) of the plant leaf, determined by the leaf's chlorophyll, carotene and structural parameters;
The specular reflection component fr_spec is calculated as:
fr_spec = (F · D_beckmann · G) / (4 · cos θ_i · cos θ_v);
wherein F is the Fresnel coefficient, so that the visible reflection varies with the angle between the viewing direction and the leaf; D_beckmann is the Beckmann distribution term; G is the geometric shadowing term; θ_i and θ_v respectively denote the incidence and reflection angles of the light;
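The reflection model of step S161 can be sketched numerically as below. The diffuse term follows the claim directly; for the specular term the patent lists only the factors F, D_beckmann and G, so the Cook-Torrance-style denominator 4·cos θ_i·cos θ_v used here is an assumption, as are the default parameter values and the function name.

```python
import numpy as np

def leaf_reflection(theta_i, theta_v, eta_x, d_rgb,
                    kd=0.6, F=0.04, D=1.0, G=1.0):
    """Reflection component fr = fr_diff + fr_spec (illustrative sketch).

    fr_diff = kd * D_rgb * eta(x), as in the claim; fr_spec is written
    as a standard microfacet term F * D * G / (4 cos(theta_i) cos(theta_v)),
    an assumed completion of the factors the patent names.
    """
    fr_diff = kd * np.asarray(d_rgb) * eta_x          # diffuse component
    fr_spec = F * D * G / (4.0 * np.cos(theta_i) * np.cos(theta_v))
    return fr_diff + fr_spec
```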
Step S162, obtain the transmission component ft of the leaf coloring model:
ft = k_t · T'_rgb · γ(x) · e^(−h);
wherein T'_rgb = (0.9g, g, 0.2g), g denoting the green channel of T'_rgb; k_t is a user-set parameter adjusting the transmission and light intensity; γ(x) is the normalized transmission texture; h denotes the thickness of the leaf;
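The transmission component above transcribes directly into NumPy; the function name and the default k_t are illustrative choices, not taken from the patent.

```python
import numpy as np

def leaf_transmission(g, gamma_x, h, kt=0.5):
    """Transmission component ft = kt * T'_rgb * gamma(x) * exp(-h).

    T'_rgb = (0.9g, g, 0.2g) encodes the green-dominated tint of light
    passing through the leaf; exp(-h) attenuates with leaf thickness h.
    """
    t_rgb = np.array([0.9 * g, g, 0.2 * g])
    return kt * t_rgb * gamma_x * np.exp(-h)
```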
Step S163, generate an irradiance texture from the reflection component fr and transmission component ft of the leaf; apply a Gaussian blur to the generated irradiance texture to obtain the subsurface scattering component of the leaf; combine the subsurface scattering component, the reflection component fr and the transmission component ft to obtain the coloring model;
Step S164, randomly adjust the color and intensity of the light and the transmittance of the leaf in the coloring model to obtain multiple coloring models; superimpose the coloring models on different leaf three-dimensional mesh models to obtain leaf three-dimensional models of multiple different postures and colors.
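Step S163's subsurface scattering pass can be approximated with a small separable Gaussian blur of the irradiance texture. The additive join in `coloring_model` is an assumption (the claim says only that the three components are combined), and both function names are hypothetical.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian blur of a 2D texture (sketch of step S163)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()  # kernel sums to 1, so flat regions are preserved
    # blur rows, then columns ('same'-mode 1D convolutions)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, 'same'), 0, rows)

def coloring_model(irradiance, fr, ft, sigma=1.0):
    """Join the blurred irradiance (subsurface scattering approximation)
    with the reflection component fr and transmission component ft.
    The additive combination is an assumed interpretation of the claim."""
    return gaussian_blur(irradiance, sigma) + fr + ft
```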
9. A plant leaf segmentation system, characterized by comprising a processor and an image providing unit, wherein the processor obtains an image to be segmented containing leaves from the image providing unit, and segments a leaf image from the image to be segmented according to the plant leaf segmentation method using synthetic data of any one of claims 1-8.
CN201910439140.1A 2019-05-24 2019-05-24 Plant leaf segmentation method and system by utilizing synthetic data Active CN110148146B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910439140.1A CN110148146B (en) 2019-05-24 2019-05-24 Plant leaf segmentation method and system by utilizing synthetic data

Publications (2)

Publication Number Publication Date
CN110148146A true CN110148146A (en) 2019-08-20
CN110148146B CN110148146B (en) 2021-03-02

Family

ID=67593208

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910439140.1A Active CN110148146B (en) 2019-05-24 2019-05-24 Plant leaf segmentation method and system by utilizing synthetic data

Country Status (1)

Country Link
CN (1) CN110148146B (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110631995A (en) * 2019-08-26 2019-12-31 江苏大学 Method for synchronously diagnosing nitrogen, potassium and magnesium element deficiency by leaf chlorophyll and leaf surface distribution characteristics
CN112184789A (en) * 2020-08-31 2021-01-05 深圳大学 Plant model generation method and device, computer equipment and storage medium
CN112183546A (en) * 2020-09-29 2021-01-05 河南交通职业技术学院 Image segmentation method based on spatial nearest neighbor and having weight constraint
CN112232385A (en) * 2020-09-27 2021-01-15 北京五八信息技术有限公司 Image processing method and device
CN113112504A (en) * 2021-04-08 2021-07-13 浙江大学 Plant point cloud data segmentation method and system
CN113255402A (en) * 2020-02-10 2021-08-13 深圳绿米联创科技有限公司 Motion recognition method and device and electronic equipment
CN113361499A (en) * 2021-08-09 2021-09-07 南京邮电大学 Local object extraction method and device based on two-dimensional texture and three-dimensional attitude fusion
CN113379763A (en) * 2021-06-01 2021-09-10 北京齐尔布莱特科技有限公司 Image data processing method, model generating method and image segmentation processing method
CN113963127A (en) * 2021-12-22 2022-01-21 深圳爱莫科技有限公司 Simulation engine-based model automatic generation method and processing equipment
CN114140688A (en) * 2021-11-23 2022-03-04 武汉理工大学 Vein phenotype extraction method and device based on transmission scanning image and electronic equipment
CN114708375A (en) * 2022-06-06 2022-07-05 江西博微新技术有限公司 Texture mapping method, system, computer and readable storage medium
CN115544656A (en) * 2022-09-30 2022-12-30 华中科技大学 Efficient prediction method and system for time-varying modal parameters of thin-wall blade machining
CN115876108A (en) * 2023-03-01 2023-03-31 菲特(天津)检测技术有限公司 Inner diameter measuring method, inner diameter measuring device and computer-readable storage medium
WO2023083152A1 (en) * 2021-11-09 2023-05-19 北京字节跳动网络技术有限公司 Image segmentation method and apparatus, and device and storage medium
CN116309651A (en) * 2023-05-26 2023-06-23 电子科技大学 Endoscopic image segmentation method based on single-image deep learning
CN116878413A (en) * 2023-09-06 2023-10-13 中国航发四川燃气涡轮研究院 Preparation method of surface speckle of blisk blade
US11954943B2 (en) 2020-12-21 2024-04-09 Qualcomm Incorporated Method for generating synthetic data

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103279980A (en) * 2013-05-08 2013-09-04 西安理工大学 Tree leaf modeling method based on point cloud data
CN103955701A (en) * 2014-04-15 2014-07-30 浙江工业大学 Multi-level-combined multi-look synthetic aperture radar image target recognition method
CN104183020A (en) * 2014-07-09 2014-12-03 浙江大学 Terrain grid simplifying method based on local quadric error metric with penalty term
CN104636982A (en) * 2014-12-31 2015-05-20 北京中农腾达科技有限公司 Management system and management method based on plant cultivation
CN104748677A (en) * 2015-02-11 2015-07-01 中国矿业大学(北京) Method of measuring plant morphology by adopting three-dimensional laser scanner way
JP5756717B2 (en) * 2011-09-05 2015-07-29 株式会社平和 Game machine
CN204926123U (en) * 2015-08-24 2015-12-30 同济大学 Plant species recognition device based on handheld terminal blade image
CN105654543A (en) * 2014-09-25 2016-06-08 薛联凤 Laser point cloud data-oriented broad-leaved tree real leaf modeling and deforming method
CN105761289A (en) * 2016-03-08 2016-07-13 重庆大学 New method for extracting and classifying expandable grid curved surface
CN106803093A (en) * 2016-12-06 2017-06-06 同济大学 A kind of plant species recognition methods based on blade textural characteristics and iOS platforms
CN106991147A (en) * 2017-03-27 2017-07-28 重庆大学 A kind of Plant identification and recognition methods
CN107194993A (en) * 2017-06-19 2017-09-22 南京农业大学 Leaves of plants Dip countion method based on three dimensional point cloud
CN107239740A (en) * 2017-05-05 2017-10-10 电子科技大学 A kind of SAR image automatic target recognition method of multi-source Fusion Features
CN109544554A (en) * 2018-10-18 2019-03-29 中国科学院空间应用工程与技术中心 A kind of segmentation of plant image and blade framework extracting method and system
CN109635653A (en) * 2018-11-09 2019-04-16 华南农业大学 A kind of plants identification method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BOWEN YANG et al.: "Object segmentation using FCNs trained on synthetic images", Journal of Intelligent & Fuzzy Systems *
TANG Liyu et al.: "Improvement of 3D tree reconstruction method based on point cloud data", Transactions of the Chinese Society for Agricultural Machinery *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110631995A (en) * 2019-08-26 2019-12-31 江苏大学 Method for synchronously diagnosing nitrogen, potassium and magnesium element deficiency by leaf chlorophyll and leaf surface distribution characteristics
CN110631995B (en) * 2019-08-26 2021-09-10 江苏大学 Method for synchronously diagnosing nitrogen, potassium and magnesium element deficiency by leaf chlorophyll and leaf surface distribution characteristics
CN113255402B (en) * 2020-02-10 2024-06-11 深圳绿米联创科技有限公司 Action recognition method and device and electronic equipment
CN113255402A (en) * 2020-02-10 2021-08-13 深圳绿米联创科技有限公司 Motion recognition method and device and electronic equipment
CN112184789A (en) * 2020-08-31 2021-01-05 深圳大学 Plant model generation method and device, computer equipment and storage medium
CN112184789B (en) * 2020-08-31 2024-05-28 深圳大学 Plant model generation method, plant model generation device, computer equipment and storage medium
CN112232385A (en) * 2020-09-27 2021-01-15 北京五八信息技术有限公司 Image processing method and device
CN112183546A (en) * 2020-09-29 2021-01-05 河南交通职业技术学院 Image segmentation method based on spatial nearest neighbor and having weight constraint
CN112183546B (en) * 2020-09-29 2023-05-23 河南交通职业技术学院 Image segmentation method based on spatial nearest neighbor with weight constraint
US11954943B2 (en) 2020-12-21 2024-04-09 Qualcomm Incorporated Method for generating synthetic data
CN113112504A (en) * 2021-04-08 2021-07-13 浙江大学 Plant point cloud data segmentation method and system
CN113112504B (en) * 2021-04-08 2023-11-03 浙江大学 Plant point cloud data segmentation method and system
CN113379763A (en) * 2021-06-01 2021-09-10 北京齐尔布莱特科技有限公司 Image data processing method, model generating method and image segmentation processing method
CN113361499A (en) * 2021-08-09 2021-09-07 南京邮电大学 Local object extraction method and device based on two-dimensional texture and three-dimensional attitude fusion
WO2023083152A1 (en) * 2021-11-09 2023-05-19 北京字节跳动网络技术有限公司 Image segmentation method and apparatus, and device and storage medium
CN114140688B (en) * 2021-11-23 2022-12-09 武汉理工大学 Vein phenotype extraction method and device based on transmission scanning image and electronic equipment
CN114140688A (en) * 2021-11-23 2022-03-04 武汉理工大学 Vein phenotype extraction method and device based on transmission scanning image and electronic equipment
CN113963127A (en) * 2021-12-22 2022-01-21 深圳爱莫科技有限公司 Simulation engine-based model automatic generation method and processing equipment
CN114708375B (en) * 2022-06-06 2022-08-26 江西博微新技术有限公司 Texture mapping method, system, computer and readable storage medium
CN114708375A (en) * 2022-06-06 2022-07-05 江西博微新技术有限公司 Texture mapping method, system, computer and readable storage medium
CN115544656A (en) * 2022-09-30 2022-12-30 华中科技大学 Efficient prediction method and system for time-varying modal parameters of thin-wall blade machining
CN115876108B (en) * 2023-03-01 2023-10-24 菲特(天津)检测技术有限公司 Inner diameter measuring method, apparatus and computer readable storage medium
CN115876108A (en) * 2023-03-01 2023-03-31 菲特(天津)检测技术有限公司 Inner diameter measuring method, inner diameter measuring device and computer-readable storage medium
CN116309651B (en) * 2023-05-26 2023-08-11 电子科技大学 Endoscopic image segmentation method based on single-image deep learning
CN116309651A (en) * 2023-05-26 2023-06-23 电子科技大学 Endoscopic image segmentation method based on single-image deep learning
CN116878413A (en) * 2023-09-06 2023-10-13 中国航发四川燃气涡轮研究院 Preparation method of surface speckle of blisk blade
CN116878413B (en) * 2023-09-06 2023-11-17 中国航发四川燃气涡轮研究院 Preparation method of surface speckle of blisk blade

Also Published As

Publication number Publication date
CN110148146B (en) 2021-03-02

Similar Documents

Publication Publication Date Title
CN110148146A (en) A kind of plant leaf blade dividing method and system using generated data
CN110120097B (en) Semantic modeling method for airborne point cloud of large scene
Fahim et al. Single-View 3D reconstruction: A Survey of deep learning methods
Le et al. A multi-view recurrent neural network for 3D mesh segmentation
CN106683182B (en) A kind of three-dimensional rebuilding method for weighing Stereo matching and visual appearance
Kim et al. Feature detection of triangular meshes based on tensor voting theory
CN110047144A (en) A kind of complete object real-time three-dimensional method for reconstructing based on Kinectv2
CN112784736B (en) Character interaction behavior recognition method based on multi-modal feature fusion
CN110390638A (en) A kind of high-resolution three-dimension voxel model method for reconstructing
CN108510583A (en) The generation method of facial image and the generating means of facial image
US20230044644A1 (en) Large-scale generation of photorealistic 3d models
CN110310285A (en) A kind of burn surface area calculation method accurately rebuild based on 3 D human body
US20230281955A1 (en) Systems and methods for generalized scene reconstruction
CN115115797B (en) Large-scene sparse light field semantic driving intelligent reconstruction method, system and device
KR20230097157A (en) Method and system for personalized 3D head model transformation
CN109740539A (en) 3D object identification method based on transfinite learning machine and fusion convolutional network
CN114092615A (en) UV mapping on 3D objects using artificial intelligence
CN108615229A (en) Collision detection optimization method based on curvature points cluster and decision tree
KR20230085931A (en) Method and system for extracting color from face images
CN116580156A (en) Text generation 3D printing model method based on big data deep learning
JP2024506170A (en) Methods, electronic devices, and programs for forming personalized 3D head and face models
Hu et al. Geometric feature enhanced line segment extraction from large-scale point clouds with hierarchical topological optimization
Zhang et al. 3D viewpoint estimation based on aesthetics
Kamra et al. Lightweight reconstruction of urban buildings: Data structures, algorithms, and future directions
Ren et al. Octree-gs: Towards consistent real-time rendering with lod-structured 3d gaussians

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant