WO2017178666A1 - Autonomous set of devices and method for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals - Google Patents

Autonomous set of devices and method for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals

Info

Publication number: WO2017178666A1 (application PCT/ES2016/070655)
Authority: WO (WIPO PCT)
Prior art keywords: plant species, detection, identification, crop, agricultural crop
Other languages: English (en), Spanish (es)
Inventor: Diego Hernan Perez Roca
Original assignee: Diego Hernan Perez Roca
Application filed by Diego Hernan Perez Roca
Publication of WO2017178666A1

Classifications

    • A: HUMAN NECESSITIES
    • A01: AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01C: PLANTING; SOWING; FERTILISING
    • A01C 21/00: Methods of fertilising, sowing or planting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features

Definitions

  • The present invention relates to application technologies in the agro-industrial field. In particular, it is an autonomous set of devices for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals.
  • The set consists of multiple cameras arranged on the wing or boom arm of, for example, a spraying machine; a device for the detection and identification of plant species; an electronic circuit responsible for managing the opening and closing of the agrochemical spray nozzles; and an ultrasound sensor for each camera in the set.
  • The processing device is able to detect, segment and reliably identify the different plant species found in the processed image scene.
  • The device sends a signal to the electronic circuit, which manages the opening and closing of the solenoid valves of the agrochemical spray nozzles, opening a specific nozzle for a predetermined period of time so that a defined dose of agrochemical falls on the desired plant.
  • The processing device is able to decide which agrochemical to use based on a correspondence table comprising the different plant species, the specific agrochemical for their treatment, and the recommended dose to use.
  • Artificial vision is a subfield of artificial intelligence whose purpose is to program a computer to "understand" the characteristics of an image.
  • The typical objectives of artificial vision include: detection, segmentation, localization and recognition of certain objects in images; evaluation of results, such as segmentation and registration; registration of different images of the same scene or object, that is, making the same object coincide across different images; tracking an object through a sequence of images; mapping a scene to generate a three-dimensional model of it, which could be used by a robot to navigate the scene; estimation of the three-dimensional poses of humans; and content-based search of digital images.
  • Continuous-signal images are reproduced by analog electronic devices that record image data accurately using several methods, such as a sequence of fluctuations of an electrical signal or changes in the chemical nature of a film emulsion, which vary continuously across different aspects of the image.
  • A continuous-signal or analog image must first be converted to a digital format understandable by the computer. This process applies to all images regardless of their origin or complexity, and whether they are black and white, grayscale, or full color.
  • A digital image is composed of a rectangular or square matrix of pixels representing a series of intensity values ordered in a coordinate system in a plane (x, y).
  • This type of network is a variation of the multilayer perceptron, but its operation makes it much more effective for artificial vision tasks, especially image classification. A perceptron is understood as an artificial neuron, the basic unit of inference in the form of a linear discriminator: an algorithm capable of generating a criterion to select a subgroup from a larger group of components.
  • Multithreading allows multiple threads to execute efficiently at the same time on the same GPU, processing several algorithms concurrently; in this way the full potential of the processor is exploited in a shorter time, with threads sharing the same logical and/or physical system resources as needed.
  • Convolutional neural networks consist of multiple layers with different purposes. At the beginning is the feature extraction phase, composed of convolutional neurons and downsampling neurons. At the end of the network are simple perceptron neurons that perform the final classification on the extracted features.
  • The feature extraction phase resembles the stimulation process in the cells of the visual cortex. This phase consists of alternating layers of convolutional neurons and downsampling neurons. As the data progresses through this phase, its dimensionality is reduced: neurons in distant layers are much less sensitive to perturbations in the input data, while at the same time being activated by increasingly complex features.
  • The simple neurons of a perceptron are replaced by matrix processors that perform an operation on the 2D image data that passes through them, instead of producing a single numerical value.
  • The convolution operator has the effect of filtering the input image with a previously trained kernel. This transforms the data in such a way that certain features, determined by the shape of the kernel, become more dominant in the output image, receiving a higher numerical value in the pixels that represent them.
  • These kernels have specific image processing abilities, such as edge detection, which can be performed with kernels that highlight a gradient in a particular direction.
  • The kernels trained by a convolutional neural network are generally more complex, in order to extract other more abstract and non-trivial features.
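  • By way of illustration of the convolution operator just described (a minimal sketch, not taken from the patent), the following OpenCV code filters an image with a hand-written gradient kernel; a convolutional network learns such kernel weights during training instead:

```cpp
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main() {
    // Illustrative input path; any grayscale image works.
    cv::Mat img = cv::imread("field.jpg", cv::IMREAD_GRAYSCALE);

    // Sobel-like 3x3 kernel that highlights a vertical gradient
    // (horizontal edges); a CNN would learn these weights.
    cv::Mat kernel = (cv::Mat_<float>(3, 3) <<
        -1, -2, -1,
         0,  0,  0,
         1,  2,  1);

    cv::Mat edges32f, edges;
    cv::filter2D(img, edges32f, CV_32F, kernel);  // convolve image with kernel
    cv::convertScaleAbs(edges32f, edges);         // back to 8-bit for saving

    cv::imwrite("edges.png", edges);
    return 0;
}
```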
  • Neural networks have some tolerance to small perturbations in the input data. For example, if two almost identical images, differing only by a lateral shift of a few pixels, are analyzed with a neural network, the result should be essentially the same. This is obtained, in part, thanks to the downsampling that occurs within a convolutional neural network: as the resolution is reduced, the same features correspond to a larger activation field in the input image.
  • Convolutional neural networks use a subsampling process to carry out this operation.
  • Other operations, such as max-pooling, are much more effective at summarizing features over a region.
  • This type of operation is similar to how the visual cortex summarizes information internally.
  • The max-pooling operation finds the maximum value within a sampling window and passes this value on as a summary of the features of that area. As a result, the size of the data is reduced by a factor equal to the size of the sampling window being operated on.
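  • A minimal sketch of the max-pooling operation just described (illustrative code, not from the patent): a 2 x 2 window halves each spatial dimension and keeps only the maximum of each window:

```cpp
#include <algorithm>
#include <vector>

// 2x2 max-pooling over an H x W feature map stored row-major.
// The output is (H/2) x (W/2); each cell summarizes one window.
std::vector<float> maxPool2x2(const std::vector<float>& in, int H, int W) {
    const int oh = H / 2, ow = W / 2;
    std::vector<float> out(oh * ow);
    for (int y = 0; y < oh; ++y) {
        for (int x = 0; x < ow; ++x) {
            float m = in[(2 * y) * W + (2 * x)];
            m = std::max(m, in[(2 * y) * W + (2 * x + 1)]);
            m = std::max(m, in[(2 * y + 1) * W + (2 * x)]);
            m = std::max(m, in[(2 * y + 1) * W + (2 * x + 1)]);
            out[y * ow + x] = m;  // maximum value summarizes the window
        }
    }
    return out;
}
```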
  • After one or more feature extraction phases, the data finally reaches the classification phase. By then, the data has been distilled into a series of features unique to the input image, and it is the job of this last phase to classify those features into one label or another, depending on the training objectives.
  • Convolutional neural networks are being used for image recognition and classification.
  • In a recognition process using a classifier based on a convolutional neural network, an image is fed to the network and, after several repetitions of convolution, max-pooling and fully connected operations, an accurate classification of the image is extracted as the recognition result, together with a confidence level for that result.
  • Object tracking is a process that allows the location of one or more moving objects to be estimated over time using a camera.
  • The accelerated improvements in the quality and resolution of image sensors, together with the impressive increase in computing power achieved in the last decade, have favored the creation of new object tracking algorithms and applications.
  • Object tracking can be a slow process due to the large amount of data contained in a video, and its complexity can increase further if object recognition techniques must be used for tracking.
  • Video cameras capture information about objects of interest in the form of a set of pixels.
  • An object tracker estimates the location of this object over time.
  • The relationship between an object and the projection of its image is very complex and may depend on more factors than just the object's position, which makes object tracking a difficult goal to achieve.
  • The main challenges to take into account in the design of an object tracker are related to the similarity in appearance between the object of interest and other objects in the scene, as well as the variation in appearance of the object itself. Since the appearance of both other objects and the background can be similar to that of the object of interest, this may interfere with its observation. In that case, the features extracted from those unwanted areas can be difficult to differentiate from what the object of interest is expected to generate. This phenomenon is known as "clutter".
  • In a tracking scenario, an object can be defined as anything of interest for later analysis.
  • Objects can be represented through their shapes and appearances, generally: points, primitive geometric shapes, object silhouette and contour, articulated shape models, and skeletal models.
  • The most desirable visual feature is uniqueness, so that objects can be easily distinguished in feature space.
  • The most common features are the following: color, edges, optical flow, and texture.
  • Each tracking method requires an object detection mechanism, either in every frame or when the object first appears in the video.
  • A common method for object detection is the use of single-frame information.
  • Some object detection methods make use of temporal information calculated from a sequence of images to reduce the number of false detections. This temporal information is generally calculated using the "frame differencing" technique, which shows the regions that change between consecutive frames, as sketched below. Once the regions of the object in the image are obtained, it is then the task of the tracker to perform the object correspondence from one frame to the next to generate the tracking.
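  • A minimal sketch of the frame differencing technique mentioned above (illustrative, with an assumed threshold value):

```cpp
#include <opencv2/imgproc.hpp>

// Returns a binary mask of the regions that changed between two
// consecutive frames ("frame differencing").
cv::Mat frameDiff(const cv::Mat& prev, const cv::Mat& curr) {
    cv::Mat diff, gray, mask;
    cv::absdiff(prev, curr, diff);                 // per-pixel difference
    cv::cvtColor(diff, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, mask, 25, 255, cv::THRESH_BINARY);  // assumed threshold
    return mask;  // white = changed region, candidate moving object
}
```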
  • The most popular methods in the context of object tracking are: point detectors, background subtraction, and segmentation.
  • Point detectors are used to find points of interest in images that have an expressive texture at their respective locations. Points of interest have long been used in the context of motion and tracking problems. A desirable property of points of interest is their invariance to changes in illumination and in the camera's point of view.
  • Object detection can be achieved by building a representation of the scene called the background model and then finding deviations from that model for each incoming frame. Any significant change in a region of the image with respect to the background model represents a moving object. The pixels that make up the changing regions are marked for later processing. In general, a connected-components algorithm is applied to obtain connected regions that correspond to the objects. This process is known as background subtraction.
  • The goal of image segmentation algorithms is to divide the image into perceptually similar regions.
  • Each segmentation algorithm addresses two problems: the criteria for a good partition and the method to achieve that partition efficiently.
  • 2D motion models are simple but less realistic. As a consequence, 3D segmentation systems are the most used in practice. Within three-dimensional methods, two different algorithms can be distinguished: structure from motion (SFM) and parametric algorithms.
  • SFM generally handles 3D scenes that contain relevant depth information, while parametric methods make no assumption about depth. Another important difference between the two is that SFM assumes rigid motion, while parametric algorithms only assume rigidity of motion in parts of the scene.
  • Object tracking is a very important task within the field of video processing. The main objective of object tracking techniques is to generate the trajectory of an object over time by positioning it within the image. Techniques can be classified into three large groups: point tracking, kernel tracking, and silhouette tracking.
  • In point tracking techniques, the objects detected in consecutive images are each represented by one or several points, and the association of these points is based on the state of the object in the previous image, which may include position and motion.
  • An external mechanism that detects the objects in each frame is required. This technique can present problems in scenarios where the object is occluded, or when objects enter and leave the scene.
  • Point tracking techniques can also be classified into two broad categories: deterministic and statistical.
  • Kernel tracking techniques compute the motion of the object, represented by an initial region, from one image to the next.
  • The motion of the object is generally expressed in the form of parametric motion (translation, rotation, affine, etc.) or through the flow field computed in the following frames.
  • Silhouette tracking techniques work by estimating the region of the object in each image using the information it contains.
  • This information can be in the form of appearance density or shape models, which are generally presented as edge maps. There are two methods: shape matching and contour tracking.
  • Tracking objects of interest in video is the basis of many applications, ranging from video production to remote surveillance, and from robotics to interactive games.
  • Object trackers are used to improve the understanding of certain video data sets in medical and security applications; to increase productivity by reducing the amount of labor needed to complete a task; and to enable natural interaction with machines.
  • Optical flow is the pattern of apparent motion of objects, surfaces and edges in a scene caused by the relative motion between an observer's eye or a camera and the scene.
  • A second, more refined definition defines the term "affordances" as the possibilities for action that a user is aware of being able to perform.
  • Applications of optical flow such as motion detection, object segmentation, time-to-collision and focus-of-expansion calculation, motion-compensated coding, and stereoscopic disparity measurement make use of this motion of the surfaces and edges of objects.
  • US Patent 6038337 A refers to a hybrid neural network system for object recognition that exhibits local image sampling, a self-organizing map neural network, and a hybrid convolutional neural network.
  • The self-organizing map provides quantization of the image samples into a topological space where inputs that are close in the original space are also close in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, while the hybrid convolutional neural network provides partial invariance to translation, rotation, scale, and deformation.
  • The hybrid convolutional network extracts successively larger features in a hierarchical set of layers. Alternative embodiments are described using the Karhunen-Loève transform instead of the self-organizing map, and a multilayer perceptron instead of the convolutional network.
  • In US patent application US20150245565A1, an autonomous vehicle carries the chemical application device and is, in part, controlled by the processing requirements of the vision component of the device, which is responsible for detecting and assigning lists of targets to the chemical ejectors that aim at those target points while the device moves through the field or a natural environment.
  • The device of patent application US20150245565A1 is only able to detect the presence of a plant, but is unable to identify which plant species it is. It can only distinguish two absolutely different features, such as soil versus plants, and determine only with a certain probability whether something is a crop or not.
  • This system is unable to distinguish which plant species is present in order to apply the specific herbicide and thus eliminate said species.
  • It is not a system that works on all kinds of terrain without a fixed pattern to maintain its trajectory; it cannot identify the species in question, and it cannot make an intelligent application of the necessary agrochemical, with the large cost savings in agrochemicals and unbeatable efficiency in weed management that this would provide.
  • The purpose of this invention is an autonomous set of devices for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals, said set comprising: a chemical application device comprising at least one agrochemical container linked in fluid communication with a plurality of spray nozzles, each through a valve; a plurality of cameras arranged on the vehicle and aimed at the crop, where each camera has an associated ultrasound sensor for measuring the height above the crop in real time, and where each camera is tilted forward at 45 degrees from the vertical; a device for the detection and identification of plant species connected to the cameras to receive their video feeds; and an electronic circuit in charge of managing the opening and closing of the valves of the agrochemical spray nozzles, connected to the detection device, which manages the opening and closing of those valves through said circuit; the set of devices being mounted on a transport vehicle.
  • The transport vehicle is a self-propelled vehicle or a towed vehicle.
  • The self-propelled vehicle is a spraying vehicle with side arms arranged perpendicular to it ("mosquito" type).
  • The detection and identification device consists of a processor.
  • The processor comprises a software tool developed in the C++ language, an artificial vision framework, and a convolutional neural network framework.
  • Step a) of the method makes it possible to distinguish the weeds from the crop.
  • Step a) of the method makes it possible to identify plant species in order to determine the agrochemical to apply.
  • In step c) of the method, the dose of agrochemical is sprayed by opening a solenoid valve.
  • The plant species correspond to the crop and to the weeds.
  • The agrochemical is a herbicide, a foliar fertilizer, an insecticide, a fungicide, or a protective compound.
  • Step b) also makes it possible to determine the state of the crop.
  • Step c) of the method comprises selecting a specific herbicide from a set of herbicides for each weed identified in step a) with respect to the crop.
  • Step c) of the method comprises selecting a specific foliar fertilizer from a set of foliar fertilizers for the crop identified in step a) according to its state.
  • Step c) of the method comprises selecting a specific insecticide from a set of insecticides for the crop identified in step a) according to its state of deterioration.
  • Step c) of the method comprises selecting a specific fungicide from a set of fungicides for the crop identified in step a) according to its state of deterioration.
  • Step c) of the method comprises selecting a specific protective compound from a set of protective compounds for the crop identified in step a) according to its state.
  • The method for the detection and identification of plant species in an agricultural crop comprises the steps of: a) obtaining a real-time video stream from a plurality of cameras positioned along the wings of the autonomous set of devices for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals; b) processing each of the frames obtained; c) converting the frame to a numerical matrix with the RGB (Red-Green-Blue) color representation of each pixel in the image; d) cropping the matrix to select the area of the frame to be processed; e) assigning an area of the image to the corresponding sprinklers, so that if weeds are detected in that area, the opening order is sent to the corresponding sprinkler; f) applying 4 filters to obtain a mask of the predominant colors of the plant species to be identified; g) identifying the contours of the image on the color mask, saving the position information of each one; h) estimating the travel speed from the positions of the contours found in the current frame and the positions of those same contours in a previous frame.
  • Step f) of the above method separates foreign elements such as earth, dry plant residue and stones from the plant species, where: a first filter transforms the matrix to the YCbCr color format; a second filter subtracts two channels in the RGB format depending on the color to be filtered; a third filter is a logical AND operation (bit by bit) between the results of the first and second filters; and a fourth filter applies a Gaussian blur to the previous result, converting the image to black and white and removing the noise.
  • FIG. 1a schematically represents the way the data goes through different types of tests in order to make a decision in a three-layer network.
  • FIG. 1b schematically represents the way in which the input layers of the network contain neurons that encode the values of the input pixels.
  • FIG. 1c schematically represents a possible architecture, with rectangles denoting the subnetworks, in order to show how convolutional neural networks work.
  • FIG. 2a schematically represents a rapid detection to classify plants that makes it possible to distinguish the weeds from the crop.
  • FIG. 2b schematically represents the area and perimeter analysis performed by the system once weeds are detected.
  • FIG. 2c schematically represents the precise and accurate spraying of the weeds that the system performs once the plant species and its size have been detected.
  • FIG. 3 represents a frame of a real-time video stream obtained from a camera.
  • FIG. 4 represents a table of equivalences between the number of shutter actuations per second and the speed of the moving vehicle that carries the cameras.
  • FIG. 5 represents the frame of [FIG. 3] converted to a numerical matrix with the RGB (Red-Green-Blue) color representation of each image pixel.
  • FIG. 6 represents an ideal size of the central horizontal strip of the image to be processed from the frame of [FIG. 3].
  • FIG. 7a represents the transformation performed by a first filter on the matrix in YCbCr color format.
  • FIG. 7b represents the transformation performed by a second filter, which subtracts two channels in the RGB format depending on the color to be filtered.
  • FIG. 7c represents the logical AND (bit by bit) transformation between the results of the two previous filters shown in [FIG. 7a] and [FIG. 7b], performed by a third filter.
  • FIG. 7d1 represents the transformation of a fourth filter, which applies a Gaussian blur to the previous result of [FIG. 7c].
  • FIG. 7d2 represents the conversion of the image of [FIG. 7d1] to black and white.
  • FIG. 7d3 represents the elimination of the noise of [FIG. 7d2], corresponding to the scattered white points.
  • FIG. 8 represents the identification of the contours of the image on the color mask.
  • FIG. 9 represents the image cropped into small squares of approximately the same size containing the contours of [FIG. 8].
  • FIG. 10 represents one of the squares cropped from [FIG. 9] with the image resized to a preferred size of 256 x 256 pixels.
  • FIG. 11a represents another of the squares cropped from [FIG. 9], corresponding to a weed present in the crop.
  • FIG. 11b represents a sequence where the square of [FIG. 11a], a 256 x 256 pixel image, is sent to the first layer or input layer of the previously trained convolutional neural network for analysis and categorization until it reaches a last layer.
  • FIG. 12 represents a result, as the average success value of each of the categories, obtained from the last layer or output layer of the convolutional neural network according to the sequence of [FIG. 11a].
  • FIG. 13 is a complete representation of the processed frame of the main video stream according to the sequence of [FIG. 11a], where the identified unwanted plant species are framed in red in order to apply the necessary agrochemical to each of the weeds.
  • FIG. 14 represents an AlexNet model consisting of 5 convolutional layers, according to the architecture chosen for the training of the Caffe network.
  • FIG. 15 schematically represents a preferred embodiment of the autonomous set of devices for the detection and identification of plant species according to the present invention, showing how the boards with a microcontroller and development environment (IDE), with analog and digital inputs and outputs, connect to the CPU (Central Processing Unit) through a USB (Universal Serial Bus) port.
  • FIG. 16 represents a detail of the end of a side arm of a spraying unit, where the cameras can be observed tilted forward at an angle of approximately 45 degrees with respect to the lower vertical axis, together with a plurality of associated sprinklers.
  • FIG. 17 schematically represents a camera tilted forward at an angle of approximately 45 degrees from the lower vertical axis, installed on a side arm of a spraying unit, showing the image and the approximate size of the scene to be processed with that inclination.
  • The present invention consists mainly of an autonomous set of devices for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals.
  • The set consists of multiple cameras arranged on the wing or boom arm of, for example, a spraying machine; a device for the detection and identification of plant species; an electronic circuit responsible for managing the opening and closing of the agrochemical spray nozzles; and an ultrasound sensor for each camera in the set.
  • The autonomous set of devices for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals comprises: a chemical application device comprising at least one agrochemical container linked in fluid communication with a plurality of spray nozzles, each through a valve; a plurality of cameras arranged on the vehicle and aimed at the crop, where each camera has an associated ultrasound sensor for measuring the height above the crop in real time, and where each camera is tilted forward at 45 degrees from the vertical; a device for the detection and identification of plant species connected to the cameras to receive their video feeds; and an electronic circuit in charge of managing the opening and closing of the valves of the agrochemical spray nozzles, connected to the detection device, which manages the opening and closing of those valves through said circuit; the set of devices being mounted on a transport vehicle.
  • [FIG. 15], [FIG. 16] and [FIG. 17] show diagrams of the autonomous set of devices for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals according to a preferred embodiment of the present invention.
  • The transport vehicle is a self-propelled vehicle or a towed vehicle.
  • The self-propelled vehicle is a spraying vehicle with side arms arranged perpendicular to it ("mosquito" type).
  • The detection and identification device consists of a processor comprising a software tool developed in the C++ language, using the OpenCV artificial vision framework and the "Caffe" convolutional neural network framework by the Berkeley Vision and Learning Center. Using this tool, recognition of different plant species can be achieved with 96% effectiveness.
  • The network is trained to perform a certain type of processing. Once an adequate training level is reached, it moves on to the operation phase, where the network is used to carry out the task for which it was trained.
  • During training, the convolutional neural network is given a data set of a minimum of 50,000 photographs of the different plant species to be identified, taking into account the specific region of the planet and the predominant species in that place. These photographs are loaded through different folders or directories that represent the category to which each belongs. The photographs are supplied, for example, in JPEG format and at a minimum size of 80 x 80 pixels, preferably at a recommended size of 256 x 256 pixels, and for each species they include different situations of the plant, namely loose leaves, partial leaves, whole plant, flowers, plant in context, etc.
  • The architecture chosen for training the Caffe network is the AlexNet model, consisting of 5 convolutional layers [see FIG. 14].
  • Once the learning or training phase is finished, the network generates a "deploy.prototxt" file, which is basically the learning model; from then on it can be used to perform the task for which it was trained.
  • One of the main advantages of this model is that the network learns the relationships among the data, acquiring the ability to generalize concepts. In this way, a convolutional neural network can operate on information that was not presented during the training phase.
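  • As an illustration (a sketch, not the patent's own code), a trained Caffe model defined by a "deploy.prototxt" file can be loaded for inference with OpenCV's dnn module; the weights file name here is an assumption:

```cpp
#include <opencv2/dnn.hpp>

// Runs one forward pass of the trained network on a cropped square.
// "plants.caffemodel" is a hypothetical weights file name.
cv::Mat classifyCrop(const cv::Mat& crop) {
    static cv::dnn::Net net = cv::dnn::readNetFromCaffe(
        "deploy.prototxt", "plants.caffemodel");
    cv::Mat blob = cv::dnn::blobFromImage(crop, 1.0, cv::Size(256, 256));
    net.setInput(blob);
    return net.forward();  // row of scores, one per trained plant category
}
```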
  • The classifier based on convolutional neural networks comprises: a plurality of feature mapping layers, at least one feature map in at least one of the plurality of feature mapping layers being divided into a plurality of regions; and a plurality of convolutional templates corresponding respectively to the plurality of regions, each convolutional template being used to obtain the response value of a neuron in the corresponding region.
  • FIG. 1a is an example that illustrates how data passes through different types of tests in order to make a decision in a three-layer network.
  • FIG. 1b is an example that illustrates how the network input layers contain neurons that encode the values of the input pixels.
  • FIG. 1c is an example that illustrates a possible architecture, with rectangles denoting the subnetworks. This is not meant to be a realistic approach to the problem of detection and identification of plant species; it is given only as an example to understand how convolutional neural networks work.
  • The set of devices makes it possible to determine the state of the crop and, the agrochemical being a herbicide, a foliar fertilizer, an insecticide, a fungicide, or a protective compound, the appropriate agrochemical can be applied according to each circumstance.
  • The method makes it possible to select a specific herbicide from a set of herbicides for each weed identified with respect to the crop; or a specific foliar fertilizer from a set of foliar fertilizers for the crop identified according to its state; or a specific insecticide from a set of insecticides for the crop identified according to its state of deterioration; or a specific fungicide from a set of fungicides for the crop identified according to its state of deterioration; and/or a specific protective compound from a set of protective compounds for the crop identified according to its state.
  • The step-by-step process flow for the detection and identification of the plant varieties of interest is as follows: 1) A real-time video stream [FIG. 3] is obtained from one or several cameras positioned along the wings or arms of, for example, a spraying machine. This stage is performed at 60 frames, or shutter actuations, per second [FIG. 4]; the number of shutter actuations per second depends on the speed of the moving vehicle. 2) Each of the frames obtained is processed. 3) The frame is converted to a numerical matrix with the RGB (Red-Green-Blue) color representation of each pixel in the image [FIG. 5]. Each pixel has blue, green and red components.
  • Each of these components has a range of 0 to 255, which gives a total of 256³ = 16,777,216 different possible colors.
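  • As an illustration of this representation (a sketch, not from the patent; note that OpenCV stores the channels in B, G, R order):

```cpp
#include <opencv2/core.hpp>

// Reads the three 8-bit color components of one pixel of a decoded frame.
void inspectPixel(const cv::Mat& frame, int x, int y) {
    cv::Vec3b px = frame.at<cv::Vec3b>(y, x);   // row index (y) comes first
    unsigned char blue = px[0], green = px[1], red = px[2];
    // Each component ranges 0..255, so 256^3 = 16,777,216 colors in total.
    (void)blue; (void)green; (void)red;
}
```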
  • 4) The matrix is cropped to select the area of the frame to be processed [FIG. 6].
  • A horizontal strip of the image to be processed is determined. This area is chosen according to the subsequent ability to open the sprinkler for the application of the agrochemical exactly on the specific area.
  • The area to be processed is exactly the middle strip, since it maintains an optimal balance between distance to the camera, low image distortion, the time that will pass between processing and the subsequent application of the agrochemical, and accuracy of the shot on the plant.
  • 5) An area of the image is assigned to the corresponding sprinklers, so that if weeds are detected in that area, the opening order is sent to the corresponding sprinkler.
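  • A minimal sketch of steps 4) and 5) together (the strip height and sprinkler count used here are assumptions, not values from the patent):

```cpp
#include <opencv2/core.hpp>
#include <vector>

// Crops the central horizontal strip and splits it into one zone per
// sprinkler; strip height and sprinkler count are illustrative.
std::vector<cv::Mat> sprinklerZones(const cv::Mat& frame, int nSprinklers) {
    const int stripH = frame.rows / 5;  // assumed strip height
    const cv::Rect strip(0, (frame.rows - stripH) / 2, frame.cols, stripH);
    cv::Mat band = frame(strip);        // a view, no pixel copy

    std::vector<cv::Mat> zones;
    const int zoneW = band.cols / nSprinklers;
    for (int i = 0; i < nSprinklers; ++i)
        zones.push_back(band(cv::Rect(i * zoneW, 0, zoneW, band.rows)));
    return zones;  // weed found in zones[i] -> open sprinkler i
}
```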
  • 6) Four filters are applied to obtain a mask of the predominant colors of the plant species to be identified. The first filter transforms the matrix to the YCbCr color format and performs a logical operation between the channels, depending on the color to be filtered [FIG. 7a].
  • The second filter subtracts two channels in the RGB format, depending on the color to be filtered [FIG. 7b].
  • The third filter is a logical AND (bitwise) operation between the results of the two previous filters [FIG. 7c].
  • The fourth filter applies a Gaussian blur to the previous result [FIG. 7d1]; the image is then converted to black and white [FIG. 7d2], and the noise, which consists of scattered white points, is eliminated [FIG. 7d3].
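  • A sketch of this four-filter pipeline in OpenCV follows; all threshold values and chroma bounds are assumptions, since the patent does not publish the exact constants:

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

// Builds the binary color mask of candidate plant material.
cv::Mat plantColorMask(const cv::Mat& bgr) {
    // Filter 1: transform to YCbCr (OpenCV's YCrCb) and keep pixels whose
    // chroma channels match vegetation (assumed bounds).
    cv::Mat ycrcb, f1;
    cv::cvtColor(bgr, ycrcb, cv::COLOR_BGR2YCrCb);
    cv::inRange(ycrcb, cv::Scalar(0, 0, 0), cv::Scalar(255, 120, 135), f1);

    // Filter 2: subtract two RGB channels; G - R emphasises green matter.
    std::vector<cv::Mat> ch;
    cv::split(bgr, ch);  // ch[0]=B, ch[1]=G, ch[2]=R
    cv::Mat diff, f2;
    cv::subtract(ch[1], ch[2], diff);
    cv::threshold(diff, f2, 20, 255, cv::THRESH_BINARY);

    // Filter 3: bitwise AND of the two previous results.
    cv::Mat f3;
    cv::bitwise_and(f1, f2, f3);

    // Filter 4: Gaussian blur, back to black/white, then remove the
    // scattered white points (noise) with a morphological opening.
    cv::Mat f4;
    cv::GaussianBlur(f3, f4, cv::Size(5, 5), 0);
    cv::threshold(f4, f4, 127, 255, cv::THRESH_BINARY);
    cv::morphologyEx(f4, f4, cv::MORPH_OPEN,
                     cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)));
    return f4;  // white = candidate plant material
}
```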
  • 7) The contours of the image on the color mask are identified [FIG. 8] and the position information of each one is saved.
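  • A minimal sketch of this contour step using OpenCV (illustrative; positions are saved here as bounding boxes):

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

// Finds the contours on the binary color mask and saves the position
// (bounding box) of each one.
std::vector<cv::Rect> contourPositions(const cv::Mat& mask) {
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask.clone(), contours, cv::RETR_EXTERNAL,
                     cv::CHAIN_APPROX_SIMPLE);
    std::vector<cv::Rect> boxes;
    boxes.reserve(contours.size());
    for (const auto& c : contours)
        boxes.push_back(cv::boundingRect(c));
    return boxes;
}
```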
  • 8) An estimate of the travel speed is computed from the positions of the contours found in the current frame and their positions in a previous frame. A speed in pixels per frame is obtained, and a pixel-to-meter ratio together with the frames-per-second rate is used to convert this speed to meters per second.
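  • The unit conversion just described, as a one-line sketch (the calibration values in the example are assumptions):

```cpp
// Converts the displacement of matched contours (pixels per frame) into
// ground speed in metres per second.
double groundSpeedMps(double pixelsPerFrame,
                      double metresPerPixel,    // pixel-to-metre ratio
                      double framesPerSecond) { // frame rate, e.g. 60
    return pixelsPerFrame * metresPerPixel * framesPerSecond;
}
// Example: 12 px/frame * 0.006 m/px * 60 frames/s = 4.32 m/s (about 15.5 km/h)
```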
  • 11) The 256 x 256 pixel image squares [FIG. 11a] are sent to the first layer or input layer of the previously trained convolutional neural network for analysis and categorization [FIG. 11b]. 12) Each square is processed within the neural network, which can process several at a time. The processing is carried out in parallel and executed internally on the GPU rather than on the CPU, to achieve high performance in arithmetic operations. Within the previously trained network, each square is taken, a forward pass is performed in parallel, and the result is sent to the output layer of the neural network. 13) The result of the last layer or output layer of the convolutional neural network is obtained.
  • The result of the output layer gives an average success value for each of the categories, the highest value indicating the plant species category to which the image belongs [FIG. 12].
  • The categories of the neural network include a "NO plant species" category, covering unknown plant species and/or elements not to be taken into account: earth, sky, etc. If the result is this category, it means that in the analysis of this image square there is no known or pre-trained plant species category.
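  • Selecting the winning category from the output scores can be sketched as follows (illustrative; the position of the "NO plant species" class in the output is an assumption):

```cpp
#include <opencv2/core.hpp>

// Picks the category with the highest average success value from the
// network's output; returns its index and writes its score to *best.
int bestCategory(const cv::Mat& scores, double* best) {
    cv::Point maxLoc;
    cv::minMaxLoc(scores.reshape(1, 1), nullptr, best, nullptr, &maxLoc);
    return maxLoc.x;  // e.g. index 0 might be the "NO plant species" class
}
```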
  • 14) The agrochemical to be used is determined.
  • The system contains a table with the possible plant species to be identified and their relationship to the agrochemical to be used according to the diagnosis, if applicable.
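  • An illustrative shape for such a correspondence table (the species names and doses below are invented examples, not values published in the patent):

```cpp
#include <map>
#include <string>

// Maps an identified species to the agrochemical and dose to apply.
struct Treatment {
    std::string agrochemical;  // specific product for the species
    double litresPerHectare;   // recommended dose
};

const std::map<std::string, Treatment> kCorrespondence = {
    {"Amaranthus quitensis", {"herbicide A", 1.5}},
    {"Sorghum halepense",    {"herbicide B", 1.2}},
    {"soybean (crop)",       {"none",        0.0}},  // crop: do not spray
};
```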
  • The device for the detection and identification of plant species now has a complete representation of the identified plant species and the agrochemical required to be applied to each of the plants that appear in the processed frame obtained from the main video stream [FIG. 13].
  • The mathematical calculation of the exact moment of activation of the electromechanical command is made, taking into account the speed of the spray vehicle and the distance from the camera to the ground. Depending on the area of the processed frame, only the electromechanical valve corresponding to the specific field of action is activated.
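  • The timing calculation reduces to a distance-over-speed delay, as in this sketch (the geometry values in the example are assumptions):

```cpp
// Delay between detecting a weed in the processed strip and energizing
// the solenoid valve, so the dose lands on the plant.
double activationDelaySeconds(double stripToNozzleMetres,  // along travel
                              double groundSpeedMps) {
    return stripToNozzleMetres / groundSpeedMps;
}
// Example: a strip 2.0 m ahead of the nozzle at 4.32 m/s (about 16 km/h)
// gives a delay of roughly 0.46 s after detection.
```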
  • The valve corresponding to the specific agrochemical to be used is activated. This makes it possible to administer multiple tanks of agrochemical product according to the specific need.
  • A field test of the autonomous set of devices for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals was carried out in the town of Las Rosas, province of Santa Fe, on a 20-hectare lot planted with soybeans.
  • The herbicide selected for application was glyphosate (RoundUp), at approximately 1.4 liters per hectare on average.
  • A self-propelled "mosquito"-type sprayer (Pla, MAP II 3250 model) was used, consisting of a 3250-liter tank, whose side arms included a mounted line of TeeJet-brand sprinklers commanded by solenoid valves connected directly to the computer that controls the application of the herbicide dose.
  • The height of the nozzles and sensors with respect to the ground was 1 meter.
  • The forward speed of the sprayer was approximately 16 km/h during the whole application.
  • A total of 12 cameras with sensors was used, distributed evenly along the 28-meter-long boom wing.
  • The lot chosen for the trial was cultivated with 4-week post-emergence soybeans and had a low percentage of weeds with a high concentration of "patching", that is, weeds randomly distributed in patches.


Abstract

An autonomous set of devices for the detection and identification of plant species, both wild and cultivated, on an agricultural holding, built around software that, by obtaining real-time video, can detect, isolate, and identify different plant species, both wild and cultivated, through the use of convolutional neural networks, differentiating distinctive aspects of their morphology, taxonomy, and phyllotaxis. Through prior training of these convolutional neural networks on the characteristics that differentiate one species from another, the system enables the individual identification of each of them. By means of a system of video cameras mounted on a transport vehicle, and with these data obtained in real time, the computer system can diagnose the agrochemical product to apply as a function of the identified plant, and electromechanically actuate the opening of the spray nozzle valve. The plant thus receives the exact dose of the specific agrochemical product according to the necessary treatment.
PCT/ES2016/070655 2016-04-12 2016-09-20 Autonomous set of devices and method for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals WO2017178666A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AR20160100983 2016-04-12
ARP160100983A AR104234A1 (es) 2016-04-12 2016-04-12 Conjunto autónomo de dispositivos y método para la detección e identificación de especies vegetales en un cultivo agrícola para la aplicación de agroquímicos en forma selectiva

Publications (1)

Publication Number Publication Date
WO2017178666A1 (fr) 2017-10-19

Family

ID=59487587

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/ES2016/070655 2016-04-12 2016-09-20 WO2017178666A1 (fr) Autonomous set of devices and method for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals

Country Status (2)

Country Link
AR (1) AR104234A1 (fr)
WO (1) WO2017178666A1 (fr)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112581459A (zh) * 2020-12-23 2021-03-30 安徽高哲信息技术有限公司 一种农作物分类***和方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1000540A1 (fr) * 1998-11-16 2000-05-17 McLoughlin, Daniel Traitement d'image
US6714662B1 (en) * 2000-07-10 2004-03-30 Case Corporation Method and apparatus for determining the quality of an image of an agricultural field using a plurality of fuzzy logic input membership functions
JP2004180554A (ja) * 2002-12-02 2004-07-02 National Agriculture & Bio-Oriented Research Organization 果菜類の選択収穫方法及び装置
US20150245565A1 (en) * 2014-02-20 2015-09-03 Bob Pilgrim Device and Method for Applying Chemicals to Specific Locations on Plants


Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019094266A1 (fr) * 2017-11-07 2019-05-16 University Of Florida Research Foundation Détection et gestion de végétation cible au moyen d'une vision artificielle
US11468670B2 (en) 2017-11-07 2022-10-11 University Of Florida Research Foundation, Incorporated Detection and management of target vegetation using machine vision
US11514671B2 (en) 2018-05-24 2022-11-29 Blue River Technology Inc. Semantic segmentation to identify and treat plants in a field and verify the plant treatments
WO2019226869A1 (fr) * 2018-05-24 2019-11-28 Blue River Technology Inc. Segmentation sémantique pour identifier et traiter des plantes dans un champ et vérifier le traitement des plantes
US10713484B2 (en) 2018-05-24 2020-07-14 Blue River Technology Inc. Semantic segmentation to identify and treat plants in a field and verify the plant treatments
CN108898059A (zh) * 2018-05-30 2018-11-27 上海应用技术大学 花卉识别方法及其设备
US10748042B2 (en) 2018-06-22 2020-08-18 Cnh Industrial Canada, Ltd. Measuring crop residue from imagery using a machine-learned convolutional neural network
CN110070101A (zh) * 2019-03-12 2019-07-30 平安科技(深圳)有限公司 植物种类的识别方法及装置、存储介质、计算机设备
CN110070101B (zh) * 2019-03-12 2024-05-14 平安科技(深圳)有限公司 植物种类的识别方法及装置、存储介质、计算机设备
RU2763438C2 (ru) * 2019-06-20 2021-12-29 ФГБОУ ВО "Оренбургский государственный аграрный университет" Стенд для настройки бесконтактных датчиков
US11823388B2 (en) 2019-08-19 2023-11-21 Blue River Technology Inc. Plant group identification
US11580718B2 (en) 2019-08-19 2023-02-14 Blue River Technology Inc. Plant group identification
CN111325240A (zh) * 2020-01-23 2020-06-23 杭州睿琪软件有限公司 与杂草相关的计算机可执行的方法和计算机***
US20210244010A1 (en) * 2020-02-12 2021-08-12 Martin Perry Heard Ultrasound controlled spot sprayer for row crops
WO2021176254A1 (fr) 2020-03-05 2021-09-10 Plantium S.A. Système et procédé de détection et d'identification de culture et de mauvaise herbe
EP4245135A1 (fr) 2022-03-16 2023-09-20 Bayer AG Mise en oeuvre et documentation d'une application de produits phytosanitaires
WO2023174827A1 (fr) 2022-03-16 2023-09-21 Bayer Aktiengesellschaft Mise en oeuvre et documentation de l'application de produits de protection des cultures
CN115349340A (zh) * 2022-09-19 2022-11-18 沈阳农业大学 基于人工智能的高粱施肥控制方法及***
CN115349340B (zh) * 2022-09-19 2023-05-19 沈阳农业大学 基于人工智能的高粱施肥控制方法及***

Also Published As

Publication number Publication date
AR104234A1 (es) 2017-07-05


Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16898531

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.02.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16898531

Country of ref document: EP

Kind code of ref document: A1