WO2023018387A1 - A crop classification method using deep neural networks
- Publication number: WO2023018387A1 (application PCT/TR2021/050792)
- Authority: WO - WIPO (PCT)
Classifications
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/09 — Supervised learning
- G06V10/82 — Image or video recognition or understanding using neural networks
- G06V20/13 — Satellite images
- G06V20/60 — Scenes; scene-specific elements; type of objects
Abstract
The present invention relates to a crop classification method that can operate with a high accuracy on a global scale.
Description
A CROP CLASSIFICATION METHOD USING DEEP NEURAL NETWORKS
Technical Field of the Invention:
The present invention relates to a crop classification method that can operate with high accuracy on a global scale.
The present invention particularly relates to a crop detection and classification method that allows for classifying the observed regions of satellite imagery according to use, detecting the boundaries of clusters that exhibit similar development characteristics in the same time period within those regions, and collecting data related to economic activities, such as cultivated crop type and yield estimation, in the designated agricultural activity group areas.
State of the Art:
The problem of remote sensing based agricultural crop classification has become a popular research area in recent years. As methods such as deep learning and artificial neural networks are no longer limited by hardware restrictions and have become more accessible, substantial changes have been observed in this field. As with other image processing applications, deep learning-based methods have matched or surpassed conventional systems in terms of accuracy. An examination of the recent literature on this subject shows that these methods have gained importance compared to conventional methods.
CNN (Convolutional Neural Network) and RNN (Recurrent Neural Network) architectures are prominent in solving this problem. CNN-based solutions can be divided into two groups based on how the data is used. FCN (Fully Convolutional Network) based solutions are generally used for crop classification with a single input image. This group of methods uses single-date images for classification, and the images are classified pixel-wise by means of architectures such as Mask R-CNN. Phenology-based solutions rely on detecting distinguishing features in the development of plant groups. Data sets of these feature groups are obtained from multiple satellite images collected progressively over time and then classified. These methods are generally based on product- and location-specific heuristic approaches. Even though the classification infrastructure is similar, the size and use of the processed data differ from the first group of methods. RNN-based solutions have been developed to address the general inability of CNN-based solutions to capture temporal variations. LSTM (Long Short-Term Memory) cells are used to capture temporal dependencies in RNN-based architectures. Through more efficient exploitation of the temporal information in multiple images, higher accuracy can be achieved compared to CNN-based methods. The most important disadvantage of these methods is that their training procedure is complex.
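The LSTM cell update underlying the RNN-based solutions described above can be sketched minimally as follows. This is an illustrative sketch only, not part of the patent disclosure; the toy dimensions, random weights, and gate ordering are assumptions made for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step over a pixel's spectral sample x.

    W: (4*H, D) input weights, U: (4*H, H) recurrent weights, b: (4*H,) bias.
    Assumed gate order: input, forget, cell candidate, output.
    """
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2*H])        # forget gate
    g = np.tanh(z[2*H:3*H])      # cell candidate
    o = sigmoid(z[3*H:4*H])      # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Run a toy pixel time series (T timesteps, D spectral bands) through the cell.
rng = np.random.default_rng(0)
T, D, H = 12, 4, 8
series = rng.normal(size=(T, D))
W = rng.normal(scale=0.1, size=(4*H, D))
U = rng.normal(scale=0.1, size=(4*H, H))
b = np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for t in range(T):
    h, c = lstm_step(series[t], h, c, W, U, b)
print(h.shape)  # final hidden state summarizes the temporal pattern
```

The final hidden state would typically be fed to a classifier layer; training such a recurrence end-to-end is what makes these methods comparatively complex.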
In the state of the art, the invention that is subject to the application numbered "CN107121681" relates to a residential land extraction system based on high-resolution satellite remote sensing data. The invention provides automatic extraction of residential areas from remote sensing data based on the differing characteristics of residential and non-residential areas. Handcrafted indexes are utilized to classify residential areas, and deep learning techniques are not utilized to obtain distinguishing features for crop types.
In the state of the art, the invention that is subject to the application numbered "CN109214287" relates to the technical field of remote sensing image processing, particularly to a method and system for crop interpretation based on RapidEye satellite remote sensing images.
In the state of the art, the invention that is subject to the application numbered "CN110020635" relates to the technical field of crop classification, particularly to a method for finely classifying crops in a planting area based on drone and satellite images. The invention performs crop classification by means of a convolutional neural network.
Applications available in the state of the art can achieve high success at a local level; however, they cannot demonstrate the same performance when applied to different regions at a global level. Crops have different phenology curves in different regions and show different development patterns due to differences in soil, climate, amount of sunlight, and agricultural practices.
The drawback of deep learning-based surface classification systems in the state of the art is that all problems in the classification process (determination of agricultural activities and of areas outside the application area of interest, determination of field boundaries, crop classification) are attempted on a single model. Since the intraclass variances are very high, no training data set for such a general application can yield a successful classification solution on a global scale.
Phenology-based systems in the state of the art rely on finding the temporal patterns of change in reflection values during the development stages of plants. However, the change in reflection values also differs according to the aforementioned local variables and the sowing time. This increases the intraclass variances, as in the previous group of methods, and causes products from different classes to be confused with each other.
While the present algorithms in the state of the art may demonstrate high performance locally, they cannot be used on a global scale, and their performance is seriously reduced in such use. The output performance of many present algorithms and solutions is low, and year-over-year analysis becomes difficult unless the boundaries and usage of the fields are detected with high accuracy. It is therefore of great importance to make estimations based on reliable data that accurately anticipate the amount of crops to be produced, and to detect early indicators of crop yield and cultivated area.
Consequently, the disadvantages disclosed above and the inadequacy of available solutions in this regard necessitated an improvement in the relevant technical field.
Objects of the Invention:
The present invention relates to a crop classification method that can operate with high accuracy on a global scale.
The most important object of the present invention is to enable estimation of the amount of crop to be produced. Thus, early indicators of crop yield and cultivated area can be detected.
Another important object of the present invention is to enable calculation of an agricultural feasibility score by evaluating fields and the agricultural feasibility of farmers worldwide. Thus, financial risks become predictable.
Yet another important object of the present invention is to provide frequently updated results and substantive analyses from the obtained data. Thus, the method gains an important place in the decision-making process of all large-scale crop buyers, including commodity, food, and retail businesses.
Yet another important object of the present invention is to identify determining, distinguishing, and globally repetitive features that serve to distinguish plant/product groups from each other, instead of using the reflection values directly for classification.
Structural and characteristic features of the present invention, as well as all advantages thereof, will be understood more clearly from the figure disclosed below and the detailed description written with reference to this figure. Therefore, the assessment should be made by taking the figure and the detailed description into consideration.
Description of the Figures:
Figure 1 illustrates a schematic flow diagram of the method according to the present invention.
Reference Numerals:
100. Crop Classification Method
101. Receiving remote sensing images from satellites
102. Determining the areas that may be used for classification in the region where satellite images are examined
103. Generating a time series representation of pixel values for each band
104. Interpolating for missing points in time series
105. Generating the class scores of time series data for each pixel by means of using CNN
106. Extracting the features by means of using multiple time series from different bands
107. Collecting the features from additional convolutional feature layers, from multiple scales
108. Scoring prior regions in respective detector head branches and eliminating low-scoring regions
109. Generating a vector of class scores for each time series corresponding to individual pixels
Description of the Invention:
The present invention particularly relates to a crop classification method that allows for classifying the areas in the regions examined in satellite images according to use, detecting the boundaries of clusters that show similar development characteristics in the same time period within the regions determined as agricultural activity areas, and collecting data related to economic activities, such as crop type classification and harvest estimation, in the designated agricultural activity group areas.
The crop classification method (100) finds determining, distinguishing, and globally repetitive features that serve to distinguish plant/product groups from each other for classification. Deep learning methods are used to extract said features. Then, these determined features are searched for, and the plants are classified as independently from local variables as possible. Unlike the methods in use, the crop classification method (100) divides the classification problem into three smaller problems. Thus, adaptation is achieved much more easily compared to methods in the literature when there is a change in classification targets/classes or application areas.
The crop classification method (100) provides classification by classifying the areas in the regions examined in satellite images according to use, detecting the boundaries of clusters that show similar development characteristics in the same time period within the regions determined as agricultural activity areas, finding determining, distinguishing, and globally repetitive features that serve to distinguish plant/product groups from each other, and training the neural network with said data.
The crop classification method (100) comprises the following process steps. Said process steps are executed on a server.
In the process step of receiving (101) remote sensing images from satellites, images taken by the Sentinel-1 and Sentinel-2 satellites are received.
In the process step of determining (102) the areas that may be used for classification in the region where satellite images are examined, cloud, cloud shadow, and water areas are filtered out of all images. The Fmask 4.0 algorithm is used to carry out this process.
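The filtering of unusable pixels described above can be sketched as follows. The label codes used here are assumptions in the style of an Fmask-type classification mask and are not taken from the disclosure; only clear-land pixels are retained in this toy example.

```python
import numpy as np

# Hypothetical Fmask-style label codes (illustrative assumption):
# 0 = clear land, 1 = water, 2 = cloud shadow, 3 = snow, 4 = cloud.
CLEAR_LAND = 0

def usable_mask(fmask_labels):
    """Boolean mask of pixels usable for crop classification:
    cloud, cloud shadow, and water pixels are excluded."""
    return fmask_labels == CLEAR_LAND

# Toy 3x3 scene mask with a few contaminated pixels.
labels = np.array([[0, 4, 0],
                   [2, 0, 1],
                   [0, 0, 4]])
mask = usable_mask(labels)
print(mask.sum())  # number of usable pixels in the scene
```

In practice such a mask would be applied per acquisition date, so that contaminated observations simply become missing points in the per-pixel time series.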
In the process step of generating (103) a time series representation of pixel values for each band, a time series representation of the pixel values and of the selected indices is generated for each band over the agricultural season. The selected indices are the normalized difference vegetation index (NDVI) and the enhanced vegetation index (EVI). In addition, a separate time series vector is generated for each pixel.
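The per-pixel time series construction described above can be sketched with the standard NDVI and EVI formulas. The reflectance values below are illustrative toy numbers, not data from the disclosure.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-9)

def evi(nir, red, blue):
    """Enhanced vegetation index with its standard coefficients."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0 + 1e-9)

# Toy reflectance time series for one pixel over an agricultural season.
red  = np.array([0.10, 0.08, 0.06, 0.05, 0.07, 0.10])
nir  = np.array([0.20, 0.30, 0.45, 0.50, 0.40, 0.25])
blue = np.array([0.05, 0.05, 0.04, 0.04, 0.05, 0.06])

# Per-pixel feature matrix: raw bands plus the two selected indices,
# one row per band/index, one column per acquisition date.
series = np.stack([red, nir, blue, ndvi(nir, red), evi(nir, red, blue)])
print(series.shape)  # (bands + indices, T)
```

Each pixel thus yields a small (bands + indices) x T matrix, which is the input form assumed by the later feature extraction steps.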
In the process step of interpolating (104) for missing points in the time series, missing points are obtained by fitting one of the predefined curves to the available data. The predefined curves are sigmoid functions based on generalized cultivated-land behavior.
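The sigmoid-curve gap filling described above can be sketched as follows. A double-sigmoid (green-up and senescence) is a common form of generalized cultivated-land curve; the specific parameterization, the coarse grid search, and the toy NDVI values are illustrative assumptions, not the fitting procedure of the disclosure.

```python
import numpy as np

def double_sigmoid(t, base, amp, t_up, r_up, t_down, r_down):
    """Generalized cultivated-land curve: green-up and senescence sigmoids."""
    up = 1.0 / (1.0 + np.exp(-r_up * (t - t_up)))
    down = 1.0 / (1.0 + np.exp(r_down * (t - t_down)))
    return base + amp * up * down

# Observed NDVI samples with a gap (NaN = cloudy acquisition).
t_obs = np.arange(0, 12, dtype=float)
ndvi = double_sigmoid(t_obs, 0.15, 0.6, 3.0, 2.0, 9.0, 2.0)
ndvi[[4, 5]] = np.nan  # missing points

# Coarse grid search over onset/offset; base, amplitude, and slopes fixed
# for the sketch. The best curve is the one matching the valid samples.
valid = ~np.isnan(ndvi)
best, best_err = None, np.inf
for t_up in np.linspace(1, 6, 21):
    for t_down in np.linspace(6, 11, 21):
        pred = double_sigmoid(t_obs, 0.15, 0.6, t_up, 2.0, t_down, 2.0)
        err = np.sum((pred[valid] - ndvi[valid]) ** 2)
        if err < best_err:
            best, best_err = (t_up, t_down), err

# Missing points are read off the fitted curve.
t_up_best, t_down_best = best
filled = ndvi.copy()
filled[~valid] = double_sigmoid(t_obs[~valid], 0.15, 0.6,
                                t_up_best, 2.0, t_down_best, 2.0)
print(np.round(filled[[4, 5]], 3))
```

A production implementation would fit all curve parameters jointly (e.g. by nonlinear least squares) rather than a fixed-slope grid search, but the principle of replacing gaps with values from the fitted sigmoid is the same.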
In the process step of generating (105) the class scores of the time series data for each pixel by means of a CNN, a single-shot multi-box approach is used, as proposed for the SSD detector but adapted for use with hierarchical time series data.
In the process step of extracting (106) the features by means of using multiple time series from different bands, a network based on the VGG-16 architecture is used, modified to take multiple time series from different bands as inputs. Classification is executed for each pixel independently.
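A VGG-style backbone over multi-band time series, as described above, can be sketched with one-dimensional convolutions. This is a minimal illustrative sketch, not the disclosed architecture: the layer counts, channel widths, and random weights are assumptions.

```python
import numpy as np

def conv1d(x, w, b):
    """Valid 1-D convolution. x: (C_in, T), w: (C_out, C_in, K), b: (C_out,)."""
    C_out, C_in, K = w.shape
    T_out = x.shape[1] - K + 1
    out = np.zeros((C_out, T_out))
    for t in range(T_out):
        out[:, t] = np.tensordot(w, x[:, t:t + K], axes=([1, 2], [0, 1])) + b
    return out

def relu(x):
    return np.maximum(x, 0.0)

def maxpool1d(x, k=2):
    T = x.shape[1] - x.shape[1] % k
    return x[:, :T].reshape(x.shape[0], T // k, k).max(axis=2)

# VGG-style stack: two conv+ReLU blocks, each followed by max pooling.
rng = np.random.default_rng(1)
bands, T = 5, 32                      # multiple time series from different bands
x = rng.normal(size=(bands, T))       # one pixel's stacked band/index series
w1, b1 = rng.normal(scale=0.1, size=(16, bands, 3)), np.zeros(16)
w2, b2 = rng.normal(scale=0.1, size=(32, 16, 3)), np.zeros(32)

f = maxpool1d(relu(conv1d(x, w1, b1)))
f = maxpool1d(relu(conv1d(f, w2, b2)))
print(f.shape)  # compact temporal feature map for one pixel
```

Because each pixel is classified independently, the same backbone is simply applied to every pixel's time-series matrix.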
In the process step of collecting (107) the features from additional convolutional feature layers at multiple scales, a detector head with additional convolutional feature layers at multiple scales is utilized in the final stage, as in SSD. Instead of the prior boxes used in SSD, only one-dimensional prior regions on the time axis are used, since the desired results do not involve decomposition along the band axis of the two-dimensional (bands-time) data. These regions cover the entire time interval (one year) and include a wide combination of starting points and lengths.
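The one-dimensional prior regions described above can be sketched as (start, length) pairs on the time axis. The particular start spacing and candidate lengths below are illustrative assumptions; the disclosure only states that the regions cover the whole year with a wide combination of starting points and lengths.

```python
import numpy as np

SEASON_DAYS = 365  # the prior regions span one year on the time axis

def prior_regions(starts, lengths):
    """All (start, length) pairs whose interval fits inside the season."""
    regions = [(s, l) for s in starts for l in lengths if s + l <= SEASON_DAYS]
    return np.array(regions)

starts = np.arange(0, SEASON_DAYS, 30)       # candidate starting points (days)
lengths = np.array([60, 90, 120, 180, 240])  # candidate growth-cycle lengths
regions = prior_regions(starts, lengths)
print(len(regions))  # number of 1-D priors the detector head scores
```

Each of these intervals plays the role that a prior box plays in SSD: the detector head scores every interval and regresses its temporal position and length.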
In the process step of scoring (108) the prior regions in the respective detector head branches and eliminating low-scoring regions, the prior regions are scored in the respective detector head branches, as in SSD, and a non-maximum suppression stage is used to eliminate the low-scoring regions. Crop index, localization, and classification loss functions are defined for each region in order to train the detector head. The crop index loss function measures how well the relevant time period matches the generalized crop behavior. The localization loss function measures the accuracy of the temporal position and length of the relevant time period. The classification loss function measures the accuracy of matching the relevant time period with the specific crop type.
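The non-maximum suppression stage described above reduces to its one-dimensional form when the detections are intervals on the time axis. The candidate intervals and scores below are toy values chosen for illustration.

```python
import numpy as np

def iou_1d(a, b):
    """Intersection over union of two (start, end) intervals on the time axis."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def nms_1d(intervals, scores, iou_thresh=0.5):
    """Keep the highest-scoring intervals, suppressing strong overlaps."""
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:
        if all(iou_1d(intervals[i], intervals[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep

# Toy detections: overlapping candidate growth periods with crop-index scores.
intervals = np.array([[100, 220], [110, 230], [240, 330], [95, 215]])
scores = np.array([0.9, 0.75, 0.8, 0.6])
kept = nms_1d(intervals, scores)
print(kept)
```

Here the two near-duplicates of the first growth period are suppressed, leaving one detection per distinct period, which is exactly the behavior required before the classification matching step.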
In the process step of generating (109) a vector of class scores for each time series corresponding to individual pixels, a vector of class scores is generated over the present classes. Said classification is independent of the start, end (time of year), and duration of the time series. This allows classification of similar crops across multiple climates/geographic regions with differing growth parameters. For each pixel, scores are generated that show how similar the growth model of that pixel is to the reference data for the different crop types. For the initial scores, a binary regression loss is used to provide independent scoring for each class.
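The independent per-class scoring with a binary regression loss, as described above, can be sketched with a per-class sigmoid and logistic loss. The logits and targets below are toy values; the sketch shows only why the scores are independent per class (they need not sum to one, unlike a softmax).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def binary_regression_loss(logits, targets):
    """Independent per-class logistic loss: each class is scored on its own."""
    p = sigmoid(logits)
    eps = 1e-9
    return -np.mean(targets * np.log(p + eps)
                    + (1 - targets) * np.log(1 - p + eps))

# Toy example: 4 classes for one pixel's time series; pixel belongs to class 1.
logits = np.array([-2.0, 3.0, -1.0, 0.5])
targets = np.array([0.0, 1.0, 0.0, 0.0])
scores = sigmoid(logits)                       # independent score per class
loss = binary_regression_loss(logits, targets)
print(np.round(scores, 2), round(float(loss), 3))
```

Because each class gets its own sigmoid score, a pixel can plausibly match several crop types at once, which is what the later hierarchical step resolves.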
The CNN model that performs the classification process uses a hierarchical tree. First, a crop index confidence score is generated independently for each determined sub-category and parent category in each prior region defined on the time axis. Subsequently, the classification is carried out by matching against the classes determined within the regions having a high crop index. Through this change in the detector, the processes of finding and classifying the agricultural activity in the time series are separated from each other. Starting at the top of the hierarchy, the final classification is made by multiplying each child score by the parental confidence score while navigating the tree. The changes made to the detector architecture in this step provide that the detector is precisely trained on distinguishing, globally repetitive features for each plant and region.
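The hierarchical scoring described above (multiplying each child by the parental confidence while navigating the tree) can be sketched as follows. The tree slice and the confidence values are illustrative assumptions, using a small subset of the hierarchy disclosed below.

```python
# Hypothetical slice of the class hierarchy: nested dict, leaves are classes.
tree = {
    "agricultural": {
        "cultivated": {"corn": {}, "wheat": {}},
        "pasture": {},
    },
}
conf = {  # independent per-node confidence scores (illustrative values)
    "agricultural": 0.9, "cultivated": 0.8, "pasture": 0.2,
    "corn": 0.7, "wheat": 0.3,
}

def leaf_scores(tree, conf, parent_score=1.0):
    """Final class score = product of confidences along the root-to-leaf path."""
    scores = {}
    for node, children in tree.items():
        s = parent_score * conf[node]
        if children:
            scores.update(leaf_scores(children, conf, s))
        else:
            scores[node] = s
    return scores

scores = leaf_scores(tree, conf)
print(max(scores, key=scores.get))  # best-scoring leaf class
```

Chaining confidences this way lets the detector learn "is this agricultural activity at all?" separately from "which crop is it?", which is the separation of finding and classifying referred to above.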
There are three groups at the top of the hierarchical tree in the classification that is performed: agricultural areas, areas with permanent vegetation, and other areas. Agricultural areas are divided into two groups, cultivated lands and pastures. Cultivated lands, in turn, form four sub-branches: corn, cotton, soybean, and wheat. Corn has grain and silage sub-branches, cotton has a fiber sub-branch, soybean has a bean sub-branch, and wheat has soft wheat and durum wheat sub-branches. Permanent vegetation is classified as permanent crops, forest, and shrubs. Permanent crops are classified as olives, vineyards, and tea plantations.
Claims

1. A crop classification method (100) that enables crop type classification and harvest estimation in agricultural activity group areas by means of classifying the observed regions in satellite imagery based on use, characterized in that it comprises the following process steps executed on the server:
• Receiving (101) remote sensing images from satellites,
• Determining (102) the areas that may be used for classification in the region where satellite images are examined,
• Generating (103) a time series representation of pixel values for each band,
• Interpolating (104) for missing points in time series,
• Generating (105) the class scores of time series data for each pixel by means of using CNN,
• Extracting (106) the features by means of using multiple time series from different bands,
• Collecting (107) the features, at multiple scales, from additional feature layers that consist of convolutional layers,
• Scoring prior regions in the respective detector head branches and eliminating low-scoring regions (108),
• Generating (109) a vector of class scores for each time series corresponding to individual pixels.

2. A crop classification method (100) according to Claim 1, characterized in that the images taken by the Sentinel-1 and Sentinel-2 satellites are used in the process step of receiving (101) remote sensing images from satellites.

3. A crop classification method (100) according to Claim 1, characterized in that cloud, cloud shadow, and water areas are filtered from all images in the process step of determining (102) the areas that may be used for classification in the region where satellite images are examined.
4. A crop classification method (100) according to Claim 1, characterized in that a time series representation of pixel values and selected indices (NDVI, EVI) is generated for each band in the process step of generating (103) a time series representation of pixel values for each band.

5. A crop classification method (100) according to Claim 1 and Claim 4, characterized in that the selected indices are the normalized difference vegetation index and the enhanced vegetation index in the process step of generating (103) a time series representation of pixel values for each band.

6. A crop classification method (100) according to Claim 1, characterized in that the missing points are obtained by fitting one of the predefined curves to the present data in the process step of interpolating (104) for missing points in time series.

7. A crop classification method (100) according to Claim 1, characterized in that features for the time series of each pixel are extracted using the VGG-16 architecture, with multiple time series from different bands as input, in the process step of extracting (106) the features by means of using multiple time series from different bands.

8. A crop classification method (100) according to Claim 1, characterized in that crop index, localization, and classification loss functions are defined for each region in order to train the detector head in the process step of scoring prior regions in the respective detector head branches and eliminating low-scoring regions (108).

9. A crop classification method (100) according to Claim 1, characterized in that the scores are generated for each pixel showing how similar the growth model of said pixel is to the reference data for different crop types in the process step of generating (109) a vector of class scores for each time series corresponding to individual pixels.
10. A crop classification method (100) according to Claim 1 and Claim 9, characterized in that, for the initial scores, a binary regression loss is used in order to provide independent scoring for each class in the process step of generating (109) a vector of class scores for each time series corresponding to individual pixels.
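Claims 4 to 6 cover the index time series (NDVI, EVI) and the interpolation of missing points. A minimal sketch of these two steps follows, using linear interpolation as a simple stand-in for fitting the predefined growth curves (an assumption made for illustration):

```python
import numpy as np

def ndvi(nir, red):
    # Normalized difference vegetation index (the first index of claim 5).
    return (nir - red) / (nir + red + 1e-12)

def interpolate_gaps(series):
    # Fill masked (NaN) time steps, e.g. cloud-covered acquisitions,
    # by linear interpolation between the valid observations.
    t = np.arange(len(series))
    valid = ~np.isnan(series)
    return np.interp(t, t[valid], series[valid])
```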
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/TR2021/050792 WO2023018387A1 (en) | 2021-08-11 | 2021-08-11 | A crop classification method using deep neural networks |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023018387A1 (en) | 2023-02-16 |
Family
ID=85200976
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/TR2021/050792 WO2023018387A1 (en) | 2021-08-11 | 2021-08-11 | A crop classification method using deep neural networks |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2023018387A1 (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180211156A1 (en) * | 2017-01-26 | 2018-07-26 | The Climate Corporation | Crop yield estimation using agronomic neural network |
CN109977802A (en) * | 2019-03-08 | 2019-07-05 | 武汉大学 | Crops Classification recognition methods under strong background noise |
CN112115983A (en) * | 2020-08-28 | 2020-12-22 | 浙大城市学院 | Deep learning-based crop fruit sorting algorithm |
2021-08-11: WO PCT/TR2021/050792 patent/WO2023018387A1/en, active Application Filing
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116563720A (en) * | 2023-07-12 | 2023-08-08 | 华中师范大学 | Single-double-season rice sample automatic generation method for cooperative optical-microwave physical characteristics |
CN116563720B (en) * | 2023-07-12 | 2023-10-03 | 华中师范大学 | Single-double-season rice sample automatic generation method for cooperative optical-microwave physical characteristics |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21953582 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 2024/001571 Country of ref document: TR |
NENP | Non-entry into the national phase |
Ref country code: DE |