CN117392157A - Edge-aware protective cultivation straw coverage rate detection method - Google Patents

Edge-aware protective cultivation straw coverage rate detection method

Info

Publication number
CN117392157A
CN117392157A (application CN202311704265.5A)
Authority
CN
China
Prior art keywords
straw
edge
image
coverage rate
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311704265.5A
Other languages
Chinese (zh)
Other versions
CN117392157B (en)
Inventor
杨华民
杨宏伟
张婧
冯欣
蒋振刚
张昕
张剑飞
周超然
白森
戴加海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Research Institute Of Changchun University Of Technology
Changchun University of Science and Technology
Original Assignee
Chongqing Research Institute Of Changchun University Of Technology
Changchun University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Research Institute Of Changchun University Of Technology, Changchun University of Science and Technology filed Critical Chongqing Research Institute Of Changchun University Of Technology
Priority to CN202311704265.5A priority Critical patent/CN117392157B/en
Publication of CN117392157A publication Critical patent/CN117392157A/en
Application granted granted Critical
Publication of CN117392157B publication Critical patent/CN117392157B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/181Segmentation; Edge detection involving edge growing; involving edge linking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/17Terrestrial scenes taken from planes or by drones
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P60/00Technologies relating to agriculture, livestock or agroalimentary industries
    • Y02P60/20Reduction of greenhouse gas [GHG] emissions in agriculture, e.g. CO2

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an edge-aware method for detecting the straw coverage rate in protective cultivation. Field images are collected by an unmanned aerial vehicle and the collected data are transmitted to a server; the images are input into a processing model and preprocessed; the processed images are passed to an edge-aware segmentation model, which separates the straw portion from the rest of the image; the ratio of straw pixels to the total pixels of the image is computed from the segmentation result to obtain the straw coverage rate; and the obtained result is output.

Description

Edge-aware protective cultivation straw coverage rate detection method
Technical Field
The invention relates to the technical field of detection, and in particular to an edge-aware method for detecting the straw coverage rate in protective cultivation.
Background
Protective cultivation is an important link in the agricultural production process. It is an advanced farming technique centered on returning crop straw to the field and on low-till or no-till seeding. It can effectively reduce wind and water erosion of the soil, increase soil fertility, drought resistance and moisture retention, and improve agricultural ecology and economic benefits, making it a path toward sustainable agriculture.
Straw returning is one of the effective measures of protective cultivation, and straw coverage rate is an important index for measuring the quality of protective cultivation operations; it is also an important basis for straw-returning subsidies. An accurate and efficient straw coverage rate detection strategy is therefore of great significance for promoting protective cultivation. The national standard GB/T 20865-2017 "No (less) tillage combined seed and fertilizer drill" specifies a manual rope-pulling method for measuring straw coverage rate, which suffers from low operating efficiency, large error and high labor intensity, and lags seriously behind the actual production requirements of the emerging "Internet + agricultural machinery operation" model.
In recent years, with the continuous integration of machine learning, deep learning and artificial intelligence into agricultural scenarios, computer vision technology has been successfully applied to straw coverage detection. However, traditional and deep-learning-based methods still have many shortcomings: straw coverage detection based on threshold segmentation handles interfering objects such as agricultural machinery and tree shadows poorly; detection based on the K-means algorithm is insensitive to target edge information, which reduces detection accuracy; and detection based on histogram features degrades severely on strongly illuminated images, where target edges are unclear and the extracted targets are inaccurate. To address the inability of the prior art to measure straw coverage rate accurately, an edge-aware protective cultivation straw coverage rate detection method is provided.
Disclosure of Invention
This section is intended to outline some aspects of embodiments of the invention and to briefly introduce some preferred embodiments. Some simplifications or omissions may be made in this section, in the abstract and in the title of the application to avoid obscuring their purpose; these simplifications and omissions should not be used to limit the scope of the invention.
Therefore, the object of the invention is to provide an edge-aware protective cultivation straw coverage rate detection method, which obtains the field straw coverage rate through edge-aware segmentation to evaluate the effect of returning straw to the field. Through deep learning, the whole process saves substantial labor and greatly improves both speed and accuracy.
In order to solve the technical problems, according to one aspect of the present invention, the following technical solutions are provided:
an edge-aware protective cultivation straw coverage rate detection method, comprising:
s100, acquiring field images through an unmanned aerial vehicle, and transmitting acquired data to a server;
s200, inputting the image into a processing model, and preprocessing the image;
s300, transmitting the processed image to a segmentation model with edge perception, separating a straw part and the rest part of the image, counting the relation between the pixel size of the straw part and the whole image according to the segmented result to obtain straw coverage rate, and outputting the obtained result;
in the step S300, the processed image is transferred to a segmentation model with edge perception, the straw part and the rest part of the image are separated, the relationship between the pixel size of the straw part and the whole image is counted according to the segmented result to obtain the straw coverage rate, and the obtained result is output as follows:
s301, inputting the straw images into a PVT backbone network for feature extraction, and simultaneously respectively storing four layers of pyramid features;
s302, performing BAM and GSA operations on the obtained features in the step S301 to obtain new features;
s303, respectively performing up-sampling and down-sampling operations on the features obtained in the step S302, connecting the obtained features together in series, and inputting the features to a segmentation head;
s304, calculating the loss function L by the results S1 and S2 obtained by segmentation and the true value respectively, and realizing image segmentation.
As a preferred scheme of the edge-aware protective cultivation straw coverage rate detection method of the present invention, in step S200 the preprocessing includes noise removal and image quality enhancement.
As a preferred scheme of the edge-aware protective cultivation straw coverage rate detection method of the present invention, the GSA is specifically expressed as:

Att(F_i) = softmax(θ(F_i)^T ⊗ φ(F_i)) ⊗ g(F_i),    GSA(F_i) = MLP(Att(F_i)) + F_i

wherein Att(·) denotes the attention operation; θ, φ and g denote 1×1 convolutions; ⊗ denotes matrix multiplication; and the MLP consists of two 1×1 convolution layers and a ReLU activation function combined with a normalization layer.
As a preferred scheme of the edge-aware protective cultivation straw coverage rate detection method of the present invention, the BAM is specifically expressed as:

F_e^i = σ(G_x^i + G_y^i) ⊙ F_i + F_i

wherein ⊙ denotes element-wise multiplication, σ denotes the Sigmoid function, G_x^i and G_y^i denote the horizontal and vertical Sobel gradient maps of the corresponding feature F_i, and F_e^i denotes the edge-enhanced feature obtained after processing; the resulting features are concatenated along the channel dimension.
As a preferred scheme of the edge-aware protective cultivation straw coverage rate detection method of the present invention, the loss function is L(S, G) = L_BCE(S, G) + L_IoU(S, G); finally, the overall loss function is obtained:

L = λ1 · L(S1, G) + λ2 · L(S2, G)

wherein λ1 and λ2 denote weighting factors, S1 and S2 denote the segmentation results, G denotes the ground truth, L_BCE denotes the binary cross entropy loss function, and L_IoU denotes the intersection-over-union (IoU) loss function.
Compared with the prior art, the invention has the following beneficial effects: the invention aims to obtain a more accurate straw segmentation result and to calculate the straw coverage rate. According to the characteristics of straw images collected by unmanned aerial vehicles, a PVT structure is adopted as the backbone network to extract features. To obtain a better segmentation result, boundary information and semantic information are selectively aggregated to draw finer-grained object contours and calibrate object positions, realizing accurate straw segmentation. GSA operations are performed on features from different PVT levels to improve their expressive power; meanwhile, BAM operations are performed on the feature layers of different PVT levels, aggregating boundary information across levels to better preserve boundary details, which effectively alleviates the problem of boundary blurring. Finally, the globally enhanced features and the local boundary details from the boundary-aware module complement each other, and the weights are updated through training of the neural network to obtain the segmentation result. Compared with existing algorithms, the segmentation accuracy is improved, providing a strong guarantee for better calculation of the straw coverage rate.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, a detailed description is given below with reference to the accompanying drawings and specific embodiments. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained from them by a person skilled in the art without inventive effort. Wherein:
FIG. 1 is a flow chart of the edge-aware protective cultivation straw coverage rate detection method according to the present invention;
FIG. 2 is a block diagram of an edge-aware segmentation algorithm according to the present invention;
FIG. 3 is a block diagram of a GSA provided by the invention;
fig. 4 is a block diagram of a BAM provided by the present invention.
Detailed Description
In order that the above objects, features and advantages of the invention will be readily understood, a more particular description of the invention will be rendered by reference to the appended drawings.
Next, the present invention will be described in detail with reference to the drawings. For convenience of description, the cross-sectional views of structures are not partially enlarged to a general scale, and the drawings are only examples, which should not limit the scope of the present invention. In addition, the three dimensions of length, width and depth should be included in actual fabrication.
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in further detail below with reference to the accompanying drawings.
The invention aims to solve the problem that the prior art cannot accurately measure the coverage rate of field straw, and provides an edge-aware protective cultivation straw coverage rate detection model and method to realize detection of the field straw coverage rate.
In order to achieve the above object, the straw coverage rate detection method of the present invention comprises the following steps: first, field images are collected by an unmanned aerial vehicle and the collected data are transmitted to a server; the images are input into a processing model and preprocessed, including noise removal and image quality enhancement; the processed images are passed to an edge-aware segmentation model, which separates the straw portion from the rest of the image; the ratio of straw pixels to the total pixels of the image is computed from the segmentation result to obtain the straw coverage rate; finally, the obtained result is output. A detailed flow chart is shown in FIG. 1. The invention obtains the field straw coverage rate through edge-aware segmentation to evaluate the effect of returning straw to the field; through deep learning, the whole process saves substantial labor while greatly improving speed and accuracy.
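As an illustration of the final counting step, the following is a minimal Python sketch of computing the coverage rate from a binary segmentation mask; the function and variable names are illustrative assumptions, not identifiers from the patent.

import numpy as np

def straw_coverage_rate(mask: np.ndarray) -> float:
    """mask: H x W array where 1 marks straw pixels, 0 everything else."""
    straw_pixels = np.count_nonzero(mask)  # pixels classified as straw
    total_pixels = mask.size               # all pixels in the image
    return straw_pixels / total_pixels

# Example: a 4x4 mask with 6 straw pixels -> coverage 37.5%
mask = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 1, 1, 0],
                 [0, 0, 1, 0]])
print(f"coverage rate: {straw_coverage_rate(mask):.1%}")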
The invention provides an edge-aware segmentation algorithm, which selectively aggregates boundary information and semantic information to draw object contours with finer granularity and calibrate object positions, realizing an accurate straw segmentation task. The structure of the edge-aware segmentation algorithm is shown in FIG. 2; the algorithm uses the Pyramid Vision Transformer (PVT) as the feature backbone network to produce multi-level feature maps.
The PVT feature network utilizes a pyramid structure to extract features at different scales, which improves the accuracy and robustness of image detection and recognition; in particular, the encoder module is well suited to extracting small-target features (which matches the small-target characteristics of images collected by unmanned aerial vehicles in the present invention). Among the levels, the first-level feature map contains the most comprehensive boundary information, while the remaining feature maps provide higher-level feature expression. Transformer-based features have proven successful in computer vision tasks. However, they are usually applied to large-scale image feature extraction, and their effect on local detail features such as boundaries and small objects is not ideal. On this basis, the invention provides a Global Spatial Attention (GSA) module and a Boundary Aware Module (BAM): GSA enhances the features extracted by the PVT backbone network and improves their expressive power, while BAM aggregates boundary information from low-level features and semantic information from high-level features, so as to better preserve boundary details, locate targets, improve the accuracy of straw segmentation, and obtain a more accurate coverage rate.
The attention mechanism strengthens information related to the optimization objective and suppresses irrelevant information. In order to capture global spatial features, the features extracted by the PVT backbone network are fused with global spatial attention units in the process of calculating the straw coverage rate. FIG. 3 depicts the specific structure of global spatial attention. Global spatial attention emphasizes the long-range relationship of each pixel over the global space, which improves the expressive power of the features and yields better segmentation results, and it is complementary to the local boundary details of the boundary-aware module. Global spatial attention can be expressed as:

Att(F_i) = softmax(θ(F_i)^T ⊗ φ(F_i)) ⊗ g(F_i)    (1)

GSA(F_i) = MLP(Att(F_i)) + F_i    (2)

wherein Att(·) denotes the attention operation; θ, φ and g denote 1×1 convolutions; ⊗ denotes matrix multiplication; and the MLP consists of two 1×1 convolution layers and a ReLU activation function combined with a normalization layer. Finally, a 1×1 convolution unifies the number of channels of all features to 128, which facilitates the subsequent concatenation operation.
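For concreteness, the following is a minimal PyTorch sketch of a global spatial attention unit consistent with the description above: 1×1 convolution projections, matrix-multiplication attention, an MLP of two 1×1 convolutions with ReLU and normalization, and a final 1×1 convolution unifying channels to 128. The module name, projection layout and normalization placement are assumptions, not the patent's code.

import torch
import torch.nn as nn

class GlobalSpatialAttention(nn.Module):
    def __init__(self, in_ch: int, out_ch: int = 128):
        super().__init__()
        self.q = nn.Conv2d(in_ch, in_ch, 1)  # 1x1 convolution projections
        self.k = nn.Conv2d(in_ch, in_ch, 1)
        self.v = nn.Conv2d(in_ch, in_ch, 1)
        self.mlp = nn.Sequential(            # two 1x1 convs + norm + ReLU
            nn.Conv2d(in_ch, in_ch, 1),
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, in_ch, 1),
        )
        self.out = nn.Conv2d(in_ch, out_ch, 1)  # unify channels to 128

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)        # B x HW x C
        k = self.k(x).flatten(2)                        # B x C x HW
        v = self.v(x).flatten(2).transpose(1, 2)        # B x HW x C
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)  # B x HW x HW affinity
        y = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return self.out(self.mlp(y) + x)                # residual, then 128 channels

Note that the HW × HW affinity matrix is expensive for large feature maps, which is one reason such units are typically applied to the down-sampled pyramid levels.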
Considering that target boundary regions contain fewer dependencies while being common cues within image regions, boundary-aware modules are fused into the model, and the performance of straw segmentation is improved by aggregating contextual boundary constraints. Unlike traditional boundary fusion methods, boundary information from different levels is aggregated, which better preserves boundary details. The BAM module aims at highlighting the boundary of the target object, which effectively alleviates the problem of boundary blurring. FIG. 4 depicts the specific structure of the boundary-aware module, which uses feature inputs from different levels to extract boundary information while filtering out information unrelated to the boundary. To achieve this, the Sobel operator is applied in the horizontal and vertical directions to obtain gradient maps. Specifically, two parameter-fixed 3×3 convolutions are used, whose kernels are the standard Sobel kernels:

K_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]],    K_y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

Applying the two convolutions to the image features yields the gradient maps G_x^i and G_y^i; the result is then normalized by a Sigmoid function and fused with the input feature map to obtain the edge-enhanced feature map F_e^i:

F_e^i = σ(G_x^i + G_y^i) ⊙ F_i + F_i    (3)

wherein ⊙ denotes element-wise multiplication, σ denotes the Sigmoid function, F_i denotes the corresponding feature, and F_e^i denotes the edge-enhanced feature obtained after processing; the resulting features are concatenated along the channel dimension. Finally, a 1×1 convolution unifies the number of channels of all features to 128, which facilitates the subsequent concatenation operation.
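The following is a hedged PyTorch sketch of this boundary-extraction path: fixed Sobel kernels applied depthwise in the horizontal and vertical directions, a Sigmoid normalization, element-wise fusion with the input features, and a final 1×1 convolution unifying channels to 128. The exact fusion of the two gradient directions is an assumption.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BoundaryAwareModule(nn.Module):
    def __init__(self, in_ch: int, out_ch: int = 128):
        super().__init__()
        sobel_x = torch.tensor([[-1., 0., 1.],
                                [-2., 0., 2.],
                                [-1., 0., 1.]])
        sobel_y = torch.tensor([[-1., -2., -1.],
                                [ 0.,  0.,  0.],
                                [ 1.,  2.,  1.]])
        # fixed (non-learned) depthwise Sobel kernels, one copy per channel
        self.register_buffer("kx", sobel_x.view(1, 1, 3, 3).repeat(in_ch, 1, 1, 1))
        self.register_buffer("ky", sobel_y.view(1, 1, 3, 3).repeat(in_ch, 1, 1, 1))
        self.in_ch = in_ch
        self.out = nn.Conv2d(in_ch, out_ch, 1)  # unify channels to 128

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gx = F.conv2d(x, self.kx, padding=1, groups=self.in_ch)  # horizontal gradients
        gy = F.conv2d(x, self.ky, padding=1, groups=self.in_ch)  # vertical gradients
        edge = torch.sigmoid(gx + gy)   # normalized boundary response, cf. eq. (3)
        return self.out(x * edge + x)   # edge-enhanced features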
According to the structure of FIG. 2, the resulting features are up-sampled and down-sampled respectively, and the obtained features are concatenated and input to the segmentation head to obtain the segmentation results S1 and S2. The loss function is calculated using a binary cross entropy function and an IoU function:

L(S, G) = L_BCE(S, G) + L_IoU(S, G)    (4)

Finally, the overall loss function is obtained:

L = λ1 · L(S1, G) + λ2 · L(S2, G)    (5)

wherein λ1 and λ2 denote weighting factors, S1 and S2 denote the segmentation results, G denotes the ground truth, L_BCE denotes the binary cross entropy loss function, and L_IoU denotes the intersection-over-union (IoU) loss function.
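A minimal Python sketch of this combined objective, assuming logits for S1 and S2 and equal default weights (the patent does not specify the weight values):

import torch
import torch.nn.functional as F

def iou_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # soft IoU loss on sigmoid probabilities; pred and target are B x 1 x H x W
    p = torch.sigmoid(pred)
    inter = (p * target).sum(dim=(1, 2, 3))
    union = (p + target - p * target).sum(dim=(1, 2, 3))
    return (1 - (inter + eps) / (union + eps)).mean()

def total_loss(s1, s2, gt, lam1: float = 1.0, lam2: float = 1.0) -> torch.Tensor:
    # L = lam1 * L(S1, G) + lam2 * L(S2, G), each L = BCE + IoU (eqs. 4-5)
    def single(s):
        return F.binary_cross_entropy_with_logits(s, gt) + iou_loss(s, gt)
    return lam1 * single(s1) + lam2 * single(s2)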
The overall architecture is shown in FIG. 2, where the segmentation process is divided into the following main steps (an illustrative sketch follows the list):
Step 1: input the straw image into the PVT backbone network for feature extraction, and save the four levels of pyramid features;
Step 2: perform BAM and GSA operations on the features obtained in Step 1 to obtain new features;
Step 3: up-sample and down-sample the features obtained in Step 2 according to the structure in FIG. 2, concatenate the resulting features, and input them to the segmentation head;
Step 4: compute the loss function L (formulas 4 and 5) from the segmentation results S1 and S2 and the ground truth, respectively, to realize image segmentation.
Although the invention has been described above with reference to embodiments, various modifications may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In particular, as long as there is no structural conflict, the features of the disclosed embodiments may be combined with each other in any manner; an exhaustive description of these combinations is omitted from this specification merely for brevity. Therefore, the invention is not limited to the particular embodiments disclosed, but includes all embodiments falling within the scope of the appended claims.

Claims (5)

1. An edge-aware protective cultivation straw coverage rate detection method, characterized by comprising the following steps:
s100, acquiring field images through an unmanned aerial vehicle, and transmitting acquired data to a server;
s200, inputting the image into a processing model, and preprocessing the image;
s300, transmitting the processed image to a segmentation model with edge perception, separating a straw part and the rest part of the image, counting the relation between the pixel size of the straw part and the whole image according to the segmented result to obtain straw coverage rate, and outputting the obtained result;
in the step S300, the processed image is transferred to a segmentation model with edge perception, the straw part and the rest part of the image are separated, the relationship between the pixel size of the straw part and the whole image is counted according to the segmented result to obtain the straw coverage rate, and the obtained result is output as follows:
s301, inputting the straw images into a PVT backbone network for feature extraction, and simultaneously respectively storing four layers of pyramid features;
s302, performing BAM and GSA operations on the obtained features in the step S301 to obtain new features;
s303, respectively performing up-sampling and down-sampling operations on the features obtained in the step S302, connecting the obtained features together in series, and inputting the features to a segmentation head;
s304, calculating the loss function L by the results S1 and S2 obtained by segmentation and the true value respectively, and realizing image segmentation.
2. The edge-aware protective cultivation straw coverage rate detection method according to claim 1, wherein in step S200 the preprocessing includes noise removal and image quality enhancement.
3. The edge-aware protective cultivation straw coverage rate detection method according to claim 1, wherein the GSA is specifically expressed as:

Att(F_i) = softmax(θ(F_i)^T ⊗ φ(F_i)) ⊗ g(F_i),    GSA(F_i) = MLP(Att(F_i)) + F_i

wherein Att(·) denotes the attention operation; θ, φ and g denote 1×1 convolutions; ⊗ denotes matrix multiplication; and the MLP consists of two 1×1 convolution layers and a ReLU activation function combined with a normalization layer.
4. The edge-aware protective cultivation straw coverage rate detection method according to claim 1, wherein the BAM is specifically expressed as:

F_e^i = σ(G_x^i + G_y^i) ⊙ F_i + F_i

wherein ⊙ denotes element-wise multiplication, σ denotes the Sigmoid function, G_x^i and G_y^i denote the horizontal and vertical Sobel gradient maps of the corresponding feature F_i, and F_e^i denotes the edge-enhanced feature obtained after processing; the resulting features are concatenated along the channel dimension.
5. The edge-aware protective cultivation straw coverage rate detection method according to claim 1, wherein the loss function is L(S, G) = L_BCE(S, G) + L_IoU(S, G); finally, the overall loss function is obtained:

L = λ1 · L(S1, G) + λ2 · L(S2, G)

wherein λ1 and λ2 denote weighting factors, S1 and S2 denote the segmentation results, G denotes the ground truth, L_BCE denotes the binary cross entropy loss function, and L_IoU denotes the intersection-over-union (IoU) loss function.
CN202311704265.5A 2023-12-13 2023-12-13 Edge-aware protective cultivation straw coverage rate detection method Active CN117392157B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311704265.5A CN117392157B (en) 2023-12-13 2023-12-13 Edge-aware protective cultivation straw coverage rate detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311704265.5A CN117392157B (en) 2023-12-13 2023-12-13 Edge-aware protective cultivation straw coverage rate detection method

Publications (2)

Publication Number Publication Date
CN117392157A true CN117392157A (en) 2024-01-12
CN117392157B CN117392157B (en) 2024-03-19

Family

ID=89465330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311704265.5A Active CN117392157B (en) 2023-12-13 2023-12-13 Edge-aware protective cultivation straw coverage rate detection method

Country Status (1)

Country Link
CN (1) CN117392157B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117574280A (en) * 2024-01-15 2024-02-20 长春理工大学 Sowing quality detection method based on multiple characteristic parameters and MDBO-RF

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210089807A1 (en) * 2019-09-25 2021-03-25 Samsung Electronics Co., Ltd. System and method for boundary aware semantic segmentation
US20210216806A1 (en) * 2020-01-12 2021-07-15 Dalian University Of Technology Fully automatic natural image matting method
CN113343789A (en) * 2021-05-20 2021-09-03 武汉大学 High-resolution remote sensing image land cover classification method based on local detail enhancement and edge constraint
CN114565655A (en) * 2022-02-28 2022-05-31 上海应用技术大学 Depth estimation method and device based on pyramid segmentation attention
CN114663445A (en) * 2022-03-07 2022-06-24 重庆邮电大学 Three-dimensional heart image segmentation method based on multi-scale edge perception
CN115909081A (en) * 2022-10-26 2023-04-04 北京理工大学 Optical remote sensing image ground object classification method based on edge-guided multi-scale feature fusion
CN116091951A (en) * 2023-04-07 2023-05-09 华南农业大学 Method and system for extracting boundary line between farmland and tractor-ploughing path
CN116434012A (en) * 2023-04-26 2023-07-14 山东大学 Lightweight cotton boll detection method and system based on edge perception
CN116740121A (en) * 2023-06-15 2023-09-12 吉林大学 Straw image segmentation method based on special neural network and image preprocessing
CN117132774A (en) * 2023-08-29 2023-11-28 河北师范大学 Multi-scale polyp segmentation method and system based on PVT

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210089807A1 (en) * 2019-09-25 2021-03-25 Samsung Electronics Co., Ltd. System and method for boundary aware semantic segmentation
US20210216806A1 (en) * 2020-01-12 2021-07-15 Dalian University Of Technology Fully automatic natural image matting method
CN113343789A (en) * 2021-05-20 2021-09-03 武汉大学 High-resolution remote sensing image land cover classification method based on local detail enhancement and edge constraint
CN114565655A (en) * 2022-02-28 2022-05-31 上海应用技术大学 Depth estimation method and device based on pyramid segmentation attention
CN114663445A (en) * 2022-03-07 2022-06-24 重庆邮电大学 Three-dimensional heart image segmentation method based on multi-scale edge perception
CN115909081A (en) * 2022-10-26 2023-04-04 北京理工大学 Optical remote sensing image ground object classification method based on edge-guided multi-scale feature fusion
CN116091951A (en) * 2023-04-07 2023-05-09 华南农业大学 Method and system for extracting boundary line between farmland and tractor-ploughing path
CN116434012A (en) * 2023-04-26 2023-07-14 山东大学 Lightweight cotton boll detection method and system based on edge perception
CN116740121A (en) * 2023-06-15 2023-09-12 吉林大学 Straw image segmentation method based on special neural network and image preprocessing
CN117132774A (en) * 2023-08-29 2023-11-28 河北师范大学 Multi-scale polyp segmentation method and system based on PVT

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHOU HUANG et al.: "Scribble-based Boundary-aware Network for Weakly Supervised Salient Object Detection in Remote Sensing Images", arXiv:2202.03501v1 [cs.CV], 7 January 2022 (2022-01-07), pages 1-33 *
LIANG Xinyu; LUO Chen; QUAN Jichuan; XIAO Kaihong; GAO Weijia: "Research progress of image semantic segmentation technology based on deep learning", Computer Engineering and Applications, no. 02, 31 December 2020 (2020-12-31) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117574280A (en) * 2024-01-15 2024-02-20 长春理工大学 Sowing quality detection method based on multiple characteristic parameters and MDBO-RF
CN117574280B (en) * 2024-01-15 2024-04-16 长春理工大学 Sowing quality detection method based on multivariate characteristic parameters and MDBO-RF

Also Published As

Publication number Publication date
CN117392157B (en) 2024-03-19

Similar Documents

Publication Publication Date Title
WO2022160771A1 (en) Method for classifying hyperspectral images on basis of adaptive multi-scale feature extraction model
Zhou et al. Wheat ears counting in field conditions based on multi-feature optimization and TWSVM
Yu et al. Segmentation and measurement scheme for fish morphological features based on Mask R-CNN
CN111126287B (en) Remote sensing image dense target deep learning detection method
CN103996018B (en) Face identification method based on 4DLBP
Wang et al. SSRNet: In-field counting wheat ears using multi-stage convolutional neural network
CN117392157B (en) Edge-aware protective cultivation straw coverage rate detection method
Liu et al. The recognition of apple fruits in plastic bags based on block classification
Bhagat et al. Eff-UNet++: A novel architecture for plant leaf segmentation and counting
CN111179216B (en) Crop disease identification method based on image processing and convolutional neural network
CN110288033B (en) Sugarcane top feature identification and positioning method based on convolutional neural network
CN114708208B (en) Machine vision-based famous tea tender bud identification and picking point positioning method
Sun et al. Wheat head counting in the wild by an augmented feature pyramid networks-based convolutional neural network
Liu et al. Small unopened cotton boll counting by detection with MRF-YOLO in the wild
Wang et al. Combining SUN-based visual attention model and saliency contour detection algorithm for apple image segmentation
CN113435254A (en) Sentinel second image-based farmland deep learning extraction method
Qi et al. Related study based on otsu watershed algorithm and new squeeze-and-excitation networks for segmentation and level classification of tea buds
Umapathy Eaganathan et al. Identification of sugarcane leaf scorch disease using K-means clustering segmentation and KNN based classification
CN115731257A (en) Leaf form information extraction method based on image
Liu et al. WSRD-Net: A convolutional neural network-based arbitrary-oriented wheat stripe rust detection method
CN114677606A (en) Citrus fine-grained disease identification method based on attention mechanism and double-branch network
CN111882573B (en) Cultivated land block extraction method and system based on high-resolution image data
Agarwal et al. Plant leaf disease classification using deep learning: A survey
CN112785548A (en) Pavement crack detection method based on vehicle-mounted laser point cloud
CN111523503A (en) Apple target detection method based on improved SSD algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant