CN115294651A - Behavior analysis method based on farming scene and server

Info

Publication number
CN115294651A
Authority
CN
China
Prior art keywords
farm
farming
behavior analysis
behavior
video data
Prior art date
2022-08-03
Legal status
Pending
Application number
CN202210928291.5A
Other languages
Chinese (zh)
Inventor
张美跃
周业
Current Assignee
Hengruitong Fujian Information Technology Co., Ltd.
Original Assignee
Hengruitong Fujian Information Technology Co., Ltd.
Priority date
Filing date
2022-08-03
Publication date
2022-11-04
Application filed by Hengruitong Fujian Information Technology Co., Ltd.
Priority to CN202210928291.5A
Publication of CN115294651A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
    • G06V10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 - Fusion of extracted features
    • G06V10/82 - Arrangements using neural networks
    • G06V20/00 - Scenes; scene-specific elements
    • G06V20/40 - Scenes; scene-specific elements in video content
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/103 - Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/23 - Recognition of whole body movements, e.g. for sport training
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Human Computer Interaction (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Image Analysis (AREA)

Abstract

According to the behavior analysis method and server based on a farming scene, a farm implement target detection model is constructed and trained on information about various farm implements to obtain a trained farm implement detection model; video data are acquired, and the human body postures in the video data are analyzed with a behavior recognition model to obtain the joint point coordinate feature matrices of all human bodies; the video data are recognized with the trained farm implement detection model to obtain a recognition result; and farming behavior analysis is performed according to the joint point coordinate feature matrices of all human bodies and the recognition result. Special farm implements used in farming behaviors are introduced into the behavior recognition network as target features, building richer behavior features and improving the accuracy of farming behavior recognition.

Description

Behavior analysis method based on farming scene and server
Technical Field
The invention relates to the field of data analysis, and in particular to a behavior analysis method and server based on a farming scene.
Background
In the field of video image behavior analysis, traditional schemes are based on classical computer vision techniques, in which image features are extracted manually.
With the rapid development of machine learning and deep learning in recent years, extracting and fusing features with machine learning or deep learning has become relatively conventional. Current behavior recognition in farming scenes usually suffers from the following problems:
1: Cameras deployed in farming scenes have a wide field of view and capture a great deal of information, including many crops, vegetation, trees, steep slopes and other occluding objects; traditional target detection models cannot handle such complex scenes.
2: Farming scenes contain behaviors performed by several people in cooperation, and existing models perform poorly in such multi-person scenes: a behavior completed cooperatively by several people tends to be recognized as several independent single-person behaviors rather than as one cooperative behavior.
3: Different farming behaviors are highly similar to one another, and the above models lack the discriminative power to tell apart behaviors whose actions resemble each other.
4: Farming activities typically involve one or more farm implements appearing in the image, but traditional skeleton-based behavior recognition methods cannot reflect the differences in how different implements are used.
5: A farm park generally covers a wide area and requires many cameras for real-time monitoring. Multi-camera real-time monitoring produces a large amount of video stream data; analyzing it in real time directly with a traditional model entails a huge amount of computation and places high demands on hardware resources.
Disclosure of Invention
Technical problem to be solved
In order to solve the above problems in the prior art, the present invention provides a behavior analysis method and server based on a farming scene that can improve the accuracy of farming behavior recognition.
(II) technical scheme
In order to achieve the above object, the invention adopts the following technical solution:
A behavior analysis method based on a farming scene, comprising the steps of:
S1, constructing a farm implement target detection model and training it on information about various farm implements to obtain a trained farm implement detection model;
S2, acquiring video data and analyzing the human body postures in the video data with a behavior recognition model to obtain the joint point coordinate feature matrices of all human bodies;
S3, recognizing the video data with the trained farm implement detection model to obtain a recognition result;
S4, performing farming behavior analysis according to the joint point coordinate feature matrices of all human bodies and the recognition result.
In order to achieve the above object, the invention adopts another technical solution:
A behavior analysis server based on a farming scene, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the program:
S1, constructing a farm implement target detection model and training it on information about various farm implements to obtain a trained farm implement detection model;
S2, acquiring video data and analyzing the human body postures in the video data with a behavior recognition model to obtain the joint point coordinate feature matrices of all human bodies;
S3, recognizing the video data with the trained farm implement detection model to obtain a recognition result;
S4, performing farming behavior analysis according to the joint point coordinate feature matrices of all human bodies and the recognition result.
(III) advantageous effects
The invention has the following beneficial effects: a farm implement target detection model is constructed and trained on information about various farm implements to obtain a trained farm implement detection model; video data are acquired, and the human body postures in the video data are analyzed with a behavior recognition model to obtain the joint point coordinate feature matrices of all human bodies; the video data are recognized with the trained farm implement detection model to obtain a recognition result; and farming behavior analysis is performed according to the joint point coordinate feature matrices of all human bodies and the recognition result. Special farm implements used in farming behaviors are introduced into the behavior recognition network as target features, building richer behavior features and improving the accuracy of farming behavior recognition.
Drawings
FIG. 1 is a flow chart of a behavior analysis method based on a farming scene according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the overall structure of a behavior analysis server based on a farming scene according to an embodiment of the present invention.
[Description of reference numerals]
1: a behavior analysis server based on the farming scene;
2: a memory;
3: a processor.
Detailed Description
To better explain the present invention and facilitate understanding, the present invention is described in detail below by way of specific embodiments with reference to the accompanying drawings.
Example one
Referring to FIG. 1, a behavior analysis method based on a farming scene includes the following steps.
In this embodiment, the method further includes, before step S1:
S01, deploying multiple monitoring cameras according to the terrain and the coverage of each camera;
S02, constructing a pedestrian target detection model and training it with YOLOv3 to obtain a trained pedestrian detection model;
S03, embedding the trained pedestrian detection model into each monitoring camera, so that each camera performs pedestrian target detection at the edge side and transmits the detection result to the server.
S1, constructing a farm implement target detection model and training it on information about various farm implements to obtain a trained farm implement detection model;
In this embodiment, step S1 specifically includes:
S11, constructing a farm implement target detection model and training it with YOLOv3 on information about various farm implements to obtain a trained farm implement detection model;
S12, embedding the trained farm implement detection model into the server.
S2, acquiring video data and analyzing the human body postures in the video data with a behavior recognition model to obtain the joint point coordinate feature matrices of all human bodies;
In this embodiment, step S2 specifically includes:
acquiring video data and analyzing the human body postures in the video data with an OpenPose model to obtain the joint point coordinate feature matrix of all human bodies in the current data frame.
The joint point coordinate feature matrix comprises the number of people, the number of skeleton points, the x- and y-axis positions of each skeleton point in the image, and a confidence score.
S3, recognizing the video data with the trained farm implement detection model to obtain a recognition result;
The recognition result comprises a classification and position information feature matrix of the farm implements.
The position information feature matrix comprises the number of farm implements and, for each implement, its category, the x- and y-axis positions of the upper-left corner of its image region, the x- and y-axis positions of the lower-right corner of its image region, and a confidence score.
S4, performing farming behavior analysis according to the joint point coordinate feature matrices of all human bodies and the recognition result.
In this embodiment, step S4 specifically includes:
S41, calculating the association degree A(p)(k) between each human body and each farm implement to obtain a corresponding association degree set, and obtaining the final grouping of human bodies and farm implements through the Hungarian algorithm;
S42, fusing the grouped human body features and farm implement features and performing farming behavior analysis in the ST-GCN units of an ST-GCN network, generating the action time, action place and action type to form an analysis log;
where A(p)(k) denotes the association degree between the p-th person and the k-th farm implement, and its value is the sum of the Euclidean distances between the coordinates of all joint points of the p-th person and the center coordinates of the k-th farm implement.
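Written out, the association degree described in S41 is, as a reconstruction from the prose above (with J_j^p the image coordinates of joint j of person p, C_k the center coordinates of the k-th implement, and 25 the number of skeleton points, per the [n, 25, 3] shape used in Example two):

A(p)(k) = \sum_{j=1}^{25} \lVert J_j^{p} - C_k \rVert_2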
In this embodiment, the method further comprises the step of:
transmitting the video stream data collected by the multiple monitoring cameras to the server in real time for storage in two flows: one flow is stored on the hard disk as review data, and the other is kept in memory as data to be analyzed.
Example two
This embodiment differs from the first embodiment in that it further explains, in combination with a specific application scenario, how the behavior analysis method based on the farming scene is implemented:
The invention aims to solve the problem of accurately identifying farming behaviors and to improve the recognition rate and accuracy of farming behavior recognition, so that the system can accurately identify farmers' production activities in real time, effectively supervise and assist the production process, ensure that production activities are compliant and legal, and effectively improve farmers' production efficiency.
The core of the implementation is to deploy multiple cameras in the field and analyze the real-time monitoring video streams, so that the platform can keep track of farming behaviors within camera range in real time. A self-built model algorithm quickly identifies the behavior type and generates all kinds of related information: action time, action place, action type, etc.; these records are archived to form analysis logs, through which the results of each behavior analysis, as well as past behavior logs, can be reviewed.
The specific process is as follows:
1. Infrastructure construction:
Multiple monitoring cameras are deployed according to the field terrain and the coverage of each camera, and a target detection algorithm is embedded into each camera; the target detection algorithm uses YOLOv3 to identify targets.
2. Target detection model training step
2.1. Construct a pedestrian target detection model and train it with YOLOv3 to obtain a trained pedestrian detection model;
embed the trained pedestrian detection model into each monitoring camera so that each camera performs pedestrian target detection at the edge side and transmits the detection result to the server.
2.2. Construct a farm implement target detection model and train it with YOLOv3 on information about various farm implements to obtain a trained farm implement detection model;
embed the trained farm implement detection model into the server.
3. Video stream acquisition and storage step
The video stream data collected by the multiple monitoring cameras are transmitted to the server in real time for storage in two flows: one flow is stored on the hard disk as review data, and the other is kept in memory as data to be analyzed. The data in memory usually retain only one time window's worth of frames, guaranteeing a real-time data source for the video classification model. A minimal sketch of this buffering scheme follows.
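In this sketch the stream URL, codec, file name and 25 fps rate are illustrative assumptions; only the 10 s window length echoes the clip length used in step 6.

```python
# Sketch of the two storage flows: disk for review, a fixed-length in-memory
# buffer that keeps roughly one time window for analysis.
import collections
import time

import cv2

WINDOW_SECONDS = 10                 # one time window, matching the 10 s clips of step 6
FPS = 25                            # assumed camera frame rate
buffer = collections.deque(maxlen=WINDOW_SECONDS * FPS)   # flow 2: in-memory window

cap = cv2.VideoCapture("rtsp://camera-01/stream")          # hypothetical camera stream
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter("review.avi", cv2.VideoWriter_fourcc(*"XVID"), FPS, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    writer.write(frame)                    # flow 1: hard disk, kept as review data
    buffer.append((time.time(), frame))    # flow 2: memory, data to be analysed
```

The deque's maxlen makes old frames drop out automatically, so memory use stays bounded to one window.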
4. Pedestrian target detection step
Each monitoring camera performs pedestrian target detection at the edge side and transmits the detection result to the server. The server listens for these detection results and triggers the behavior recognition computation when a pedestrian is present.
5. Behavior recognition model construction and analysis steps
Video data are acquired, and the human body postures in the video data are analyzed with an OpenPose model to obtain the joint point coordinate feature matrix of all human bodies in the current data frame.
The joint point coordinate feature matrix comprises the number of people, the number of skeleton points, the x- and y-axis positions of each skeleton point in the image, and a confidence score.
Specifically, the joint point coordinate feature matrix has shape [n, 25, 3], where n is the number of people, 25 is the number of skeleton points, and 3 covers each skeleton point's x- and y-axis position in the image plus its confidence score.
The video data are recognized with the YOLOv3 model trained in step 2 to obtain a recognition result containing the classification and position information feature matrix of the farm implements.
The position information feature matrix comprises the number of farm implements and, for each implement, its category, the x- and y-axis positions of the upper-left corner of its image region, the x- and y-axis positions of the lower-right corner of its image region, and a confidence score.
Specifically, the classification and position information feature matrix has shape [k, 6], where k is the number of farm implements and 6 covers each implement's category, the x- and y-axis positions of the upper-left corner of its image region, the x- and y-axis positions of the lower-right corner, and the confidence score; k may be 0, indicating that no farm implement was identified, and the farm implement categories are trained according to the actual scene. The sketch below makes the two shapes concrete.
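Only the shapes [n, 25, 3] and [k, 6] and the field order come from the text; all numbers and the class label below are dummy values for demonstration.

```python
# Dummy illustration of the two feature matrices described above.
import numpy as np

n, k = 2, 1   # e.g. two detected people and one detected farm implement

# [n, 25, 3]: for each person, 25 skeleton points as (x, y, confidence).
skeletons = np.zeros((n, 25, 3), dtype=np.float32)
skeletons[0, 4] = [312.0, 208.5, 0.91]   # point 4 is the right wrist in BODY_25

# [k, 6]: per implement - category, upper-left x/y, lower-right x/y, confidence.
implements = np.array([[3.0, 290.0, 180.0, 350.0, 260.0, 0.87]],  # category 3 is a placeholder
                      dtype=np.float32)

assert skeletons.shape == (n, 25, 3) and implements.shape == (k, 6)
```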
The association degree A(p)(k) between each human body and each farm implement is calculated to obtain a corresponding association degree set; a bipartite relation graph is constructed from the human body feature set and the farm implement feature set, and the final grouping of human bodies and farm implements is obtained through the Hungarian algorithm.
The grouped human body features and farm implement features are fused, and farming behavior analysis is performed in the ST-GCN units of an ST-GCN network, generating the action time, action place and action type to form an analysis log.
Here A(p)(k) denotes the association degree between the p-th person and the k-th farm implement, and its value is the sum of the Euclidean distances between the coordinates of all joint points of the p-th person and the center coordinates of the k-th farm implement.
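Under this definition, the grouping can be sketched with SciPy's Hungarian-algorithm implementation, scipy.optimize.linear_sum_assignment, which minimizes the total association distance. The sketch reuses the skeletons and implements arrays from the previous example and is an illustrative reconstruction, not the patent's own code.

```python
# Sketch of the grouping in step 5: association degree A(p)(k) as the sum of
# Euclidean distances from person p's joints to implement k's box centre,
# then optimal person-implement grouping via the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def association_matrix(skeletons, implements):
    # skeletons: [n, 25, 3]  (x, y, confidence per joint)
    # implements: [k, 6]     (category, x1, y1, x2, y2, confidence)
    centres = np.stack([(implements[:, 1] + implements[:, 3]) / 2,
                        (implements[:, 2] + implements[:, 4]) / 2], axis=1)   # [k, 2]
    joints = skeletons[:, :, :2]                                             # [n, 25, 2]
    # diffs[p, k, j] = vector from implement k's centre to joint j of person p
    diffs = joints[:, None, :, :] - centres[None, :, None, :]
    return np.linalg.norm(diffs, axis=-1).sum(axis=-1)                       # A, shape [n, k]

A = association_matrix(skeletons, implements)
people, tools = linear_sum_assignment(A)   # minimizes the total association distance
pairs = list(zip(people.tolist(), tools.tolist()))
print(pairs)                               # e.g. [(0, 0)]: person 0 grouped with implement 0
```

The fusion of the grouped implement features with the skeleton features before the ST-GCN unit is model-specific and is not sketched here.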
6. Behavior recognition model training and deployment
The model of step 5 is trained on video data collected in advance, with 10 s clips as training input (the clip length can be adjusted to the actual farming behaviors). The trained model is deployed on the server. The server receives the target detection results of the front-end cameras in real time, extracts the frames of the corresponding time period from the in-memory video stream of step 3 according to each detection result, and feeds them into the trained model for behavior recognition, as sketched below.
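A minimal sketch of this trigger path, with recognize_behavior and the event layout as hypothetical stand-ins for the trained pipeline pieces described above:

```python
# Sketch of the trigger path: a pedestrian detection event from a front-end
# camera selects the matching slice of buffered frames, which is fed to the
# trained model. `buffer` holds (timestamp, frame) pairs as in the storage
# sketch; `recognize_behavior` and the event dict layout are hypothetical.
WINDOW_SECONDS = 10   # default clip length; adjustable per actual farming behavior

def frames_in_window(buffer, t_start, t_end):
    return [frame for ts, frame in buffer if t_start <= ts <= t_end]

def on_pedestrian_detected(event, buffer, recognize_behavior, log):
    # `event` is assumed to carry the detection timestamp from the camera.
    clip = frames_in_window(buffer, event["time"] - WINDOW_SECONDS, event["time"])
    if clip:
        action_time, action_place, action_type = recognize_behavior(clip)
        log.append((action_time, action_place, action_type))   # analysis log entry
```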
The farming behavior recognition method models the behavior analysis process of the farming scene and reconstructs and assembles several algorithm models, using the pose estimation algorithm OpenPose, the skeleton behavior recognition model ST-GCN and the deep-learning target detection algorithm YOLOv3. Special farm implements used in farming behaviors are introduced into the behavior recognition network as target features, building richer behavior features and improving the accuracy of farming behavior recognition.
A pedestrian target detection algorithm is integrated at the edge side, moving part of the simple computation to the edge, which reduces the load on the server and improves the throughput of behavior recognition.
Example three
Referring to FIG. 2, a behavior analysis server 1 based on a farming scene includes a memory 2, a processor 3, and a computer program stored in the memory 2 and executable on the processor 3; the processor 3 implements the steps of the first embodiment when executing the program.
The above description is only an embodiment of the present invention and does not limit the scope of the present invention; all equivalent changes made using the contents of the present specification and drawings, whether applied directly or indirectly in related technical fields, are included in the scope of the present invention.

Claims (10)

1. A behavior analysis method based on a farming scene, characterized by comprising the following steps:
S1, constructing a farm implement target detection model and training it on information about various farm implements to obtain a trained farm implement detection model;
S2, acquiring video data and analyzing the human body postures in the video data with a behavior recognition model to obtain the joint point coordinate feature matrices of all human bodies;
S3, recognizing the video data with the trained farm implement detection model to obtain a recognition result;
S4, performing farming behavior analysis according to the joint point coordinate feature matrices of all human bodies and the recognition result.
2. The behavior analysis method based on a farming scene according to claim 1, wherein step S1 specifically comprises:
S11, constructing a farm implement target detection model and training it with YOLOv3 on information about various farm implements to obtain a trained farm implement detection model;
S12, embedding the trained farm implement detection model into a server.
3. The behavior analysis method based on a farming scene according to claim 1, further comprising, before step S1:
S01, deploying multiple monitoring cameras according to the terrain and the coverage of each camera;
S02, constructing a pedestrian target detection model and training it with YOLOv3 to obtain a trained pedestrian detection model;
S03, embedding the trained pedestrian detection model into each monitoring camera, so that each camera performs pedestrian target detection at the edge side and transmits the detection result to a server.
4. The behavior analysis method based on a farming scene according to claim 1, wherein step S2 specifically comprises:
acquiring video data and analyzing the human body postures in the video data with an OpenPose model to obtain the joint point coordinate feature matrix of all human bodies in the current data frame.
5. The behavior analysis method based on a farming scene according to claim 1, wherein the joint point coordinate feature matrix comprises the number of people, the number of skeleton points, the x- and y-axis positions of each skeleton point in the image, and a confidence score.
6. The behavior analysis method based on a farming scene according to claim 1, wherein the recognition result comprises a classification and position information feature matrix of the farm implements;
the position information feature matrix comprises the number of farm implements and, for each implement, its category, the x- and y-axis positions of the upper-left corner of its image region, the x- and y-axis positions of the lower-right corner of its image region, and a confidence score.
7. The behavior analysis method based on a farming scene according to claim 1, wherein step S4 specifically comprises:
S41, calculating the association degree A(p)(k) between each human body and each farm implement to obtain a corresponding association degree set, and obtaining the final grouping of human bodies and farm implements through the Hungarian algorithm;
S42, fusing the grouped human body features and farm implement features and performing farming behavior analysis in the ST-GCN units of an ST-GCN network, generating the action time, action place and action type to form an analysis log;
where A(p)(k) denotes the association degree between the p-th person and the k-th farm implement, and its value is the sum of the Euclidean distances between the coordinates of all joint points of the p-th person and the center coordinates of the k-th farm implement.
8. The behavior analysis method based on a farming scene according to claim 3, further comprising the step of:
transmitting the video stream data collected by the multiple monitoring cameras to the server in real time for storage in two flows: one flow is stored on the hard disk as review data, and the other is kept in memory as data to be analyzed.
9. A behavior analysis server based on a farming scene, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the following steps when executing the program:
S1, constructing a farm implement target detection model and training it on information about various farm implements to obtain a trained farm implement detection model;
S2, acquiring video data and analyzing the human body postures in the video data with a behavior recognition model to obtain the joint point coordinate feature matrices of all human bodies;
S3, recognizing the video data with the trained farm implement detection model to obtain a recognition result;
S4, performing farming behavior analysis according to the joint point coordinate feature matrices of all human bodies and the recognition result.
10. The behavior analysis server based on a farming scene according to claim 9, wherein step S4 specifically comprises:
S41, calculating the association degree A(p)(k) between each human body and each farm implement to obtain a corresponding association degree set, and obtaining the final grouping of human bodies and farm implements through the Hungarian algorithm;
S42, fusing the grouped human body features and farm implement features and performing farming behavior analysis in the ST-GCN units of an ST-GCN network, generating the action time, action place and action type to form an analysis log;
where A(p)(k) denotes the association degree between the p-th person and the k-th farm implement, and its value is the sum of the Euclidean distances between the coordinates of all joint points of the p-th person and the center coordinates of the k-th farm implement.
CN202210928291.5A (priority date 2022-08-03, filing date 2022-08-03) - Behavior analysis method based on farming scene and server - Pending - published as CN115294651A

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210928291.5A CN115294651A (en) 2022-08-03 2022-08-03 Behavior analysis method based on farming scene and server


Publications (1)

Publication Number Publication Date
CN115294651A 2022-11-04

Family

ID=83826314

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210928291.5A Pending CN115294651A (en) 2022-08-03 2022-08-03 Behavior analysis method based on farming scene and server

Country Status (1)

Country Link
CN (1) CN115294651A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115880558A (en) * 2023-03-03 2023-03-31 北京市农林科学院信息技术研究中心 Farming behavior detection method and device, electronic equipment and storage medium
CN115937795A (en) * 2023-03-15 2023-04-07 湖北泰跃卫星技术发展股份有限公司 Method and device for acquiring farming activity record based on rural video


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination