CN115956500A - Distributed automatic pepper pollination device and method based on deep learning - Google Patents
- Publication number
- CN115956500A (application number CN202211594927.3A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Breeding Of Plants And Reproduction By Means Of Culturing (AREA)
Abstract
The invention belongs to the technical field of automatic pollination of pepper flowers and provides a distributed automatic pepper flower pollination device and method based on deep learning. The device comprises a plurality of robots in communication with each other; each robot comprises a body, a mechanical arm end effector, a plurality of groups of depth cameras, and a deep learning control board. The deep learning control board identifies the center points of all pepper flowers to be pollinated in the current pepper flower depth image; calculates, one by one, the three-dimensional position of each center point relative to the corresponding depth camera; determines the three-dimensional position of each center point relative to the mechanical arm end effector, so as to control the end effector to carry out pollination; after all targets in the current depth image have been pollinated, calculates the current robot's completion progress; and sends that progress to the other robots while receiving theirs, so as to decide the current robot's next action.
Description
Technical Field
The invention belongs to the technical field of automatic pollination of pepper flowers, and particularly relates to a distributed automatic pollination device and method of pepper flowers based on deep learning.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
The invention concerns the three-line hybrid seed-production technique for peppers, in which a male-sterile line serves as the female parent: after a sterile-line flower is identified, pollen carried from the restorer line is applied to it to produce hybrid seed. The "three lines" are the male-sterile line, the maintainer line, and the restorer line. The male-sterile line is a parent material whose pistil is normal but which produces no viable pollen, so its flowers set no fruit on their own; the maintainer line is a parent material whose pollen, applied to the sterile line, yields progeny that set fruit and seed yet remain male-sterile; the restorer line is a parent material whose pollen, applied to the sterile line, yields fruit and seed whose hybrid progeny recover fertility and are used for commercial production. The inventors found that pollination in existing three-line pepper hybrid seed production is still performed mainly by hand, making it the most time-consuming, labor-intensive, and costly step.
Disclosure of Invention
To solve the technical problems noted in the background, the invention provides a distributed automatic pepper flower pollination device and method based on deep learning. The device patrols a pepper greenhouse, automatically identifies pepper flowers to be pollinated, and pollinates them automatically; its multi-robot cooperative, distributed working mode allows the pollination of a whole field to be completed in a short time.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention provides a distributed automatic pepper flower pollination device based on deep learning.
A distributed automatic pepper pollination device based on deep learning comprises a plurality of robots which are communicated with each other; each robot comprises a body, a mechanical arm end effector, a plurality of groups of depth cameras and a deep learning control board card; each group of depth cameras is used for acquiring a pepper flower depth image with corresponding dimensionality and transmitting the pepper flower depth image to a deep learning control board card; the deep learning control board card is used for:
identifying the central points of all pepper flowers to be pollinated in the current pepper flower depth image;
calculating three-dimensional position information between the center point of each pepper flower to be pollinated and the corresponding depth camera one by one;
determining the three-dimensional position of the center point of each pepper flower to be pollinated relative to the mechanical arm end effector, based on the three-dimensional coordinate transformation between the depth camera and the end effector, so as to control the end effector to carry out pollination;
after pollination of all targets in the current pepper flower depth image is completed, calculating the completion progress of the current robot;
and sending the completion progress of the current robot to other intercommunicated robots, and receiving the completion progress of the other intercommunicated robots so as to control the current robot to execute the next action.
As an embodiment, in the deep learning control board, the process of identifying the pepper flowers to be pollinated in the current pepper flower depth image is as follows:
extracting the pepper flower posture characteristics in the multi-dimensional pepper flower depth image;
when the posture characteristic is frontal, judging the corresponding pepper flower to be a flower to be pollinated and labeling it; a frontal flower is one in which no petal is occluded and the pistil is completely exposed.
As an embodiment, the posture characteristics of the pepper flowers further include horizontal, inclined, and vertical:
the horizontal posture is one in which the petals are open and the flower center faces a direction perpendicular to the image-shooting direction;
the inclined posture is one in which the petals are open, only some petals are visible, and the flower center faces at an angle to the image-shooting direction;
the vertical posture is the bud state, in which the pistil is completely enclosed by the petals.
As an embodiment, before identifying the pepper flowers to be pollinated in the current pepper flower depth image, the deep learning control board further includes:
all pepper flowers in the current pepper flower depth image are identified.
As an embodiment, in the deep learning control board, the process of identifying all the pepper flowers in the current pepper flower depth image is as follows:
and extracting color and shape features from the current pepper flower depth image with a pre-trained pepper flower recognition model, judging whether pepper flowers are present in the image and, if so, determining their positions.
In one embodiment, in the deep learning control board, the completion progress is a ratio of a current distance traveled to a total distance of the task track.
As an embodiment, the deep learning control board is further configured to:
judging whether the current robot completes the task, if not, controlling the current robot to continue to execute the task; and if so, controlling the current robot to stop running the current task, searching other robots which do not finish the task, and distributing corresponding tasks to execute.
In one embodiment, the depth cameras are at least three groups.
The invention provides a control method of a distributed automatic pepper flower pollination device based on deep learning.
A control method of a distributed automatic pepper flower pollination device based on deep learning comprises the following steps:
receiving a multi-dimensional pepper flower depth image transmitted by a depth camera;
identifying the central points of all pepper flowers to be pollinated in the current pepper flower depth image;
calculating three-dimensional position information between the central point of each pepper flower to be pollinated and the corresponding depth camera one by one;
determining the three-dimensional position of the center point of each pepper flower to be pollinated relative to the mechanical arm end effector, based on the three-dimensional coordinate transformation between the depth camera and the end effector, so as to control the end effector to carry out pollination;
after all targets in the current pepper flower depth image are pollinated, calculating the current robot completion progress;
and sending the completion progress of the current robot to other intercommunicating robots, and receiving the completion progress of the other intercommunicating robots so as to control the current robot to execute the next action.
As an embodiment, the control method further includes:
judging whether the current robot completes the task, if not, controlling the current robot to continue to execute the task; if yes, the current robot is controlled to stop running the current task, other robots which do not finish the task are searched, and corresponding tasks are distributed to be executed.
Compared with the prior art, the invention has the beneficial effects that:
the invention discloses a distributed automatic pepper flower pollination device and method based on deep learning by combining deep learning and robot distributed layout technologies, which can realize patrol work of a robot in a pepper planting greenhouse, automatically identify pepper flowers to be pollinated and carry out automatic pollination, have a working mode of multi-machine cooperative distributed type and can finish the pepper flower pollination work in the current field in a short time.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
FIG. 1 is a schematic view of a robot mechanical structure according to an embodiment of the present invention;
fig. 2 is a flow chart of the distributed pepper flower automatic pollination based on deep learning in the embodiment of the invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for describing particular embodiments only and is not intended to limit the exemplary embodiments of the invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, and the terms "comprises" and/or "comprising" specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
Example one
The embodiment provides a distributed automatic pepper flower pollination device based on deep learning, which comprises a plurality of robots which are communicated with each other; each robot comprises a body, a mechanical arm end effector, a plurality of groups of depth cameras and a deep learning control board card.
As shown in fig. 1, the mechanical structure of the robot of this embodiment includes a high-precision automatic guided vehicle with adjustable speed and steering; the vehicle chassis is a metal plate fitted with heat dissipation and a shelf layer holding the control board, power supply, and mechanical arm base. A three-axis or five-axis mechanical arm 1 can move with high precision to a given target position. A pollination end-effector mechanism is mounted at the end of mechanical arm 1: pollen is stored in a dedicated air sac, and during a pollination operation the arm moves directly in front of the target flower and squeezes the pollen sac pneumatically or with a servo, ejecting pepper pollen and completing the pollination. The drive mechanism of the robot comprises a servo motor 2 that turns the drive wheel through a transmission belt 3.
The electronic control module of the robot of the embodiment comprises:
the group of high-precision positioning equipment can adopt a Beidou/GPS dual-mode positioning and inertial navigation algorithm, and the positioning precision is required to be in a millimeter level when the driving speed is 1 m/s; the touch screen is capable of displaying running state information of the robot and acquired image information and controlling the robot to start and stop in a touch manner; the wireless communication module is a set of, can adopt modules such as WIFI, bluetooth, zigbee, if the area of big-arch shelter is great, short distance communication can't satisfy, can adopt GSM communication module.
The image recognition module of the robot of the embodiment:
the three-dimensional pepper flower detection system comprises a plurality of groups of depth cameras, wherein the depth cameras can collect image and video information and also can collect three-dimensional position information between a target object and a camera;
the deep learning control board, which includes a deep learning framework and GPIO (general-purpose input/output), can directly control external devices and read perception-sensor signals, and may be a Jetson-series board from NVIDIA; a high-performance graphics card may be configured as needed: the control board already contains a GPU, but if target recognition is more complex, an external high-performance graphics card can be added to improve computing performance.
In this embodiment, the mechanical arm is mounted on the automatic guided vehicle's base plate, the pollination end effector is mounted at the end of the arm, and the touch screen, high-precision positioning equipment, wireless communication module, and deep learning control board are connected; finally, the three groups of depth cameras are attached to the deep learning control board, with a custom bracket ensuring that they photograph the pepper plants on the left, on the right, and directly overhead. All of this hardware is powered by a common lithium battery (pack) and several voltage-regulation modules.
In a specific implementation process, each group of depth cameras is used for acquiring a pepper flower depth image with corresponding dimensionality and transmitting the pepper flower depth image to the deep learning control board card.
As shown in fig. 2, the deep learning control board is configured to:
identifying the central points of all pepper flowers to be pollinated in the current pepper flower depth image;
calculating three-dimensional position information between the center point of each pepper flower to be pollinated and the corresponding depth camera one by one;
determining three-dimensional position information of the end effector of the mechanical arm and the central point of each pepper flower to be pollinated based on the three-dimensional coordinate conversion relation between the depth camera and the end effector of the mechanical arm so as to control the end effector of the mechanical arm to carry out pollination;
after pollination of all targets in the current pepper flower depth image is completed, calculating the completion progress of the current robot;
and sending the completion progress of the current robot to other intercommunicating robots, and receiving the completion progress of the other intercommunicating robots so as to control the current robot to execute the next action.
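The steps above can be sketched as a single control-loop function; the function and parameter names are illustrative assumptions, not the patent's interfaces:

```python
def pollination_cycle(detect_targets, locate, to_effector_frame, pollinate):
    """Run one pollination cycle on the current depth image and return the
    number of flowers treated. The four callables stand in for the detection
    model, the depth-camera position lookup, the camera-to-effector coordinate
    transform, and the mechanical arm controller."""
    targets = detect_targets()                 # center points of flowers to pollinate
    for center in targets:
        cam_xyz = locate(center)               # 3-D position relative to the camera
        eff_xyz = to_effector_frame(cam_xyz)   # 3-D position relative to the effector
        pollinate(eff_xyz)                     # move arm and eject pollen
    return len(targets)
```

After the loop, the robot would compute and broadcast its completion progress as described in the following steps.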
In a specific implementation process, in the deep learning control board card, the process of identifying the pepper flowers to be pollinated in the current pepper flower depth image is as follows:
extracting the pepper flower posture characteristics in the multi-dimensional pepper flower depth image;
when the posture characteristic is frontal, judging the corresponding pepper flower to be a flower to be pollinated and labeling it; a frontal flower is one in which no petal is occluded and the pistil is completely exposed.
Wherein the posture characteristics of the pepper flowers further include horizontal, inclined, and vertical:
the horizontal posture is one in which the petals are open and the flower center faces a direction perpendicular to the image-shooting direction;
the inclined posture is one in which the petals are open, only some petals are visible, and the flower center faces at an angle to the image-shooting direction;
the vertical posture is the bud state, in which the pistil is completely enclosed by the petals.
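As a minimal illustration of the labeling rule above, assuming a posture classifier that outputs one of the four class names (the names themselves are hypothetical, not from the patent):

```python
# Hypothetical posture classes: only "frontal" flowers are pollination targets.
POSTURES = ("frontal", "horizontal", "inclined", "vertical")

def is_pollination_target(posture: str) -> bool:
    """Return True only for frontal flowers (no petal occluded, pistil fully
    exposed); horizontal, inclined, and vertical (bud) flowers are skipped."""
    if posture not in POSTURES:
        raise ValueError(f"unknown posture: {posture}")
    return posture == "frontal"
```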
In some embodiments, before identifying the pepper flowers to be pollinated in the current pepper flower depth image in the deep learning control board, the method further includes:
all pepper flowers in the current pepper flower depth image are identified.
The process of identifying all the pepper flowers in the current pepper flower depth image is as follows:
and extracting color and shape features from the current pepper flower depth image with a pre-trained pepper flower recognition model, judging whether pepper flowers are present in the image and, if so, determining their positions.
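A rough color gate can stand in for the first stage of this recognition; the HSV thresholds below are illustrative assumptions, and a real system would pass candidate regions to the trained recognition model rather than rely on color alone:

```python
import numpy as np

def find_flower_regions(image_hsv: np.ndarray) -> np.ndarray:
    """Return a boolean mask of candidate pepper-flower pixels using a simple
    pale/bright color gate on an HSV image (H, S, V in the last axis).
    Thresholds are illustrative, not tuned values from the patent."""
    s, v = image_hsv[..., 1], image_hsv[..., 2]
    return (s < 60) & (v > 180)  # low saturation, high value: pale petals
```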
In the present embodiment, the position information has the format (x, y, z), where x is the horizontal distance from the flower to be pollinated to the current depth camera, y is the vertical distance, and z is the depth distance, all in millimeters. A single frame may contain one or more targets; when there are several, their positions are transmitted as an array.
After the control board receives the camera-to-target distance information, it performs a coordinate transformation using the known three-dimensional offset between the end effector and the depth camera, computing in turn the three-dimensional distance between the pollination robot's end effector and each target, as follows:
when the arm is at the origin position, the position of the end effector is set to the zero point (0,0,0), and the position of the depth camera (taking a certain one as an example) with respect to the end effector is set to the (X) position 1 ,Y 1 ,Z 1 ) The position of the depth camera does not change relative to the zero point, i.e., the position is fixed;
for example, with a single target, the target position is (X) from the depth camera 2 ,Y 2 ,Z 2 );
The three-dimensional position of the target and the end effector is (X) 1 ±X 2 ,Y 1 ±Y 2 ,Z 1 ±Z 2 )。
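Assuming a fixed, axis-aligned camera offset (i.e. ignoring any rotation between the camera and effector frames), the conversion can be sketched as follows; the helper name and the per-axis sign convention are assumptions:

```python
def camera_to_effector(camera_offset, target_in_camera, axis_signs=(1, 1, 1)):
    """Convert a target position measured by the depth camera into the
    end-effector frame. camera_offset = (X1, Y1, Z1) is the fixed camera
    position relative to the effector zero point; target_in_camera =
    (X2, Y2, Z2) is the camera's measurement; axis_signs encodes the +/-
    choice per axis, which depends on how the camera is mounted."""
    return tuple(c + s * t
                 for c, s, t in zip(camera_offset, axis_signs, target_in_camera))
```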
In this embodiment, the completion progress is a ratio of the current traveled distance to the total distance of the task trajectory.
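A minimal sketch of this progress measure, with clamping to [0, 1] added as a defensive assumption:

```python
def completion_progress(distance_travelled_m: float, track_total_m: float) -> float:
    """Completion progress as the ratio of the distance travelled along the
    task track to the track's total length, clamped to [0, 1]."""
    if track_total_m <= 0:
        raise ValueError("track length must be positive")
    return min(max(distance_travelled_m / track_total_m, 0.0), 1.0)
```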
Here, the robot movement track is set, and the setting track can be set by 2 methods:
mode 1: setting a track on positioning software in a track drawing mode, and then sending the track to the robot, wherein the reliability of the track is ensured by means of high-precision positioning equipment of the robot;
the positioning software consists of positioning equipment and upper computer software, wherein the positioning equipment is installed on the robot, and the positioning software runs in the upper computer. The software can display the position information of the robot in real time, mark a certain point or a certain track in the software, send a signal to the positioning equipment, and move the robot to a target point or move along the target track according to the information.
Mode 2: the robot runs along a fixed line by means of laid rails, guide lines, or similar. This mode places low demands on positioning performance but relies mainly on external physical infrastructure; guide rails or tracking lines must be laid on the greenhouse floor in advance.
In some embodiments, the deep learning control board is further configured to:
judging whether the current robot completes the task, if not, controlling the current robot to continue to execute the task; and if so, controlling the current robot to stop running the current task, searching other robots which do not finish the task, and distributing corresponding tasks to execute.
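The decision logic above can be sketched as follows; the robot identifiers and the rule of assisting the least-advanced peer are illustrative assumptions:

```python
def next_action(own_progress: float, peer_progress: dict) -> str:
    """Decide the robot's next action from its own progress and the progress
    reports received from peers (robot id -> progress in [0, 1]). Returns
    'continue' while the robot's own track is unfinished; once finished,
    returns the id of the least-advanced unfinished peer to assist, or
    'stop' if every robot is done."""
    if own_progress < 1.0:
        return "continue"
    unfinished = {rid: p for rid, p in peer_progress.items() if p < 1.0}
    if not unfinished:
        return "stop"
    return min(unfinished, key=unfinished.get)  # assist the slowest robot
```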
The robot of this embodiment patrols along a preset track. When the front depth camera captures an image at the current position, the pepper flowers in the image are first recognized; their postures are then identified to determine the flowers to be pollinated; finally, the three-dimensional distance between each flower to be pollinated and the pollination end effector is computed. The mechanical arm then moves directly in front of each flower to be pollinated, and the end effector automatically squeezes and sprays pollen. After all flowers to be pollinated in the current image have been treated, the arm returns to its zero point and the robot continues along its original track until the whole preset track has been covered. The robots exchange data by wireless communication, and a robot that finishes first can assist the robots that have not yet finished.
Example two
The embodiment provides a control method of a distributed automatic pepper flower pollination device based on deep learning, which comprises the following steps:
receiving a multi-dimensional pepper flower depth image transmitted by a depth camera;
identifying the central points of all pepper flowers to be pollinated in the current pepper flower depth image;
calculating three-dimensional position information between the central point of each pepper flower to be pollinated and the corresponding depth camera one by one;
determining three-dimensional position information of the end effector of the mechanical arm and the central point of each pepper flower to be pollinated based on the three-dimensional coordinate conversion relation between the depth camera and the end effector of the mechanical arm so as to control the end effector of the mechanical arm to carry out pollination;
after all targets in the current pepper flower depth image are pollinated, calculating the current robot completion progress;
and sending the completion progress of the current robot to other intercommunicated robots, and receiving the completion progress of the other intercommunicated robots so as to control the current robot to execute the next action.
In one or more embodiments, the control method further includes:
judging whether the current robot completes the task, if not, controlling the current robot to continue to execute the task; and if so, controlling the current robot to stop running the current task, searching other robots which do not finish the task, and distributing corresponding tasks to execute.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A distributed automatic pepper pollination device based on deep learning is characterized by comprising a plurality of robots which are communicated with each other; each robot comprises a body, a mechanical arm end effector, a plurality of groups of depth cameras and a deep learning control board card; each group of depth cameras is used for acquiring a pepper flower depth image with corresponding dimensionality and transmitting the pepper flower depth image to a deep learning control board card; the deep learning control board card is used for:
identifying the central points of all pepper flowers to be pollinated in the current pepper flower depth image;
calculating three-dimensional position information between the center point of each pepper flower to be pollinated and the corresponding depth camera one by one;
determining the three-dimensional position of the center point of each pepper flower to be pollinated relative to the mechanical arm end effector, based on the three-dimensional coordinate transformation between the depth camera and the end effector, so as to control the end effector to carry out pollination;
after pollination of all targets in the current pepper flower depth image is completed, calculating the completion progress of the current robot;
and sending the completion progress of the current robot to other intercommunicated robots, and receiving the completion progress of the other intercommunicated robots so as to control the current robot to execute the next action.
2. The deep learning-based distributed automatic pollination device for pepper flowers as claimed in claim 1, wherein in the deep learning control board card, the process of identifying pepper flowers to be pollinated in the current pepper flower depth image is as follows:
extracting the pepper flower posture characteristics in the multi-dimensional pepper flower depth image;
when the posture characteristic is frontal, judging the corresponding pepper flower to be a flower to be pollinated and labeling it; a frontal flower is one in which no petal is occluded and the pistil is completely exposed.
3. The deep learning-based distributed automatic pollination device for pepper flowers as claimed in claim 2, wherein the posture characteristics of pepper flowers further comprise horizontal, oblique and vertical;
the horizontal posture is one in which the petals are open and the flower center faces a direction perpendicular to the image-shooting direction;
the inclined posture is one in which the petals are open, only some petals are visible, and the flower center faces at an angle to the image-shooting direction;
the vertical posture is the bud state, in which the pistil is completely enclosed by the petals.
4. The deep learning based distributed automatic pollination device for pepper flowers as claimed in claim 1, wherein in the deep learning control board, before identifying pepper flowers to be pollinated in the current pepper flower depth image, the device further comprises:
all pepper flowers in the current pepper flower depth image are identified.
5. The deep learning-based distributed automatic pollination device for pepper flowers as claimed in claim 4, wherein in the deep learning control board, the process of identifying all pepper flowers in the current pepper flower depth image is as follows:
and extracting the color and shape characteristics in the current pepper flower depth image based on a pepper flower recognition model trained in advance, and judging whether pepper flowers exist in the current pepper flower depth image and the positions of the pepper flowers.
6. The distributed automatic pepper flower pollination device based on deep learning as claimed in claim 1, wherein in the deep learning control board, the completion progress is the ratio of the current traveled distance to the total distance of the task track.
7. The deep learning-based distributed automatic pollination device for pepper flowers according to claim 1, wherein the deep learning control board card is further used for:
judging whether the current robot completes the task, if not, controlling the current robot to continue to execute the task; if yes, the current robot is controlled to stop running the current task, other robots which do not finish the task are searched, and corresponding tasks are distributed to be executed.
8. The distributed automatic pepper flower pollination device based on deep learning of claim 1, wherein the depth cameras are at least three groups.
9. A control method of the deep learning based distributed automatic pollination device for pepper flowers according to any one of claims 1-8, comprising the following steps:
receiving a multi-dimensional pepper flower depth image transmitted by a depth camera;
identifying the central points of all pepper flowers to be pollinated in the current pepper flower depth image;
calculating three-dimensional position information between the center point of each pepper flower to be pollinated and the corresponding depth camera one by one;
determining the three-dimensional position of the center point of each pepper flower to be pollinated relative to the mechanical arm end effector, based on the three-dimensional coordinate transformation between the depth camera and the end effector, so as to control the end effector to carry out pollination;
after all targets in the current pepper flower depth image are pollinated, calculating the current robot completion progress;
and sending the completion progress of the current robot to other intercommunicating robots, and receiving the completion progress of the other intercommunicating robots so as to control the current robot to execute the next action.
10. The control method of the distributed automatic pollination device for pepper flowers based on deep learning as claimed in claim 9, further comprising:
judging whether the current robot completes the task, if not, controlling the current robot to continue to execute the task; and if so, controlling the current robot to stop running the current task, searching other robots which do not finish the task, and distributing corresponding tasks to execute.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211594927.3A CN115956500A (en) | 2022-12-13 | 2022-12-13 | Distributed automatic pepper pollination device and method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115956500A true CN115956500A (en) | 2023-04-14 |
Family
ID=87352040
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211594927.3A Pending CN115956500A (en) | 2022-12-13 | 2022-12-13 | Distributed automatic pepper pollination device and method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115956500A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109272553A (en) * | 2018-09-03 | 2019-01-25 | Liu Qingfei | Localization method, controller and excision device for cotton topping
CN110352702A (en) * | 2019-07-31 | 2019-10-22 | Jiangsu University | Pineapple harvesting robot system and implementation method
JP2019191854A (en) * | 2018-04-24 | 2019-10-31 | Rohm Co., Ltd. | Image recognition device, artificial pollination system, and program
CN111844007A (en) * | 2020-06-02 | 2020-10-30 | Jiangsu University of Technology | Pollination robot mechanical arm obstacle avoidance path planning method and device
CN113207675A (en) * | 2021-05-10 | 2021-08-06 | Beijing Research Center of Intelligent Equipment for Agriculture | Airflow-vibration automatic pollination device and method for greenhouse crops
CA3199668A1 (en) * | 2020-11-23 | 2022-05-27 | Polybee Pte. Ltd. | Method and system for pollination
CN114946439A (en) * | 2022-03-23 | 2022-08-30 | Huazhong Agricultural University | Intelligent precision topping device for field cotton
CN115443906A (en) * | 2022-10-18 | 2022-12-09 | Northwest A&F University | Kiwifruit precision pollination robot based on visual perception and dual-flow spraying
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104067781B (en) | Integrated picking system and method based on virtual and real robots | |
WO2020221311A1 (en) | Wearable device-based mobile robot control system and control method | |
US11584004B2 (en) | Autonomous object learning by robots triggered by remote operators | |
CN104057290B (en) | Robotic assembly method and system based on vision and force-feedback control | |
CN110170995B (en) | Robot rapid teaching method based on stereoscopic vision | |
CN109333506B (en) | Humanoid intelligent robot system | |
CN111421539A (en) | Industrial part intelligent identification and sorting system based on computer vision | |
CN101373380B (en) | Humanoid robot control system and robot controlling method | |
CN204462851U (en) | Mecanum-wheel omnidirectional mobile inspection robot | |
CN108811766B (en) | Man-machine interactive greenhouse fruit and vegetable harvesting robot system and harvesting method thereof | |
CN104714550A (en) | Mecanum wheel omni-directional mobile inspection robot | |
Li et al. | Design of a lightweight robotic arm for kiwifruit pollination | |
US20200316780A1 (en) | Systems, devices, articles, and methods for calibration of rangefinders and robots | |
CN108614564A (en) | Intelligent cluster warehouse robot system based on pheromone navigation | |
CN107671838B (en) | Robot teaching recording system, teaching process steps and algorithm flow thereof | |
CN114080905A (en) | Picking method based on digital twins and cloud picking robot system | |
CN108375979A (en) | ROS-based general-purpose control system for autonomous navigation robots | |
WO2021121429A1 (en) | Intelligent agricultural machine based on binary control system | |
CN108931979A (en) | Vision-tracking mobile robot and control method based on ultrasonic-assisted positioning | |
CN115956500A (en) | Distributed automatic pepper pollination device and method based on deep learning | |
CN109176464A (en) | Cable auxiliary assembly system | |
CN112284373A (en) | AGV navigation method and system based on UWB wireless positioning and visual positioning | |
CN202453676U (en) | Semi-physical simulation platform of flying robot control system | |
CN106843221B (en) | Turning coordination control method and device for multiple agricultural robots | |
CN115661726A (en) | Autonomous video acquisition and analysis method for rail train workpiece assembly |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||