CN115299245B - Control method and control system of intelligent fruit picking robot - Google Patents


Info

Publication number
CN115299245B
Authority
CN
China
Prior art keywords
picked
mobile robot
picking
fruit
farm
Prior art date
Legal status (an assumption, not a legal conclusion)
Active
Application number
CN202211109274.5A
Other languages
Chinese (zh)
Other versions
CN115299245A
Inventor
王红星
邱舒鑫
李兵
刘青
万智超
吕凌波
Current Assignee (the listed assignee may be inaccurate)
Nanchang Institute of Technology
Original Assignee
Nanchang Institute of Technology
Priority date (an assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Nanchang Institute of Technology
Priority to CN202211109274.5A
Publication of CN115299245A
Application granted
Publication of CN115299245B
Status: Active
Anticipated expiration

Classifications

    • A: HUMAN NECESSITIES
    • A01: AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01D: HARVESTING; MOWING
    • A01D 46/00: Picking of fruits, vegetables, hops, or the like; Devices for shaking trees or shrubs
    • A01D 46/30: Robotic devices for individually picking crops
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 11/00: Manipulators not otherwise provided for
    • B25J 15/00: Gripping heads and other end effectors
    • B25J 15/08: Gripping heads and other end effectors having finger members
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/08: Programme-controlled manipulators characterised by modular constructions
    • B25J 9/16: Programme controls
    • B25J 9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J 9/1664: Programme controls characterised by motion, path, trajectory planning
    • B25J 9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors; perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697: Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Environmental Sciences (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a control method and a control system for an intelligent fruit picking robot. The control method comprises the following steps: constructing a real-time three-dimensional map and environment map information of the farm to be picked; planning a motion path for a mobile robot that carries a picking mechanical arm; constructing a YOLOv4 deep network model from depth-image feature information of different fruits to be picked; performing target identification on image information of the fruit to be picked using the YOLOv4 deep network model and obtaining the center-point coordinate information of the fruit; and, according to the center-point coordinate information of the fruit to be picked and the planned motion path of the mobile robot, controlling the mobile robot to move to the target position and controlling the picking mechanical arm on the mobile robot to pick. The method and system acquire the center coordinate information of the target image with high precision and achieve high-precision positioning and grasping of the fruit to be picked.

Description

Control method and control system of intelligent fruit picking robot
Technical Field
The invention relates to the technical field of fruit picking robots, in particular to a control method and a control system of an intelligent fruit picking robot.
Background
With the development of intelligent robots, daily production work has become more convenient, rapid and efficient. The picking mechanical arm, as an actuator that assists or replaces human labor, is widely applied in fields such as engineering-machinery welding, automobile part assembly, agricultural orchard picking, part machining, and whole-machine coating of large equipment.
At present, China has a large population and an increasingly serious aging problem, and urbanization is drawing traditional craftsmen toward labor-intensive industries. With most workers moving toward the service industry, the shortage of agricultural labor is becoming more serious. Although the fruit industry in China is developing rapidly, it faces a series of problems to be solved urgently, among which the low level of agricultural mechanization and intelligence stands out: most fruits are still picked manually. This lag in production technology brings fruit farmers high labor intensity, poor working environments, high input costs and low production efficiency. Fruit picking is heavy work and includes judging fruit maturity. Introducing intelligent robots into agriculture can therefore greatly save labor and improve labor productivity. To ensure that a mobile robot can accurately pick fruits in an orchard, two problems need to be solved:
(1) Owing to the randomness of fruit growth and the complex, diverse growing environments of fruits, the picking robot must accurately locate, detect and classify fruits in the natural environment.
(2) Lidar sensors have a limited scanning observation distance and are susceptible to complex geometry in the environment; the camera places certain requirements on the illumination of the robot's surroundings; and the encoder motor accumulates error over long periods of operation.
Disclosure of Invention
The invention provides a control method and a control system for an intelligent fruit picking robot. A multimodal attention fusion network is added to the detection network of the original YOLOv4 deep-learning model; global and local features of the fruit are fused, improving the recall rate of fruit detection and identification; and the center point of the target image is analyzed to obtain its center coordinate information, realizing the positioning and grasping function. With the position of the fruit to be picked obtained, the mobile robot carrying the picking mechanical arm performs autonomous motion-path navigation and picking planning, improving obstacle avoidance and movement during picking.
The invention provides a control method of an intelligent fruit picking robot, which comprises the following steps:
constructing a real-time three-dimensional map and environment map information of the farm to be picked;
planning a motion path for the mobile robot carrying the picking mechanical arm as it moves through the farm to be picked, according to the real-time three-dimensional map and environment map information of the farm;
constructing a YOLOv4 deep network model according to depth-image feature information of different fruits to be picked;
capturing image information of the fruit to be picked at the target position in the field, and performing target identification on the image information with the YOLOv4 deep network model to obtain the center-point coordinate information of the fruit to be picked;
controlling the mobile robot to move to the target position according to the center-point coordinate information of the fruit to be picked and the planned motion path of the mobile robot, and controlling the picking mechanical arm on the mobile robot to pick the fruit.
Further, the construction of the real-time three-dimensional map of the farm to be picked comprises the following steps:
and constructing a real-time three-dimensional map of the farm to be picked by utilizing a three-dimensional physical simulation platform Gazebo according to the scene of the farm to be picked and the obstacles existing in the farm to be picked.
Further, building the environment map information comprises:
acquiring point cloud data of the environmental terrain of the farm to be picked with a lidar mounted on the mobile robot;
processing the point cloud data on the Raspberry Pi and obtaining environment map information of the farm in two-dimensional grid form with a mapping algorithm.
Further, planning the motion path of the mobile robot carrying the picking mechanical arm as it moves through the farm to be picked, according to the real-time three-dimensional map and the environment map information, comprises:
simulating the motion of the mobile robot with the ROS three-dimensional visualization tool RViz according to the environment map information, and planning the motion path of the mobile robot carrying the picking mechanical arm;
jointly simulating the planned motion path with the real-time three-dimensional map, so as to refine and optimize the planned motion path;
the motion path comprising an obstacle-avoidance motion path of the mobile robot and a movement path during picking.
Further, constructing the YOLOv4 deep network model according to the depth-image feature information of different fruits to be picked comprises:
acquiring single-modality image data and multimodal image data of different fruits to be picked;
constructing a multimodal attention fusion network using one part of the single-modality and multimodal image data, and enlarging the receptive field of the network through multi-scale feature fusion;
extracting the image depth features of the multimodal attention fusion network, and adaptively calibrating the image depth-feature channels with an attention mechanism;
fusing the calibrated image depth features with the visible-light features to construct the fused multimodal attention fusion network;
training the fused multimodal attention fusion network with the remaining single-modality and multimodal image data to obtain the YOLOv4 deep network model.
Further, simulating the obstacle-avoidance motion path of the mobile robot carrying the picking mechanical arm and its movement path during picking with the ROS three-dimensional visualization tool RViz comprises:
acquiring motion data of the mobile robot with an inertial measurement unit and a wheel odometer mounted on the mobile robot;
fusing, with a Kalman filtering algorithm, the motion data of the mobile robot and the point cloud data of the environmental terrain of the farm acquired by the lidar mounted on the mobile robot;
constructing a map of the mobile robot's movement with a SLAM algorithm according to the fused data, and localizing the moving robot;
planning the path of the mobile robot according to the fused data.
Further, constructing the map of the mobile robot's movement with a SLAM algorithm according to the fused data, and localizing the moving robot, comprises:
constructing the map during movement with the Cartographer algorithm;
localizing the mobile robot with the AMCL algorithm.
Further, planning the path of the mobile robot according to the fused data comprises:
planning the global motion path of the mobile robot with the heuristic-search A* algorithm;
planning the local motion path of the mobile robot with the TEB algorithm.
Further, controlling the mobile robot to move to the target position according to the center-point coordinate information of the fruit to be picked and the planned motion path, and controlling the picking mechanical arm on the mobile robot to pick, comprises:
establishing a kinematic equation according to the center-point coordinate information of the fruit to be picked;
solving the inverse kinematics of the kinematic equation;
simultaneously controlling, through the ROS system, the movement of the mobile robot and the grasping motion of the picking mechanical arm, so that the mobile robot moves to the fruit to be picked and the picking mechanical arm grasps it.
The invention further provides a control system of an intelligent fruit picking robot, comprising:
a map information construction module for constructing a real-time three-dimensional map and environment map information of the farm to be picked;
a motion path acquisition module for planning a motion path of the mobile robot carrying the picking mechanical arm as it moves through the farm, according to the real-time three-dimensional map and environment map information;
a model construction module for constructing a YOLOv4 deep network model according to the depth-image feature information of different fruits to be picked;
an identification module for capturing image information of the fruit to be picked at the target position in the field, and performing target identification on the image information with the YOLOv4 deep network model to obtain the center-point coordinate information of the fruit to be picked;
a picking execution module for controlling the mobile robot to move to the target position according to the center-point coordinate information of the fruit to be picked and the planned motion path, and controlling the picking mechanical arm on the mobile robot to pick the fruit.
Compared with the prior art, the invention has the following beneficial effects:
To enable the intelligent picking robot to rapidly and accurately detect and identify fruits of different maturity in complex, varied orchard environments (all-weather illumination of differing intensity, overlapping and occluding fruit and leaves, and so on), the invention provides a fruit detection and identification method that improves YOLOv4: a multi-scale feature-fusion module, Spatial Pyramid Pooling, is added to the detection network of the original YOLOv4 deep-learning model; global and local fruit features are fused, improving the recall rate of fruit detection and identification; and the center point of the target image is analyzed to obtain its center coordinate information, realizing the positioning and grasping function. In addition, aiming at the problems of the traditional RBPF-SLAM algorithm (large error in the particle proposal distribution, particle depletion, long running time and so on), the invention fuses data from the lidar, the inertial measurement unit and the wheel odometer and proposes a new multi-sensor SLAM fusion method: a robot motion model is established on the fusion of the inertial measurement unit and the wheel odometer, and the particle proposal distribution is optimized by fusing lidar observations into the motion model, addressing the large prediction-distribution error and particle memory explosion; the particle resampling strategy is also improved, maintaining particle diversity and slowing particle depletion.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
fig. 1 is a schematic flow chart of a control method of an intelligent fruit picking robot according to the present invention;
fig. 2 is a flow chart of a control system of a control method of an intelligent fruit picking robot according to the present invention;
fig. 3 is a three-dimensional map of a farm established in an embodiment of a control method of an intelligent fruit picking robot according to the present invention;
FIG. 4 is a two-dimensional grid map of a farm in an embodiment of a method of controlling an intelligent fruit picking robot in accordance with the present invention;
fig. 5 is a hardware connection diagram of the intelligent fruit picking robot in the control method of the intelligent fruit picking robot according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings; it should be understood, however, that the protection scope of the present invention is not limited to the specific embodiments.
Example 1
As shown in fig. 1-4, the invention provides a control method of an intelligent fruit picking robot, comprising the following steps:
step 1: constructing a real-time three-dimensional map and environment map information of the farm to be picked;
step 2: planning a motion path of a mobile robot carrying a picking mechanical arm when the farm to be picked moves according to a real-time three-dimensional map and environment map information of the farm to be picked;
step 3: constructing a YOLOv4 depth network model according to different depth image characteristic information of fruits to be picked;
step 4: shooting and acquiring image information of fruits to be picked at a target position in the field by using a depth camera, and carrying out target identification on the image information by using a YOLOv4 depth network model to acquire center point coordinate information of the fruits to be picked;
step 5: according to the central point coordinate information of the fruit to be picked and the planned movement path of the mobile robot, controlling the mobile robot to move to a target position, and controlling a picking mechanical arm on the mobile robot to pick the fruit.
The step 1 specifically comprises the following steps:
step 1.1: constructing a real-time three-dimensional map of the farm to be picked, comprising:
and constructing a real-time three-dimensional map of the farm to be picked by utilizing a three-dimensional physical simulation platform Gazebo according to the scene of the farm to be picked and the obstacles in the farm to be picked.
The real-time three-dimensional map is utilized to simulate the problems encountered or about to be encountered by the picking robot in the actual picking operation, so that the multi-physical combined scene simulation of the farm to be picked is realized.
Step 1.2: building the environment map information, comprising:
acquiring point cloud data of the environmental terrain of the farm to be picked with the lidar mounted on the mobile robot;
processing the point cloud data on the Raspberry Pi and obtaining environment map information of the farm in two-dimensional grid form with a mapping algorithm.
The step 2 specifically comprises the following steps:
step 2.1: according to the environmental map information, the movement of the mobile robot is simulated by utilizing the ROS three-dimensional visual engineering Rviz, and the movement path of the mobile robot carrying the picking mechanical arm is planned, and the method comprises the following steps:
and acquiring motion data of the mobile robot by using an inertial measurement unit and wheel type mileage points arranged on the mobile robot. The data acquired by the inertial measurement unit are the acceleration and the angular velocity of the mobile robot respectively; the data acquired by the wheel odometer is a speed value measured by an encoder on the mobile robot.
The motion data of the mobile robot and the point cloud data of the environmental terrain of the farm to be picked, which are acquired by the laser radar carried on the mobile robot, are fused by using a Kalman filtering algorithm, and the specific fusion process is as follows:
initializing the current pose of the robot, initializing Kalman, resolving data acquired by an inertial measurement unit, resolving data acquired by a wheel type odometer, fusing the resolved motion data information of the mobile robot and the acquired point cloud data of the environmental topography of the farm to be picked by using a Kalman filtering algorithm, and obtaining fused data.
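The predict-then-correct cycle of the Kalman fusion above can be illustrated with a deliberately simplified scalar filter: wheel-odometer velocity drives the prediction, and a lidar-derived position measurement drives the correction. This is a one-dimensional sketch under assumed noise parameters (`q`, `r`), not the patent's full multi-sensor filter.

```python
def kf_fuse(z_lidar, v_odom, x0=0.0, p0=1.0, dt=0.1, q=0.01, r=0.25):
    # Scalar Kalman filter over a sequence of time steps.
    # z_lidar: lidar-derived position measurements; v_odom: odometer velocities.
    x, p = x0, p0
    est = []
    for z, v in zip(z_lidar, v_odom):
        # predict: dead-reckon the position from the odometry velocity
        x = x + v * dt
        p = p + q                     # process noise grows the uncertainty
        # update: blend in the lidar observation
        k = p / (p + r)               # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
        est.append(x)
    return est

# stationary odometry, lidar consistently reporting position 1.0
est = kf_fuse([1.0] * 5, [0.0] * 5)
```

With each correction the estimate moves monotonically toward the lidar measurement while the filter's variance shrinks, which is the behavior the fusion step relies on.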
According to the fused data, a map of the mobile robot's movement is constructed with a SLAM algorithm and the moving robot is localized;
the path of the mobile robot is then planned from the fused data.
Constructing the map of the mobile robot's movement with a SLAM algorithm and localizing the moving robot comprises:
constructing the map during movement with the Cartographer algorithm;
localizing the mobile robot with the AMCL algorithm.
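AMCL-style localization maintains a set of pose particles and periodically resamples them in proportion to their weights. The patent's improved resampling strategy is not specified in detail here, so the following shows only the standard low-variance (systematic) resampling step, as a point of reference for what such a strategy modifies:

```python
import random

def resample(particles, weights, rng=None):
    # Systematic (low-variance) resampling: one random offset, then n
    # equally spaced pointers walk the cumulative weight distribution.
    rng = rng or random.Random(0)          # fixed seed keeps the sketch reproducible
    n = len(particles)
    step = sum(weights) / n
    u = rng.uniform(0.0, step)
    out, cum, i = [], weights[0], 0
    for _ in range(n):
        while u > cum and i < n - 1:       # advance to the particle owning u
            i += 1
            cum += weights[i]
        out.append(particles[i])
        u += step
    return out

# a particle with all the weight survives in every slot
kept = resample(['a', 'b', 'c', 'd'], [1.0, 0.0, 0.0, 0.0])
```

Because the pointers are equally spaced, this scheme duplicates high-weight particles with low variance, which is why improvements to it (as the patent claims) focus on preserving diversity rather than on the sampling mechanics.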
Planning the path of the mobile robot according to the fused data comprises the following steps:
Motion path planning for the mobile robot divides into global path planning and local path planning. The global motion path of the mobile robot is planned with the heuristic-search A* algorithm;
the local motion path is planned with the TEB algorithm, which optimizes and corrects the initial trajectory generated by the global planner.
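The global planning step can be illustrated with a compact A* search on a 2D occupancy grid, using the Manhattan-distance heuristic typical of 4-connected grids. This is a generic sketch of heuristic search on the grid map of fig. 4, not the configured ROS global planner:

```python
import heapq

def astar(grid, start, goal):
    # A* on a 2D occupancy grid (0 = free, 1 = obstacle), 4-connected moves,
    # unit step cost, admissible Manhattan heuristic. Returns a cell path or None.
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        f, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in seen):
                heapq.heappush(open_set,
                               (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

# a wall in row 1 forces the path around the right side
demo_grid = [[0, 0, 0],
             [1, 1, 0],
             [0, 0, 0]]
demo_path = astar(demo_grid, (0, 0), (2, 0))
```

A local planner such as TEB would then smooth and time-parameterize this cell path subject to the robot's kinematic limits.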
Step 2.2: and carrying out joint simulation on the planned motion path and a real-time three-dimensional map to improve and optimize the planned motion path.
The motion path comprises an obstacle avoidance motion path of the mobile robot and a movement motion path when picking. The motion path of the mobile robot is improved and optimized, so that the optimal motion path of the mobile robot during picking is obtained.
Regarding step 3: in actual picking, the fruits to be picked span many varieties with differences in both color and shape, and it is clearly impractical to distinguish thick branches and leaves from fruit by shape and color alone. Labeling fruit through classical image processing, however, is very time-consuming, and for classification, relying on those two variables alone falls far short of the picking requirement; target detection therefore needs a classification approach with stronger robustness. The concept of the deep convolutional neural network is introduced: for target detection based on deep convolutional neural networks, the lightweight YOLO algorithm network is selected. Constructing the YOLOv4 deep network model in step 3 thus specifically comprises:
acquiring single-modality image data and multimodal image data of different fruits to be picked;
constructing a multimodal attention fusion network using one part of the single-modality and multimodal image data, and enlarging the receptive field of the network through multi-scale feature fusion;
extracting the image depth features of the multimodal attention fusion network, and adaptively calibrating the image depth-feature channels with an attention mechanism;
fusing the calibrated image depth features with the visible-light features to construct the fused multimodal attention fusion network;
training the fused multimodal attention fusion network with the remaining single-modality and multimodal image data to obtain the YOLOv4 deep network model.
Target detection of the fruits to be picked is studied on the single-modality and multimodal image data: the proposed multimodal attention fusion network enlarges its receptive field through multi-scale feature fusion and adaptively calibrates the feature channels with an attention mechanism, so that the network focuses on the more important information. A target detection algorithm based on visible-light images alone is prone to false and missed detections under strong light and shadow; the method therefore fuses the visible-light and depth-image features of the fruit to be picked, so that when the information of one modality changes greatly or is missing, the complementary information between modalities improves the accuracy and robustness of the detector as a whole.
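The adaptive channel calibration described above is, in spirit, a squeeze-and-excitation style gate. The following NumPy sketch uses hypothetical identity weights `w1` and `w2` standing in for learned parameters; it only shows how per-channel scales derived from pooled features recalibrate a (C, H, W) feature map, not the patent's trained network.

```python
import numpy as np

def channel_attention(feats, w1, w2):
    # Squeeze: global average pool each channel to one scalar.
    squeeze = feats.mean(axis=(1, 2))                # shape (C,)
    # Excite: small bottleneck layer with ReLU, then a sigmoid gate per channel.
    hidden = np.maximum(0.0, w1 @ squeeze)
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # each scale in (0, 1)
    # Recalibrate: rescale every channel by its gate value.
    return feats * scale[:, None, None]

# two 2x2 channels with mean activations 1 and 2
feats = np.stack([np.ones((2, 2)), 2 * np.ones((2, 2))])
out = channel_attention(feats, np.eye(2), np.eye(2))
```

The channel with the larger pooled response receives the larger gate, which is the "focus on more important information" effect the text describes.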
The designed vision system is then transplanted to a Raspberry Pi 4B development board, a physical experiment is performed on this embedded platform, and the experimental results are analyzed.
Step 5 specifically comprises:
establishing a kinematic equation according to the center-point coordinate information of the fruit to be picked;
solving the inverse kinematics of the kinematic equation;
controlling, through the ROS system, the movement of the mobile robot and the grasping motion of the picking mechanical arm simultaneously, so that the mobile robot moves to the fruit to be picked and the picking mechanical arm grasps it.
Example 2
The invention provides a control system of an intelligent fruit picking robot, comprising:
a map information construction module for constructing a real-time three-dimensional map and environment map information of the farm to be picked;
a motion path acquisition module for planning a motion path of the mobile robot carrying the picking mechanical arm as it moves through the farm, according to the real-time three-dimensional map and environment map information;
a model construction module for constructing a YOLOv4 deep network model according to the depth-image feature information of different fruits to be picked;
an identification module for capturing image information of the fruit to be picked at the target position in the field, and performing target identification on the image information with the YOLOv4 deep network model to obtain the center-point coordinate information of the fruit to be picked;
a picking execution module for controlling the mobile robot to move to the target position according to the center-point coordinate information of the fruit to be picked and the planned motion path, and controlling the picking mechanical arm on the mobile robot to pick the fruit.
The present invention will be described in further detail with reference to specific examples.
1. A real-time three-dimensional map of the farm to be picked, built with the three-dimensional physics simulation platform Gazebo, is shown in fig. 3.
2. The environment map information of the farm to be picked, established with the mapping algorithm, is shown in fig. 4.
3. The single-modality and multimodal image data of the fruits to be picked are trained in the multimodal attention fusion network on a PC running Ubuntu 20.04 LTS, generating the YOLOv4 deep network model, whose recognition performance is then tested.
Image information of the fruit to be picked is captured in the field with an Orbbec Astra depth camera: the camera acquires fruit-tree images, and the YOLOv4 deep network model performs target identification on the fruit information to obtain the center-point coordinates of the fruit.
In the invention, the feasibility of the YOLO algorithm is first verified by fusing the visible-light and depth-image features of the fruits to be picked on a picked-fruit detection dataset; the recognition performance on fruit pictures captured in the field with the existing ROS depth camera is then evaluated; finally, the picking mechanical arm is combined with YOLO target detection to comprehensively test the accuracy of fruit picking.
4. A hardware connection diagram of the picking robot is shown in fig. 5. The hardware comprises: a Raspberry Pi 4B development board, an MCU controller, a motion mechanism, a lidar, a ROS depth camera, ultrasonic sensors, the picking mechanical arm, infrared tubes, and a remote-control device mounted on the motion mechanism. The motion mechanism is a Mecanum-wheel vehicle.
5. The control system adopts a modular design, with an STM32F4-series chip as the main controller driving each joint of the picking mechanical arm. A host computer (PC) plus slave computer (microcontroller) scheme controls the grasping and releasing actions of the end effector, and on this basis a host-computer control interface is designed with the QT visual programming software.
6. A communication-node program for the bottom-layer control board is written in C++ within the ROS system to realize information interaction, enabling real-time control of the Mecanum-wheel vehicle and the picking mechanical arm.
7. For picking work, the robot is placed in the picking environment, the initial pose of the picking manipulator is determined, the joints and the control system are confirmed to be in a normal state, and initialization of the control page is awaited.
8. After initialization, the picking target position and optimal path are set along with the motion parameters of the picking manipulator; picking then begins, and the manipulator delivers the end effector to the fruit picking position according to these settings.
Finally, it should be noted: the above disclosure is only one specific embodiment of the present invention, but the embodiments are not limited thereto; any variation conceivable by those skilled in the art shall fall within the protection scope of the present invention.

Claims (6)

1. The control method of the intelligent fruit picking robot is characterized by comprising the following steps of:
constructing a real-time three-dimensional map and environment map information of the farm to be picked;
planning a motion path of a mobile robot carrying a picking mechanical arm when the farm to be picked moves according to a real-time three-dimensional map and environment map information of the farm to be picked;
constructing a YOLOv4 depth network model according to different depth image characteristic information of fruits to be picked;
shooting in the field to obtain image information of fruits to be picked at a target position, and carrying out target identification on the image information by utilizing a YOLOv4 depth network model to obtain central point coordinate information of the fruits to be picked;
according to the central point coordinate information of the fruit to be picked and the planned movement path of the mobile robot, controlling the mobile robot to move to a target position, and controlling a picking mechanical arm on the mobile robot to pick the fruit;
according to the real-time three-dimensional map and the environment map information of the farm to be picked, the method for planning the motion path of the mobile robot carrying the picking mechanical arm when the farm to be picked moves comprises the following steps:
according to the environmental map information, performing simulation of the motion of the mobile robot by utilizing the ROS three-dimensional visualization tool Rviz, and planning the motion path of the mobile robot carrying the picking mechanical arm;
the planned motion path and the real-time three-dimensional map are subjected to joint simulation, so that the planned motion path is improved and optimized;
the motion path comprises an obstacle avoidance motion path of the mobile robot and a movement motion path during picking;
according to the depth image characteristic information of different fruits to be picked, constructing a YOLOv4 depth network model, which comprises the following steps:
acquiring different single-mode image data and multi-mode image data of fruits to be picked;
constructing a multi-mode attention fusion network by utilizing a part of the single-mode image data and the multi-mode image data, and increasing the receptive field of the multi-mode attention fusion network by adopting multi-scale feature fusion;
extracting image depth features of the multi-mode attention fusion network, and calibrating an image depth feature channel in a self-adaptive manner by using an attention mechanism;
fusing the calibrated image depth characteristics with visible light to construct a fused multi-mode attention fusion network;
training the fused multi-mode attention fusion network by utilizing the other part of the single-mode image data and the multi-mode image data to obtain a YOLOv4 depth network model;
the obstacle avoidance motion path of the mobile robot carrying the picking mechanical arm and the moving motion path during picking are simulated by utilizing the ROS three-dimensional visualization tool Rviz, and the robot comprises the following components:
acquiring motion data of the mobile robot by using an inertial measurement unit and a wheel odometer arranged on the mobile robot;
fusing the motion data of the mobile robot and the point cloud data of the environmental topography of the farm to be picked, which are acquired by a laser radar carried on the mobile robot, by using a Kalman filtering algorithm;
according to the fused data, constructing a moving map of the mobile robot by utilizing an SLAM algorithm, and positioning the moving mobile robot;
planning a path of the mobile robot according to the fused data;
and constructing a moving map of the mobile robot by utilizing a SLAM algorithm according to the fused data, and positioning the moving mobile robot, wherein the method comprises the following steps of:
constructing a map of the mobile robot during movement by using a Cartographer algorithm;
and positioning the mobile robot by using an AMCL algorithm.
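The claim above fuses wheel-odometry and lidar data with a Kalman filtering algorithm before mapping. A one-dimensional sketch of that fusion step (the patent's filter runs on the full robot pose; values and noise variances here are hypothetical):

```python
def kalman_fuse(x_est, p_est, z_odom, r_odom, z_lidar, r_lidar):
    """Fuse two position measurements (wheel odometry and a lidar-derived
    fix) into a prior estimate using sequential Kalman measurement updates.

    x_est, p_est : prior estimate and its variance
    z_*, r_*     : measurement values and their noise variances
    """
    for z, r in ((z_odom, r_odom), (z_lidar, r_lidar)):
        k = p_est / (p_est + r)          # Kalman gain
        x_est = x_est + k * (z - x_est)  # pull estimate toward measurement
        p_est = (1.0 - k) * p_est        # variance shrinks with each fusion
    return x_est, p_est
```

The fused estimate lands between the two measurements, weighted toward the less noisy one, and its variance is lower than either sensor's alone, which is why fused data feeds the SLAM and planning steps.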
2. The method for controlling an intelligent fruit picking robot of claim 1, wherein: the construction of the real-time three-dimensional map of the farm to be picked comprises the following steps:
and constructing a real-time three-dimensional map of the farm to be picked by utilizing a three-dimensional physical simulation platform Gazebo according to the scene of the farm to be picked and the obstacles existing in the farm to be picked.
3. The method for controlling an intelligent fruit picking robot of claim 2, wherein: the construction environment map information includes:
acquiring point cloud data of the environmental topography of the farm to be picked by using a laser radar carried on the mobile robot;
and processing the point cloud data on the Raspberry Pi, and obtaining environment map information in a two-dimensional grid form of the farm to be picked by using a mapping algorithm.
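A minimal sketch of turning 2-D lidar points into the two-dimensional grid form the claim describes (an illustrative stand-in for the mapping algorithm run on the Raspberry Pi; resolution and frame conventions are assumptions):

```python
def points_to_grid(points, resolution, width, height):
    """Rasterize 2-D lidar points (metres, map frame) into an occupancy
    grid of `width` x `height` cells of size `resolution` metres.
    0 = free/unknown, 1 = occupied; out-of-bounds points are dropped."""
    grid = [[0] * width for _ in range(height)]
    for x, y in points:
        col, row = int(x / resolution), int(y / resolution)
        if 0 <= col < width and 0 <= row < height:
            grid[row][col] = 1
    return grid
```

A real mapping algorithm (e.g. gmapping-style occupancy mapping) additionally ray-traces free space and accumulates log-odds over many scans; this sketch only shows the point-to-cell discretization.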
4. The method for controlling an intelligent fruit picking robot of claim 1, wherein: and planning a path of the mobile robot according to the fused data, wherein the path planning comprises the following steps:
planning a global motion path of the mobile robot by using the heuristic-search A* algorithm;
and planning a local motion path of the mobile robot by adopting a TEB algorithm.
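The claim names heuristic-search A* for global planning on the grid map. A compact grid A* with a Manhattan heuristic (the claim does not fix the heuristic or connectivity; 4-connectivity is assumed here):

```python
import heapq


def a_star(grid, start, goal):
    """A* over a 4-connected occupancy grid. `grid[r][c] == 1` marks an
    obstacle. Returns the cell path from start to goal, or None if the
    goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]                 # (f-cost, cell) priority queue
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:                     # reconstruct path backwards
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                g = g_cost[cur] + 1
                if g < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = g
                    came_from[(nr, nc)] = cur
                    h = abs(nr - goal[0]) + abs(nc - goal[1])  # Manhattan
                    heapq.heappush(open_set, (g + h, (nr, nc)))
    return None
```

The global path from A* would then be handed to the TEB local planner, which deforms it into a time-feasible trajectory around nearby obstacles.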
5. The method for controlling an intelligent fruit picking robot of claim 1, wherein: according to the central point coordinate information of the fruit to be picked and the planned movement path of the mobile robot, the mobile robot is controlled to move to the target position, and the picking mechanical arm on the mobile robot is controlled to pick, and the picking mechanical arm comprises the following steps:
establishing a kinematic equation according to the coordinate information of the central point of the fruit to be picked;
performing inverse kinematics solution on the kinematics equation;
and simultaneously controlling the mobile robot to move and the picking mechanical arm to perform grabbing motion by utilizing the ROS system, so that the mobile robot moves to the fruit to be picked, and the picking mechanical arm grabs the fruit to be picked.
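The claim establishes a kinematic equation from the fruit's center-point coordinates and solves it by inverse kinematics. As a simplified stand-in (the patent does not specify the manipulator's joint layout), here is the closed-form inverse kinematics of a planar two-link arm, checked against its forward kinematics:

```python
import math


def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar two-link arm
    (elbow-up branch). Returns joint angles (theta1, theta2) in radians,
    or None when the target (x, y) is out of reach."""
    d2 = x * x + y * y
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= cos_t2 <= 1.0:
        return None                      # target outside the workspace
    t2 = math.acos(cos_t2)
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2


def forward(t1, t2, l1, l2):
    """Forward kinematics, used to verify the IK solution."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y
```

For a real 5- or 6-DOF picking arm the ROS MoveIt stack typically performs this solve numerically, but the round trip shown here (IK then FK reproduces the target) is the same correctness check.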
6. A control system based on the control method of the intelligent fruit picking robot of any one of claims 1-5, characterized in that: comprising the following steps:
the map information construction module is used for constructing a real-time three-dimensional map and environment map information of the farm to be picked;
the motion path acquisition module is used for planning a motion path of the mobile robot carrying the picking mechanical arm when the farm to be picked moves according to the real-time three-dimensional map and the environment map information of the farm to be picked;
the model construction module is used for constructing a YOLOv4 depth network model according to different depth image characteristic information of fruits to be picked;
the identification module is used for shooting and acquiring image information of fruits to be picked at a target position in the field, and carrying out target identification on the image information by utilizing a YOLOv4 depth network model to acquire center point coordinate information of the fruits to be picked;
the picking execution module is used for controlling the mobile robot to move to a target position according to the central point coordinate information of the fruit to be picked and the planned movement path of the mobile robot, and controlling the picking mechanical arm on the mobile robot to pick the fruit.
CN202211109274.5A 2022-09-13 2022-09-13 Control method and control system of intelligent fruit picking robot Active CN115299245B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211109274.5A CN115299245B (en) 2022-09-13 2022-09-13 Control method and control system of intelligent fruit picking robot

Publications (2)

Publication Number Publication Date
CN115299245A CN115299245A (en) 2022-11-08
CN115299245B true CN115299245B (en) 2023-07-14

Family

ID=83866338


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115918377B (en) * 2022-12-21 2024-06-14 北京理工大学 Control method and control device of automatic tree fruit picking machine and automatic tree fruit picking machine
CN116977615B (en) * 2023-07-27 2024-04-30 广东技术师范大学 Litchi stem identification and positioning method and device
CN117617002A (en) * 2024-01-04 2024-03-01 太原理工大学 Method for automatically identifying tomatoes and intelligently harvesting tomatoes

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109978949A (en) * 2019-03-26 2019-07-05 南开大学 A kind of method that crops identification based on computer vision is extracted with characteristic point three-dimensional coordinate
WO2019179270A1 (en) * 2018-03-23 2019-09-26 广州极飞科技有限公司 Plant planting data measuring method, working route planning method, device and system
CN112243698A (en) * 2020-10-22 2021-01-22 安徽农业大学 Automatic walnut picking and collecting method based on multi-sensor fusion technology
CN113099848A (en) * 2021-06-04 2021-07-13 山东农业大学 Picking robot efficient picking, placing and transporting integrated platform and using method
CN113110513A (en) * 2021-05-19 2021-07-13 哈尔滨理工大学 ROS-based household arrangement mobile robot
CN114429310A (en) * 2022-02-08 2022-05-03 辽宁科技大学 Database-based agricultural picking object identification algorithm
CN114902872A (en) * 2022-04-26 2022-08-16 华南理工大学 Visual guidance method for picking fruits by robot

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220203547A1 (en) * 2020-12-31 2022-06-30 Plus One Robotics, Inc. System and method for improving automated robotic picking via pick planning and interventional assistance


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
基于SLAM算法和深度神经网络的语义地图构建研究;白云汉;;计算机应用与软件(第01期);全文 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant