CN115299245A - Control method and control system of intelligent fruit picking robot - Google Patents

Control method and control system of intelligent fruit picking robot

Info

Publication number
CN115299245A
CN115299245A (application CN202211109274.5A)
Authority
CN
China
Prior art keywords
mobile robot
picked
picking
farm
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211109274.5A
Other languages
Chinese (zh)
Other versions
CN115299245B (en)
Inventor
王红星
邱舒鑫
李兵
刘青
万智超
吕凌波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang Institute of Technology
Original Assignee
Nanchang Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang Institute of Technology
Priority to CN202211109274.5A
Publication of CN115299245A
Application granted
Publication of CN115299245B
Legal status: Active
Anticipated expiration: pending

Classifications

    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01D HARVESTING; MOWING
    • A01D46/00 Picking of fruits, vegetables, hops, or the like; Devices for shaking trees or shrubs
    • A01D46/30 Robotic devices for individually picking crops
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/08 Programme-controlled manipulators characterised by modular constructions
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by motion, path, trajectory planning
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • B25J11/00 Manipulators not otherwise provided for
    • B25J15/00 Gripping heads and other end effectors
    • B25J15/08 Gripping heads and other end effectors having finger members

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Environmental Sciences (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a control method and a control system for an intelligent fruit picking robot. The control method comprises the following steps: constructing a real-time three-dimensional map and environment map information of the farm to be picked; planning a motion path for the mobile robot that carries the picking mechanical arm; constructing a YOLOv4 depth network model according to the depth image characteristic information of the different fruits to be picked; carrying out target recognition on image information of the fruits to be picked by using the YOLOv4 depth network model and acquiring the central point coordinate information of the fruits to be picked; and controlling the mobile robot to move to the target position according to the central point coordinate information of the fruit to be picked and the planned motion path, and controlling the picking mechanical arm on the mobile robot to grab and pick the fruit. The invention realizes high-precision acquisition of the central coordinate information of the target image and high-precision positioning and grabbing of the fruits to be picked.

Description

Control method and control system of intelligent fruit picking robot
Technical Field
The invention relates to the technical field of fruit picking robots, in particular to a control method and a control system of an intelligent fruit picking robot.
Background
With the development of intelligent robots, their advantages of convenience, speed and efficiency show more and more in people's daily production work. The picking mechanical arm, as a new type of actuator for assisting manual operation, is widely used in fields such as engineering machinery welding, automobile part assembly, agricultural orchard picking, part machining and the coating of complete large-scale equipment.
At the present stage, China has a large population, its aging problem is becoming more and more serious, and urbanization is drawing workers away from traditional handicrafts and labour-intensive industry. With most people turning to the service industry and the like, the shortage of agricultural labour grows increasingly serious. Although the fruit industry in China has developed rapidly, it still faces a series of problems that urgently need to be solved, among which is the low level of agricultural mechanization and intelligence in China: the picking of most fruits is completed manually. This backward production technology brings fruit growers heavy labour intensity, a harsh working environment, high investment cost and low production efficiency. Fruit picking is very arduous work, including judging the ripeness of the fruit. In agricultural development, therefore, the introduction of intelligent robots can greatly save labour and improve labour productivity. To guarantee that the mobile robot can accurately pick fruit in the orchard, two problems need to be solved:
(1) Because fruit grows randomly and its growing environments are complex and varied, the picking robot must accurately locate, detect and classify the fruit in the natural environment.
(2) The lidar sensor has a limited scanning observation distance and is easily influenced by complex geometric structures in the environment; the camera places certain requirements on the lighting conditions around the robot; and the encoder motor produces cumulative error over long periods of operation.
Disclosure of Invention
The invention provides a control method and a control system for an intelligent fruit picking robot. A multi-modal attention fusion network is added to the detection network of the original YOLOv4 deep learning network model, so that the global features and local features of the fruit are fused, the recall rate of fruit detection and recognition is improved, and the central point of the target image is analysed to obtain the central coordinate information of the target image, realizing the positioning and grabbing functions. By acquiring the position of the fruit to be picked, the mobile robot equipped with the picking mechanical arm performs autonomous motion path navigation for picking planning, which improves obstacle avoidance and movement during picking.
The invention provides a control method of an intelligent fruit picking robot, which comprises the following steps:
constructing a real-time three-dimensional map and environment map information of a farm to be picked;
planning a motion path of a mobile robot carrying a picking mechanical arm when the mobile robot moves on the farm to be picked according to the real-time three-dimensional map and the environment map information of the farm to be picked;
constructing a YOLOv4 depth network model according to the depth image characteristic information of different fruits to be picked;
the method comprises the steps of shooting on site to obtain image information of fruits to be picked at a target position, carrying out target recognition on the image information by utilizing a YOLOv4 depth network model, and obtaining central point coordinate information of the fruits to be picked;
and controlling the mobile robot to move to a target position according to the coordinate information of the central point of the fruit to be picked and the planned motion path of the mobile robot, and controlling a picking mechanical arm on the mobile robot to pick and grab the fruit.
Further, the constructing of the real-time three-dimensional map of the farm to be picked includes:
according to the scene of the farm to be picked and the obstacles in the farm to be picked, a real-time three-dimensional map of the farm to be picked is constructed by utilizing a three-dimensional physical simulation platform Gazebo.
Further, the constructing the environment map information includes:
acquiring point cloud data of an environmental terrain of a farm to be picked by using a laser radar carried on a mobile robot;
and processing the point cloud data on a Raspberry Pi, and obtaining the environment map information of the farm to be picked in two-dimensional grid form by using the gmapping algorithm.
Further, according to the real-time three-dimensional map and the environmental map information of the farm to be picked, planning a motion path of the mobile robot carrying the picking mechanical arm when the farm to be picked moves, the method includes:
according to the environment map information, simulating the motion of the mobile robot by using the ROS three-dimensional visualization tool Rviz, and planning the motion path of the mobile robot carrying the picking mechanical arm;
the planned motion path is improved and optimized by performing joint simulation on the planned motion path and the real-time three-dimensional map;
the motion path comprises an obstacle avoidance motion path of the mobile robot and a motion path during picking.
Further, the constructing a YOLOv4 depth network model according to the depth image feature information of different fruits to be picked includes:
acquiring single-mode image data and multi-mode image data of different fruits to be picked;
constructing a multi-modal attention fusion network by using a part of the single-modal image data and the multi-modal image data, and increasing the receptive field of the multi-modal attention fusion network by adopting multi-scale feature fusion;
extracting image depth features of the multi-modal attention fusion network, and calibrating an image depth feature channel in a self-adaptive manner by using an attention mechanism;
fusing the calibrated image depth features with visible light to construct a fused multi-modal attention fusion network;
and training the fused multi-modal attention fusion network by using the other part of the single-modal image data and the multi-modal image data to obtain a YOLOv4 depth network model.
Further, simulating, by using the ROS three-dimensional visualization tool Rviz, the obstacle avoidance motion path of the mobile robot carrying the picking mechanical arm and its motion path during picking includes:
respectively acquiring motion data of the mobile robot by using an inertia measurement unit and a wheel type odometer which are arranged on the mobile robot;
fusing the motion data of the mobile robot and point cloud data of the environmental terrain of the farm to be picked, which is acquired by a laser radar carried on the mobile robot, by using a Kalman filtering algorithm;
according to the fused data, a motion map of the mobile robot is constructed by utilizing an SLAM algorithm, and the mobile robot in motion is positioned;
and planning a path of the mobile robot according to the fused data.
Further, the constructing a motion map of the mobile robot by using a SLAM algorithm according to the fused data and positioning the mobile robot in motion comprises:
constructing a map of the mobile robot during movement by using the Cartographer algorithm;
and positioning the mobile robot by using an AMCL algorithm.
Further, the planning a path of the mobile robot according to the fused data includes:
planning the global motion path of the mobile robot by using the heuristic-search A* algorithm;
and planning a local motion path of the mobile robot by adopting a TEB algorithm.
Further, the controlling the mobile robot to move to the target position and controlling the picking mechanical arm on the mobile robot to pick according to the coordinate information of the central point of the fruit to be picked and the planned motion path of the mobile robot includes:
establishing a kinematic equation according to the coordinate information of the central point of the fruit to be picked;
performing inverse kinematics solution on a kinematics equation;
and simultaneously controlling the mobile robot to move and the picking mechanical arm to grab by using the ROS system, so that the mobile robot moves to the position of the fruit to be picked, and the picking mechanical arm grabs the fruit to be picked.
The invention provides a control system of an intelligent fruit picking robot, comprising:
the map information construction module is used for constructing a real-time three-dimensional map and environment map information of the farm to be picked;
the motion path acquisition module is used for planning a motion path of the mobile robot carrying the picking mechanical arms when the mobile robot moves on the farm to be picked according to the real-time three-dimensional map and the environment map information of the farm to be picked;
the model building module is used for building a YOLOv4 depth network model according to the depth image characteristic information of different fruits to be picked;
the identification module is used for shooting and acquiring image information of fruits to be picked at a target position on the spot, and performing target identification on the image information by using a YOLOv4 depth network model to acquire central point coordinate information of the fruits to be picked;
and the picking execution module is used for controlling the mobile robot to move to a target position according to the coordinate information of the central point of the fruit to be picked and the planned motion path of the mobile robot, and controlling a picking mechanical arm on the mobile robot to pick and grab the fruit.
Compared with the prior art, the invention has the beneficial effects that:
in order to enable the intelligent picking robot to rapidly and accurately detect and identify fruits of different ripeness throughout the day in the complex and varied conditions of orchards, such as different illumination intensities and overlapping, occluding fruit and leaves, the invention provides a fruit detection and recognition method that improves YOLOv4: a multi-scale feature fusion module, Spatial Pyramid Pooling, is added to the detection network of the original YOLOv4 deep learning network model, so that the global features and local features of the fruit are fused, the recall rate of fruit detection and recognition is improved, and the central point of the target image is analysed to obtain the central coordinate information of the target image, realizing the positioning and grabbing functions. In addition, aiming at problems of the traditional RBPF-SLAM algorithm such as large error in the particle proposal distribution, particle depletion and long running time, the invention fuses data among the lidar, the inertial measurement unit and the wheel odometer and proposes a new multi-sensor fusion SLAM method: a robot motion model is established based on the fusion of the inertial measurement unit and the wheel odometer, and the particle proposal distribution is optimized by fusing the lidar observation information with the motion model, which solves the problems of large error in the system's predicted distribution and particle memory explosion; the particle resampling strategy is also improved, which maintains particle diversity and slows particle depletion.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a schematic flow chart of a control method of an intelligent fruit picking robot provided by the invention;
FIG. 2 is a schematic diagram of the control system of the control method of the intelligent fruit picking robot provided by the invention;
fig. 3 is a farm three-dimensional map established in an embodiment of the control method of the intelligent fruit picking robot according to the invention;
fig. 4 is a two-dimensional grid map of a farm in an embodiment of the control method of the intelligent fruit picking robot according to the invention;
fig. 5 is a hardware connection diagram of the intelligent fruit picking robot in the control method of the intelligent fruit picking robot provided by the invention.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings in the embodiments of the present invention, but it should be understood that the scope of the present invention is not limited by the specific embodiments.
Example 1
As shown in fig. 1-4, the present invention provides a control method of an intelligent fruit picking robot, comprising the following steps:
step 1: constructing a real-time three-dimensional map and environment map information of a farm to be picked;
step 2: planning a motion path of a mobile robot carrying a picking mechanical arm when the mobile robot moves on a farm to be picked according to a real-time three-dimensional map and environment map information of the farm to be picked;
Step 3: constructing a YOLOv4 depth network model according to the depth image characteristic information of different fruits to be picked;
Step 4: utilizing a depth camera to shoot on site and obtain image information of the fruits to be picked at the target position, utilizing the YOLOv4 depth network model to carry out target recognition on the image information, and obtaining the central point coordinate information of the fruits to be picked;
Step 5: controlling the mobile robot to move to the target position according to the central point coordinate information of the fruit to be picked and the planned motion path of the mobile robot, and controlling the picking mechanical arm on the mobile robot to pick and grab the fruit.
The step 1 specifically comprises the following steps:
step 1.1: constructing a real-time three-dimensional map of a farm to be picked, comprising:
and according to the scene of the farm to be picked and the obstacles in the farm to be picked, constructing a real-time three-dimensional map of the farm to be picked by utilizing a three-dimensional physical simulation platform Gazebo.
The real-time three-dimensional map is used for simulating the problems encountered or about to be encountered by the picking robot in the actual picking operation, so that the multi-physical combined scene simulation of the farm to be picked is realized.
Step 1.2: constructing environment map information, comprising:
acquiring point cloud data of an environmental terrain of a farm to be picked by using a laser radar carried on a mobile robot;
and processing the point cloud data on a Raspberry Pi, and obtaining the environment map information of the farm to be picked in two-dimensional grid form by using the gmapping algorithm.
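As a generic illustration of the kind of two-dimensional grid map this step produces (not code from the patent), the sketch below projects lidar hit points onto an occupancy grid using the ROS convention of 100 = occupied; the 5 cm resolution and 200-cell size are assumed values:

```python
import numpy as np

def points_to_grid(points, resolution=0.05, size=200):
    """Project (N, 2) lidar hit points in metres onto a square occupancy
    grid centred on the robot; occupied cells get the ROS value 100."""
    grid = np.zeros((size, size), dtype=np.int8)
    half = size // 2
    ix = np.floor(points[:, 0] / resolution).astype(int) + half
    iy = np.floor(points[:, 1] / resolution).astype(int) + half
    keep = (ix >= 0) & (ix < size) & (iy >= 0) & (iy < size)
    grid[iy[keep], ix[keep]] = 100  # mark each in-bounds hit as occupied
    return grid
```

gmapping additionally ray-traces free space and fuses scans probabilistically; this sketch only shows the hit-marking half of that process.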
The step 2 specifically comprises the following steps:
step 2.1: according to environment map information, the motion of the mobile robot is simulated by utilizing ROS three-dimensional visual chemical industry Rviz, and the motion path of the mobile robot with the picking mechanical arm is planned, and the method comprises the following steps:
and respectively acquiring the motion data of the mobile robot by using an inertia measurement unit and a wheel type odometer which are arranged on the mobile robot. The data acquired by the inertia measurement unit are the acceleration and the angular velocity of the mobile robot respectively; the data acquired by the wheel type odometer is a speed value measured by an encoder on the mobile robot.
The motion data of the mobile robot are fused, by using a Kalman filtering algorithm, with the point cloud data of the environmental terrain of the farm to be picked acquired by the lidar carried on the mobile robot. The specific fusion process is as follows:
initializing the current pose of the robot and the Kalman filter, resolving the data acquired by the inertial measurement unit, resolving the data acquired by the wheel odometer, and fusing the resolved motion data of the mobile robot with the acquired point cloud data of the environmental terrain of the farm to be picked by the Kalman filtering algorithm to obtain the fused data.
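A one-dimensional caricature of the predict/update cycle just described may help fix ideas (the noise values are hypothetical, not figures from the patent): the wheel-odometer displacement drives the prediction, and a lidar-derived pose fix corrects it.

```python
def kf_step(x, P, u, z, Q=0.01, R=0.1):
    """One Kalman predict/update cycle on a scalar pose.
    x, P: current estimate and its variance; u: odometry displacement;
    z: lidar pose observation; Q, R: motion and observation noise."""
    x_pred = x + u                 # predict with the wheel-odometer motion
    P_pred = P + Q                 # motion noise inflates the uncertainty
    K = P_pred / (P_pred + R)      # gain: how much to trust the lidar fix
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred    # fusing the observation shrinks variance
    return x_new, P_new
```

The real system is multi-dimensional (position plus heading, with IMU accelerations and angular rates), but the structure of the fusion is the same.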
According to the fused data, a motion map of the mobile robot is constructed by utilizing an SLAM algorithm, and the mobile robot in motion is positioned;
and planning a path of the mobile robot according to the fused data.
The method for constructing the motion map of the mobile robot by utilizing the SLAM algorithm and positioning the mobile robot in motion comprises the following steps:
constructing a map of the mobile robot during movement by using the Cartographer algorithm;
and positioning the mobile robot by utilizing an AMCL algorithm.
According to the fused data, path planning is carried out on the mobile robot, and the method comprises the following steps:
the method comprises the following steps that (1) because the motion path planning aspect of the mobile robot is mainly divided into global path planning and local path planning, the global motion path of the mobile robot is planned by using an A-x algorithm of heuristic search;
and planning a local motion path of the mobile robot by adopting a TEB algorithm. The local path algorithm is used for optimizing and correcting the initial track generated by the global path.
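The global planner named above can be sketched as a plain grid A* with a Manhattan heuristic. This is a generic illustration of the algorithm, not the patent's implementation (the TEB local planner that refines the result is considerably more involved):

```python
import heapq

def astar(grid, start, goal):
    """A* over a 2D grid of 0 (free) / 1 (blocked) cells, 4-connected.
    Returns the list of cells from start to goal, or None if unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible
    frontier = [(h(start), 0, start, None)]  # (f, g, cell, parent)
    parents, best_g = {}, {start: 0}
    while frontier:
        _, g, cell, parent = heapq.heappop(frontier)
        if cell in parents:
            continue                 # already expanded via a shorter route
        parents[cell] = parent
        if cell == goal:             # walk the parent chain back to start
            path = [cell]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dx, cell[1] + dy)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, cell))
    return None
```

In the ROS stack the global planner runs over the gmapping occupancy grid, and the TEB local planner then smooths and time-parameterizes the resulting track around newly observed obstacles.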
Step 2.2: and improving and optimizing the planned motion path by performing joint simulation on the planned motion path and the real-time three-dimensional map.
The motion path comprises an obstacle avoidance motion path of the mobile robot and a moving motion path during picking. By improving and optimizing the motion path of the mobile robot, the optimal motion path of the mobile robot during picking can be obtained.
In step 3, since the fruits to be picked in actual harvesting span many varieties with different colours and shapes, it is clearly impractical to distinguish dense branches and fruits by shape and colour alone. Marking the fruit entirely through image processing is very time-consuming, and for classification two variables fall far short of the picking goal, so target detection must rely on a more robust classification concept. The deep convolutional neural network is therefore introduced, and target detection is performed on its basis, with a YOLO lightweight algorithm network selected for the task. Accordingly, the construction of the YOLOv4 depth network model in step 3 specifically includes:
respectively acquiring single-mode image data and multi-mode image data of different fruits to be picked;
constructing a multi-modal attention fusion network by using a part of single-modal image data and multi-modal image data, and increasing the receptive field of the multi-modal attention fusion network by adopting multi-scale feature fusion;
extracting image depth features of the multi-modal attention fusion network, and calibrating an image depth feature channel in a self-adaptive manner by using an attention mechanism;
fusing the calibrated image depth features with visible light to construct a fused multi-modal attention fusion network;
and training the fused multi-modal attention fusion network by using the other part of the single-modal image data and the multi-modal image data to obtain a YOLOv4 deep network model.
Target detection research on the fruits to be picked is carried out on the single-modal image data and multi-modal image data of the different fruits: the proposed multi-modal attention fusion network adopts multi-scale feature fusion to enlarge the receptive field and uses an attention mechanism to calibrate the feature channels adaptively, so that the network attends to the more important information. Moreover, a target detection algorithm based only on visible-light images is prone to false and missed detections under strong light and shadow. Therefore, by fusing the visible-light and depth image features of the fruits to be picked, when some modal information changes greatly or is missing, the accuracy and robustness of the detector are improved overall through information complementarity between the modalities.
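The adaptive channel calibration described above can be illustrated with a squeeze-and-excitation style gate, a common form of channel attention; the layer sizes and weights below are placeholders, not the network in the patent:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Recalibrate a (C, H, W) feature map: global average pool per
    channel, a small two-layer bottleneck, then a sigmoid gate that
    scales each channel by its learned importance."""
    s = feat.mean(axis=(1, 2))              # squeeze: one scalar per channel
    h = np.maximum(w1 @ s, 0.0)             # excitation layer 1, ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ h)))  # sigmoid weights in (0, 1)
    return feat * gate[:, None, None]       # reweight the channels
```

With trained weights the gate suppresses channels that respond to the missing or degraded modality and amplifies the informative ones, which is the complementarity effect the paragraph above describes.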
The designed vision system is then transplanted onto a Raspberry Pi 4B development board, and a physical experiment is carried out on the embedded platform and its results analysed.
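The centre-point extraction of step 4 is straightforward once a bounding box and a depth reading are available. A hedged sketch using pinhole back-projection follows; the camera intrinsics are assumed values for illustration, not figures supplied by the patent:

```python
def fruit_center_3d(box, depth_m, fx, fy, cx, cy):
    """Back-project the centre of a YOLOv4 bounding box into camera
    coordinates. box = (x_min, y_min, x_max, y_max) in pixels;
    depth_m = depth-camera reading at the centre pixel, in metres;
    fx, fy, cx, cy = pinhole intrinsics from the camera calibration."""
    u = (box[0] + box[2]) / 2.0   # bounding-box centre, pixel coordinates
    v = (box[1] + box[3]) / 2.0
    x = (u - cx) * depth_m / fx   # pinhole model: pixel offset to metres
    y = (v - cy) * depth_m / fy
    return x, y, depth_m
```

The resulting camera-frame point still has to be transformed into the mechanical arm's base frame before the kinematic equations of step 5 can use it.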
The step 5 specifically comprises the following steps:
establishing a kinematic equation according to the coordinate information of the central point of the fruit to be picked;
performing inverse kinematics solution on a kinematics equation;
and simultaneously controlling the mobile robot to move and the picking mechanical arm to grab by using the ROS system, so that the mobile robot moves to the position of the fruit to be picked, and the picking mechanical arm grabs the fruit to be picked.
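For a manipulator reduced to two planar links, the inverse kinematics solution mentioned above has a familiar closed form. The actual picking mechanical arm has more joints, so treat this only as a sketch of the computation step 5 performs:

```python
import math

def ik_two_link(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar 2-link arm
    (elbow-down branch). Returns joint angles (t1, t2) in radians,
    or None when the target lies outside the reachable annulus."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        return None                      # target out of reach
    t2 = math.acos(c2)                   # elbow angle from the cosine rule
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2
```

Arms with more degrees of freedom generally need numerical or library solvers (for example, MoveIt's IK plugins in ROS) rather than a closed form.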
Example 2
The invention provides a control system of an intelligent fruit picking robot, which comprises:
the map information construction module is used for constructing a real-time three-dimensional map and environment map information of the farm to be picked;
the motion path acquisition module is used for planning a motion path of the mobile robot carrying the picking mechanical arms when the mobile robot moves on the farm to be picked according to the real-time three-dimensional map and the environment map information of the farm to be picked;
the model building module is used for building a YOLOv4 depth network model according to the depth image characteristic information of different fruits to be picked;
the identification module is used for shooting and acquiring image information of fruits to be picked at a target position in the field, and performing target identification on the image information by using a YOLOv4 depth network model to acquire central point coordinate information of the fruits to be picked;
and the picking execution module is used for controlling the mobile robot to move to a target position according to the coordinate information of the central point of the fruit to be picked and the planned motion path of the mobile robot, and controlling a picking mechanical arm on the mobile robot to pick and grab the fruit.
The present invention will be described in further detail with reference to specific examples.
1. A real-time three-dimensional map of the farm to be picked, which is built by using a three-dimensional physical simulation platform Gazebo, is shown in fig. 3.
2. The environment map information of the farm to be picked established by the mapping algorithm is shown in fig. 4.
3. In an Ubuntu 20.04 LTS system on the PC side, the single-modal image data and multi-modal image data of the fruits to be picked are trained in the multi-modal attention fusion network to generate the YOLOv4 depth network model, and the recognition performance of the model is tested.
An Orbbec Astra depth camera is used to shoot on site and obtain image information of the fruits to be picked; the camera collects images of the fruit trees, and the YOLOv4 depth network model performs target recognition on the relevant fruit information to obtain the central point coordinate information of the fruits.
By fusing the visible-light and depth image features of the fruits to be picked and selecting a detection data set containing the picked fruit, the feasibility of the YOLO algorithm is verified; the experimental recognition results for the collected fruit pictures are obtained by shooting on site with the existing ROS depth camera; and the grabbing of the picking mechanical arm is combined with YOLO target detection to comprehensively test the fruit picking accuracy.
4. The hardware connection diagram of the picking robot in the invention is shown in fig. 5. It comprises: a Raspberry Pi 4B development board, an MCU controller and a motion mechanism, with a lidar, an ROS depth camera, ultrasonic sensors, the picking mechanical arm, infrared through-beam tubes and remote control equipment carried on the motion mechanism. The motion mechanism is a Mecanum-wheel vehicle.
5. The control system adopts a modular design: an STM32F4 series chip serves as the main controller and drives each joint of the picking mechanical arm, the grabbing and releasing actions of the end effector are controlled in an upper computer (PC) plus lower computer (single-chip microcomputer) arrangement, and on this basis the upper computer control interface is designed with the QT visual programming software.
6. A communication node program for the bottom control board is written in the ROS system in the C++ language to realize information interaction, thereby realizing real-time control of the Mecanum-wheel vehicle and the picking mechanical arm.
7. When picking work is carried out, the robot is placed in a picking environment, the initial pose of a picking manipulator is determined, each joint and the control system are determined to be in a normal state, and the initialization of the control page is waited to be completed.
8. After initialization is completed, the picking target position and the optimal path are set, the motion parameters of the picking mechanical arm are configured, and the picking operation is started; the picking mechanical arm then delivers the end effector to the fruit picking position according to these settings.
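Steps 7 and 8 above amount to a simple sequential state machine; a hypothetical skeleton (not the actual control code of the invention) could look like:

```python
from enum import Enum, auto

class PickState(Enum):
    INIT = auto()     # verify joints and control system are normal
    READY = auto()    # control page initialization completed
    MOVING = auto()   # driving to the target along the planned path
    PICKING = auto()  # mechanical arm delivers the end effector to the fruit
    DONE = auto()

def step(state: PickState) -> PickState:
    """Advance the picking sequence one stage; DONE is terminal."""
    order = list(PickState)
    return order[min(order.index(state) + 1, len(order) - 1)]
```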
Finally, it should be noted that the above disclosure is only one specific embodiment of the present invention; however, the present invention is not limited thereto, and any modification conceivable to those skilled in the art shall fall within the protection scope of the present invention.

Claims (10)

1. A control method of an intelligent fruit picking robot is characterized by comprising the following steps:
constructing a real-time three-dimensional map and environment map information of a farm to be picked;
planning a motion path of a mobile robot carrying a picking mechanical arm when the mobile robot moves on the farm to be picked according to the real-time three-dimensional map and the environment map information of the farm to be picked;
constructing a YOLOv4 depth network model according to the depth image characteristic information of different fruits to be picked;
the method comprises the steps of shooting on the spot to obtain image information of fruits to be picked at a target position, carrying out target recognition on the image information by utilizing a YOLOv4 depth network model, and obtaining central point coordinate information of the fruits to be picked;
and controlling the mobile robot to move to a target position according to the coordinate information of the central point of the fruit to be picked and the planned motion path of the mobile robot, and controlling a picking mechanical arm on the mobile robot to pick and grab the fruit.
2. The control method of an intelligent fruit picking robot according to claim 1, characterized in that: the construction of the real-time three-dimensional map of the farm to be picked comprises the following steps:
according to the scene of the farm to be picked and the obstacles in the farm to be picked, a real-time three-dimensional map of the farm to be picked is constructed by utilizing a three-dimensional physical simulation platform Gazebo.
3. The control method of an intelligent fruit picking robot according to claim 2, characterized in that: the constructing of the environment map information includes:
acquiring point cloud data of an environmental terrain of a farm to be picked by using a laser radar carried on a mobile robot;
and processing the point cloud data on the Raspberry Pi, and obtaining the environment map information of the farm to be picked in a two-dimensional grid form by using the gmapping algorithm.
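The two-dimensional grid form produced by gmapping is an occupancy grid; as a rough illustration (a minimal sketch, not the gmapping algorithm itself), 2-D laser returns can be binned into grid cells as follows, with an assumed 5 cm resolution:

```python
def to_occupancy_grid(points_xy, resolution=0.05, size=100):
    """Mark grid cells hit by 2-D laser points; origin at the grid center.
    Cell values: 0 = free/unknown, 1 = occupied."""
    grid = [[0] * size for _ in range(size)]
    half = size // 2
    for x, y in points_xy:
        i = int(round(x / resolution)) + half   # column index from x
        j = int(round(y / resolution)) + half   # row index from y
        if 0 <= i < size and 0 <= j < size:
            grid[j][i] = 1
    return grid
```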
4. The control method of an intelligent fruit picking robot according to claim 3, characterized in that: planning the motion path of the mobile robot carrying the picking mechanical arm when moving on the farm to be picked, according to the real-time three-dimensional map and the environment map information of the farm to be picked, comprises:
according to the environment map information, simulating the motion of the mobile robot by using the ROS three-dimensional visualization tool Rviz, and planning the motion path of the mobile robot carrying the picking mechanical arm;
improving and optimizing the planned motion path through joint simulation of the planned motion path and the real-time three-dimensional map;
the motion path comprises an obstacle avoidance motion path of the mobile robot and a motion path during picking.
5. The control method of an intelligent fruit picking robot according to claim 1, characterized in that: the method for constructing the YOLOv4 depth network model according to the depth image characteristic information of different fruits to be picked comprises the following steps:
acquiring single-mode image data and multi-mode image data of different fruits to be picked;
constructing a multi-modal attention fusion network by using a part of the single-modal image data and the multi-modal image data, and increasing the receptive field of the multi-modal attention fusion network by adopting multi-scale feature fusion;
extracting image depth features of the multi-modal attention fusion network, and calibrating an image depth feature channel in a self-adaptive manner by using an attention mechanism;
fusing the calibrated image depth features with visible light to construct a fused multi-modal attention fusion network;
and training the fused multi-modal attention fusion network by using the other part of the single-modal image data and the multi-modal image data to obtain a YOLOv4 depth network model.
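The adaptive channel calibration by an attention mechanism in claim 5 resembles squeeze-and-excitation channel weighting; the following pure-Python sketch is a hypothetical illustration of that idea, not the patented network (a feature map is given as C channels of H×W lists, and `w1`, `w2` are assumed learned weight matrices):

```python
import math

def channel_attention(feat, w1, w2):
    """Recalibrate the channels of a (C, H, W) feature map:
    squeeze (global average pool) -> excite (two linear layers, ReLU then
    sigmoid) -> scale each channel by its learned weight."""
    # squeeze: one scalar per channel
    squeezed = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
                for ch in feat]
    # excite: hidden = ReLU(w1 @ squeezed), weights = sigmoid(w2 @ hidden)
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeezed)))
              for row in w1]
    weights = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
               for row in w2]
    # scale: per-channel recalibration
    return [[[v * wt for v in row] for row in ch]
            for ch, wt in zip(feat, weights)]
```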
6. The control method of an intelligent fruit picking robot according to claim 4, characterized in that: simulating, by using the ROS three-dimensional visualization tool Rviz, the obstacle avoidance motion path of the mobile robot carrying the picking mechanical arm and its motion path during picking comprises:
respectively acquiring motion data of the mobile robot by using an inertia measurement unit and a wheel type odometer which are arranged on the mobile robot;
fusing the motion data of the mobile robot and point cloud data of the environmental terrain of the farm to be picked, which is acquired by a laser radar carried on the mobile robot, by using a Kalman filtering algorithm;
according to the fused data, a motion map of the mobile robot is constructed by utilizing an SLAM algorithm, and the mobile robot in motion is positioned;
and planning a path of the mobile robot according to the fused data.
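The Kalman-filter fusion of claim 6 can be illustrated in one dimension, for example fusing a heading estimate from the wheel odometer with an IMU heading measurement; this is a minimal scalar sketch with assumed noise values, not the filter actually used by the invention:

```python
def kalman_fuse(x, p, z, r, q=0.01):
    """One predict/update cycle of a scalar Kalman filter.
    x, p -- state estimate and its variance (e.g. odometer heading)
    z, r -- measurement and its variance (e.g. IMU heading)
    q    -- assumed process-noise variance"""
    p = p + q                # predict: process noise inflates uncertainty
    k = p / (p + r)          # Kalman gain
    x = x + k * (z - x)      # update: pull the estimate toward the measurement
    p = (1.0 - k) * p        # updated (reduced) variance
    return x, p
```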
7. The control method of an intelligent fruit picking robot according to claim 6, characterized in that: the method for constructing the motion map of the mobile robot by utilizing the SLAM algorithm according to the fused data and positioning the mobile robot in motion comprises the following steps:
constructing a map of the mobile robot during movement by using the Cartographer algorithm;
and positioning the mobile robot by using an AMCL algorithm.
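AMCL is a particle-filter (Monte Carlo) localization method; its low-variance resampling step, which concentrates particles on high-probability poses, can be sketched as follows (an illustrative textbook routine, not the AMCL package internals):

```python
import random

def resample(particles, weights):
    """Low-variance resampling: particles with higher weights are
    duplicated, low-weight particles die out; output size equals input."""
    n = len(particles)
    total = sum(weights)
    step = total / n
    r = random.uniform(0.0, step)   # single random offset for all draws
    out, c, i = [], weights[0], 0
    for m in range(n):
        u = r + m * step            # evenly spaced sampling points
        while u > c:
            i += 1
            c += weights[i]
        out.append(particles[i])
    return out
```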
8. The control method of an intelligent fruit picking robot according to claim 6, characterized in that: the path planning of the mobile robot according to the fused data comprises the following steps:
planning the global motion path of the mobile robot by using the heuristic search A* algorithm;
and planning a local motion path of the mobile robot by adopting a TEB algorithm.
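The global planning of claim 8 can be illustrated with A* on a small 4-connected occupancy grid (a generic textbook implementation with a Manhattan-distance heuristic, not the planner configuration of the invention):

```python
import heapq

def a_star(grid, start, goal):
    """A* search on a 4-connected occupancy grid (1 = obstacle).
    Returns the path as a list of (row, col) cells, or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f, g, cell, path)
    seen = set()
    while open_set:
        f, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None
```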
9. The control method of an intelligent fruit picking robot according to claim 1, characterized in that: the method for controlling the mobile robot to move to a target position and controlling a picking mechanical arm on the mobile robot to grab and pick according to the coordinate information of the central point of the fruit to be picked and the planned motion path of the mobile robot comprises the following steps:
establishing a kinematic equation according to the coordinate information of the central point of the fruit to be picked;
performing inverse kinematics solution on a kinematics equation;
and simultaneously controlling the mobile robot to move and the picking mechanical arm to grab by using the ROS system, so that the mobile robot moves to the position of the fruit to be picked, and the picking mechanical arm grabs the fruit to be picked.
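The inverse kinematics solution of claim 9 can be illustrated on a planar two-link arm, for which a closed-form solution exists (a textbook example, not the kinematic model of the actual picking mechanical arm):

```python
import math

def ik_two_link(x, y, l1, l2):
    """Closed-form inverse kinematics of a planar 2-link arm with link
    lengths l1, l2: return joint angles (theta1, theta2) whose forward
    kinematics reach the end-effector target (x, y)."""
    d2 = x * x + y * y
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= cos_t2 <= 1.0:
        raise ValueError("target out of reach")
    t2 = math.acos(cos_t2)                       # elbow-down solution
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2
```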
10. A control system of an intelligent fruit picking robot, characterized by comprising:
the map information construction module is used for constructing a real-time three-dimensional map and environment map information of the farm to be picked;
the motion path acquisition module is used for planning a motion path of the mobile robot carrying the picking mechanical arms when the mobile robot moves on the farm to be picked according to the real-time three-dimensional map and the environment map information of the farm to be picked;
the model building module is used for building a YOLOv4 depth network model according to the depth image characteristic information of different fruits to be picked;
the identification module is used for shooting and acquiring image information of fruits to be picked at a target position in the field, and performing target identification on the image information by using a YOLOv4 depth network model to acquire central point coordinate information of the fruits to be picked;
and the picking execution module is used for controlling the mobile robot to move to a target position according to the coordinate information of the central point of the fruit to be picked and the planned motion path of the mobile robot, and controlling a picking mechanical arm on the mobile robot to pick and grab the fruit.
CN202211109274.5A 2022-09-13 2022-09-13 Control method and control system of intelligent fruit picking robot Active CN115299245B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211109274.5A CN115299245B (en) 2022-09-13 2022-09-13 Control method and control system of intelligent fruit picking robot

Publications (2)

Publication Number Publication Date
CN115299245A true CN115299245A (en) 2022-11-08
CN115299245B CN115299245B (en) 2023-07-14

Family

ID=83866338

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211109274.5A Active CN115299245B (en) 2022-09-13 2022-09-13 Control method and control system of intelligent fruit picking robot

Country Status (1)

Country Link
CN (1) CN115299245B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109978949A (en) * 2019-03-26 2019-07-05 南开大学 A kind of method that crops identification based on computer vision is extracted with characteristic point three-dimensional coordinate
WO2019179270A1 (en) * 2018-03-23 2019-09-26 广州极飞科技有限公司 Plant planting data measuring method, working route planning method, device and system
CN112243698A (en) * 2020-10-22 2021-01-22 安徽农业大学 Automatic walnut picking and collecting method based on multi-sensor fusion technology
CN113099848A (en) * 2021-06-04 2021-07-13 山东农业大学 Picking robot efficient picking, placing and transporting integrated platform and using method
CN113110513A (en) * 2021-05-19 2021-07-13 哈尔滨理工大学 ROS-based household arrangement mobile robot
CN114429310A (en) * 2022-02-08 2022-05-03 辽宁科技大学 Database-based agricultural picking object identification algorithm
US20220203547A1 (en) * 2020-12-31 2022-06-30 Plus One Robotics, Inc. System and method for improving automated robotic picking via pick planning and interventional assistance
CN114902872A (en) * 2022-04-26 2022-08-16 华南理工大学 Visual guidance method for picking fruits by robot

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Bai Yunhan: "Research on Semantic Map Construction Based on SLAM Algorithm and Deep Neural Network", Computer Applications and Software, no. 01 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115918377A (en) * 2022-12-21 2023-04-07 北京理工大学 Control method and control device of automatic tree fruit picking machine and automatic tree fruit picking machine
CN116977615A (en) * 2023-07-27 2023-10-31 广东技术师范大学 Litchi stem identification and positioning method and device
CN116977615B (en) * 2023-07-27 2024-04-30 广东技术师范大学 Litchi stem identification and positioning method and device
CN117617002A (en) * 2024-01-04 2024-03-01 太原理工大学 Method for automatically identifying tomatoes and intelligently harvesting tomatoes

Also Published As

Publication number Publication date
CN115299245B (en) 2023-07-14

Similar Documents

Publication Publication Date Title
CN111602517B (en) Distributed visual active perception method for string-type fruits and application of distributed visual active perception method
CN115299245A (en) Control method and control system of intelligent fruit picking robot
CN111693050B (en) Indoor medium and large robot navigation method based on building information model
WO2017028653A1 (en) Method and system for automatically establishing map indoors by mobile robot
CN111496770A (en) Intelligent carrying mechanical arm system based on 3D vision and deep learning and use method
CN113189977B (en) Intelligent navigation path planning system and method for robot
Jiang et al. Autonomous navigation system of greenhouse mobile robot based on 3D Lidar and 2D Lidar SLAM
WO2015180021A1 (en) Pruning robot system
CN113325837A (en) Control system and method for multi-information fusion acquisition robot
CN111462154A (en) Target positioning method and device based on depth vision sensor and automatic grabbing robot
CN109708644A (en) Mobile Robotics Navigation method, apparatus and mobile robot
EP4141474A1 (en) System for 3d surveying by an autonomous robotic vehicle using lidar-slam and an estimated point distribution map for path planning
CN216058333U (en) Intelligent movement fruit picking robot
CN114407030A (en) Autonomous navigation distribution network live working robot and working method thereof
CN114080905A (en) Picking method based on digital twins and cloud picking robot system
CN116494201A (en) Monitoring integrated power machine room inspection robot and unmanned inspection method
Chen et al. Design and implementation of an artificial intelligence of things-based autonomous mobile robot system for pitaya harvesting
Ju Application of autonomous navigation in robotics
CN115890677B (en) Dead chicken picking robot for standardized cage chicken house and method thereof
CN116620802A (en) Transportation method utilizing indoor construction intelligent material transportation system
CN116852352A (en) Positioning method for mechanical arm of electric secondary equipment based on ArUco code
CN116352722A (en) Multi-sensor fused mine inspection rescue robot and control method thereof
CN115589845A (en) Intelligent cotton picking robot and cotton picking operation path planning method thereof
CN115690343A (en) Robot laser radar scanning and mapping method based on visual following
CN115139315A (en) Grabbing motion planning method for picking mechanical arm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant