CN115629609A - Multi-AGV collaborative work system - Google Patents

Multi-AGV collaborative work system

Info

Publication number
CN115629609A
CN115629609A (application CN202211323178.0A)
Authority
CN
China
Prior art keywords
module
agv
motion
path
planning module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211323178.0A
Other languages
Chinese (zh)
Inventor
任鹏举
宋翔
丁焱
景鑫
于航
郑南宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN202211323178.0A priority Critical patent/CN115629609A/en
Publication of CN115629609A publication Critical patent/CN115629609A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 — Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 — Control of position or course in two dimensions
    • G05D1/021 — Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 — Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 — Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0223 — Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A multi-AGV collaborative work system comprising: a plurality of AGVs, a controller, a motion planning module, a behavior-level planning module, a path resource allocation module, a perception module and a scheduling module, wherein the perception module comprises a single-AGV perception module and a perception fusion module. The system adopts an integrated controller based on the Zynq UltraScale+ platform as the processor, achieving high hardware integration, low power consumption and cost savings, together with a degree of universality and portability.

Description

Multi-AGV collaborative work system
Technical Field
The present disclosure belongs to the technical field of AGV intelligent control, and in particular relates to a multi-AGV collaborative work system.
Background
An Automated Guided Vehicle (AGV) is a transport device that can automatically perform material handling; it is usually navigated using radio, cameras, laser radar, or magnetic stripes, magnetic nails and two-dimensional codes marked on the ground. Compared with other logistics transportation equipment, an AGV has advantages such as strong adaptability, a high degree of automation, savings in labor cost and convenient maintenance. With the gradual increase of labor costs and increasingly varied production patterns, more and more enterprises adopt highly automated production systems, of which AGVs are an important component. Therefore, design research on AGVs is of great significance for improving production efficiency and reducing enterprises' production costs.
Compared with a single AGV, a multi-AGV system has a stronger capacity for carrying and transport operations; facing a complex and changeable factory environment, a multi-AGV system can respond quickly to changes in the external environment and complete tasks flexibly and efficiently, so developing multi-AGV systems is necessary. Compared with a single AGV, however, a multi-AGV system must solve problems such as multi-machine cooperation, information interaction and conflict resolution, so its design is more complex. At present, designing multi-AGV systems with high grouping ability, self-organization and self-adaptability has become a research hotspot, mainly covering the following directions: action analysis and control algorithms, autonomous perception and networking algorithms, multi-machine positioning algorithms, multi-machine autonomous dynamic decision and path planning algorithms, and multi-machine formation combination solutions and bionic behavior simulation algorithms, among others.
In addition, as deep learning develops, more and more deep learning algorithms are being applied to AGVs, such as deep-learning-based target detection and tracking and AGV control planning based on deep reinforcement learning. The embedded deployment of deep learning on AGVs has also become a key research topic in the AGV field.
Most AGVs with complex functions at the present stage are equipped with two or more processors, for example an industrial personal computer and a single-chip microcomputer used respectively for the perception-planning function and the bottom-layer control function, which brings considerable extra power consumption, material and development cost. Furthermore, many recent papers and patents focus on the ability of AGVs to perform advanced tasks or on expanding and improving existing functions, but these methods are applied in specific scenarios, such as adding temporary hardware to an AGV in a laboratory setting or designing a specific multi-AGV system for one factory environment. Such methods lack universality and migratability.
Disclosure of Invention
In order to solve the above technical problem, the present disclosure discloses a multi-AGV collaborative work system, comprising: a plurality of AGVs, a controller, a motion planning module, a behavior-level planning module, a path resource allocation module, a perception module and a scheduling module, wherein the perception module comprises a single-AGV perception module and a perception fusion module; wherein:
the controller receives instructions from the motion planning module and feeds back the running state of the AGV to the motion planning module;
the motion planning module receives the space-time channel sent by the behavior-level planning module and the motion mode sent by the scheduling module, and sends the vehicle body speed to the controller;
the behavior-level planning module receives the semantic-level behaviors sent by the path resource allocation module, determines whether each AGV starts changing lanes immediately or first travels some distance and then changes lanes, generates a space-time channel and sends it to the motion planning module;
the path resource allocation module receives the real-time scene information stream from the perception fusion module, allocates path resources according to preset rules, and sends semantic-level behaviors to the behavior-level planning module;
the single-AGV perception module detects the sizes and types of AGVs, pedestrians and goods in the scene and transmits the detection results to the perception fusion module;
the perception fusion module fuses the static environment scene, real-time AGV perception information and real-time monitoring results from sensors installed in the environment, thereby obtaining a global dynamic scene map, and transmits the real-time scene information stream to the path resource allocation module;
and the scheduling module is responsible for task allocation, multi-AGV cooperative path planning and traffic management control.
Through this technical scheme, the system adopts an integrated controller based on the Zynq UltraScale+ platform as the processor, achieving high hardware integration, low power consumption and cost savings, together with a degree of universality and portability.
Drawings
FIG. 1 is a diagram of the hardware and software architecture of a multi-AGV collaborative work system provided in one embodiment of the present disclosure;
FIG. 2 is a diagram of the AGV-end hardware and software architecture provided in one embodiment of the present disclosure;
FIG. 3 is a detailed design diagram of single-AGV control planning provided in one embodiment of the present disclosure;
FIG. 4 (a) to FIG. 4 (b) are schematic diagrams of obstacle avoidance measurement and control provided in an embodiment of the present disclosure;
FIG. 5 is a detailed design diagram of the motion planning module provided in one embodiment of the present disclosure;
FIG. 6 (a) to FIG. 6 (b) are space-time channel diagrams provided in one embodiment of the present disclosure;
FIG. 7 is a diagram of the bicycle model provided in one embodiment of the present disclosure;
FIG. 8 is a diagram of the Pure Pursuit algorithm provided in one embodiment of the present disclosure;
FIG. 9 is a diagram of the Stanley algorithm provided in one embodiment of the present disclosure;
FIG. 10 is a detailed design diagram of single-AGV positioning and perception provided in one embodiment of the present disclosure.
Detailed Description
In order to enable those skilled in the art to understand the technical solutions disclosed in the present disclosure, the technical solutions of the various embodiments will be described below with reference to the embodiments and the accompanying Figs. 1 to 10; the described embodiments are some, but not all, embodiments of the present disclosure. The terms "first," "second," and the like used in this disclosure are for distinguishing between different objects, not for describing a particular order. Furthermore, "include" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, product or device that comprises a list of steps or elements is not limited to the listed steps or elements, but may optionally include other steps or elements not expressly listed or inherent to such process, method, system, product or device.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the disclosure. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It will be appreciated by those skilled in the art that the embodiments described herein may be combined with other embodiments.
Referring to FIG. 1, in one embodiment, the present disclosure discloses a multi-AGV collaborative work system, comprising: a plurality of AGVs, a controller, a motion planning module, a behavior-level planning module, a path resource allocation module, a perception module and a scheduling module, wherein the perception module comprises a single-AGV perception module and a perception fusion module; wherein
The controller receives instructions from the motion planning module and feeds back the running state of the AGV to the motion planning module;
the motion planning module receives the space-time channel sent by the behavior-level planning module and the motion mode sent by the scheduling module, and sends the vehicle body speed to the controller;
the behavior-level planning module receives the semantic-level behaviors sent by the path resource allocation module, determines whether each AGV starts changing lanes immediately or first travels some distance and then changes lanes, generates a space-time channel and sends it to the motion planning module;
the path resource allocation module receives the real-time scene information stream from the perception fusion module, allocates path resources according to preset rules, and sends semantic-level behaviors to the behavior-level planning module;
the single-AGV perception module detects the sizes and types of AGVs, pedestrians and goods in the scene and transmits the detection results to the perception fusion module;
the perception fusion module fuses the static environment scene, real-time AGV perception information and real-time monitoring results from sensors installed in the environment, thereby obtaining a global dynamic scene map, and transmits the real-time scene information stream to the path resource allocation module;
and the scheduling module is responsible for task allocation, multi-AGV cooperative path planning and traffic management control.
With this embodiment, the multi-AGV collaborative work system architecture can be adapted to different kinds of AGVs and can operate in different scenarios. As can be seen from fig. 1, the software architecture is mainly divided into two parts: single-AGV positioning, environment perception and single-AGV control planning running on the end side (on each AGV), and multi-machine cooperative perception, cooperative decision planning and scheduling running on the cloud side. The end side and the cloud side exchange information through 5G or WiFi.
In the multi-machine cooperative perception part on the cloud side, a method for fusing the perception results of different AGVs is provided, resolving contradictions and differences among the observations of different AGVs and yielding a fused occupancy grid map.
In the multi-machine collaborative decision planning part on the cloud side, a multi-path channel passing rule and a corresponding priority-based centralized multi-AGV multi-path-channel lane change decision planning method are provided, which can be applied to complex environments containing obstacles or non-cooperative AGVs. As shown in figure 1, it is mainly divided into two modules: the path resource allocation module and the behavior-level planning module.
The multi-machine collaborative decision planning part on the cloud side also provides a flow-based multi-AGV global scheduling scheme: a multi-AGV collision-free path planning method based on expected path flow plans optimal global paths for the AGVs, and traffic management control is carried out through real-time flow, so that multi-AGV scheduling is completed efficiently and safely. The specific design mainly comprises the following seven parts:
1) Single-AGV perception module. The single-AGV perception module mainly uses sensors installed on each AGV to realize local scene perception, and is mainly responsible for detecting the sizes, categories, positions and bounding-box descriptors of AGVs, pedestrians and goods in the scene.
2) Perception fusion module. The perception fusion module mainly fuses the static environment scene, real-time AGV perception information and real-time monitoring results from sensors installed in the environment, thereby obtaining a global dynamic scene map, and transmits the real-time scene information stream to the path resource allocation module.
3) Path resource allocation module. It receives the real-time scene information stream from the perception fusion module, allocates path resources according to preset rules, and sends semantic-level behaviors to the behavior-level planning module.
4) Behavior-level planning module. It receives the semantic-level behaviors sent by the path resource allocation module, determines whether each AGV starts changing lanes immediately or first travels some distance and then changes lanes, generates a space-time channel and sends it to the motion planning module.
5) Scheduling module. The scheduling module is designed according to the flow-based multi-AGV global scheduling scheme.
6) Motion planning module. The motion planning module receives the space-time channel from the behavior-level planning module to realize lane changing. In addition, it receives the path sequence and path attributes issued by the scheduling module, which ensure continuous operation of the AGV. From the space-time channel, path sequence and path attributes, the motion planning module generates the vehicle body speed through Bezier curve generation, preview point tracking and similar steps, and sends it to the controller.
7) Controller design. The controller mainly receives the vehicle body speed from the motion planning module, performs motion solution, converts the speed into specific driver instructions and sends them to the actuators. Besides motion solution, the controller also comprises manual control, obstacle avoidance measurement and control, and other parts.
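The message flow among the seven parts above can be summarized as a small routing table. A minimal sketch in Python follows; the module and message names are illustrative (the patent describes the dataflow but defines no concrete API):

```python
# Producer -> (message, consumer) pairs mirroring the dataflow described above.
# All identifiers here are illustrative, not taken from the patent.
DATAFLOW = {
    "single_perception":        ("detections",        "perception_fusion"),
    "perception_fusion":        ("scene_stream",      "path_resource_allocation"),
    "path_resource_allocation": ("semantic_behavior", "behavior_level_planning"),
    "behavior_level_planning":  ("spacetime_channel", "motion_planning"),
    "scheduling":               ("path_sequence",     "motion_planning"),
    "motion_planning":          ("body_speed",        "controller"),
    "controller":               ("agv_status",        "motion_planning"),
}

def consumer_of(producer: str) -> str:
    """Return the module that consumes the given producer's output."""
    _, consumer = DATAFLOW[producer]
    return consumer
```

Encoding the wiring as data like this makes the loop visible: perception feeds planning, planning feeds the controller, and the controller's status closes the loop back to motion planning.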
In another embodiment,
the AGV hardware architecture of the multi-AGV cooperative work system adopts a Zynq-based UltraScale + development platform design and comprises a processing system PS and a programmable logic PL, wherein the processing system PS comprises a quad-core APU and a dual-core RPU, and the programmable logic PL is a programmable logic device FPGA; the hardware architecture of the server side of the multi-AGV cooperative work system comprises a CPU and a HiPU, wherein the HiPU is used for neural network reasoning acceleration and is suitable for processing a large number of images and point cloud data in real time.
For this embodiment, as shown in FIG. 2, the end-side (single-AGV) hardware architecture of the system adopts a Zynq UltraScale+-based development platform design. The board comprises a processing system (PS) and programmable logic (PL). The PS side comprises a quad-core APU and a dual-core RPU, and the PL side is an FPGA. A Linux system runs on the APU of the PS side, an RTOS runs on the RPU, and bidirectional inter-core communication between the APU and the RPU is realized through the OpenAMP framework. The cloud-side (server-side) hardware architecture mainly comprises a CPU and a HiPU. The HiPU is used for neural network inference acceleration and is suitable for processing large volumes of image and point cloud data in real time.
Adopting an integrated controller based on the Zynq UltraScale+ platform as the processor has the following advantages: 1) High hardware integration: a traditional architecture usually adopts an industrial personal computer and a single-chip microcomputer as the upper and lower computers respectively, whereas this design completes all information processing and control operations with a single board. 2) Low power consumption: the PS side of the integrated controller adopts low-power APUs and RPUs, so power consumption is much lower than that of the X86-architecture processors adopted by most industrial personal computers; the PL-side FPGA can accelerate compute-intensive applications such as deep learning inference, with much lower power consumption than the GPU solutions currently adopted by most AGVs. 3) Cost savings: Zynq UltraScale+ platform processor modules are priced at 1500-11000 yuan and can be selected according to the complexity of the actual application, which is cheaper than an industrial personal computer + GPU solution. 4) Wiring harness standardization: the unified multi-functional processing module design fixes the physical form of the interfaces, so connection terminals can be mass-produced.
This avoids the heavy workload of customizing wiring harnesses for the interface forms of different industrial personal computers, as current AGV enterprises must do, and facilitates standardized production. In particular, as the AGV industry gradually develops toward small-batch, flexible and customized production, unified wiring harness interfaces and a unified manufacturing mode can effectively improve production efficiency.
In another embodiment,
the controller, the motion planning module and the single-AGV perception module run on each AGV, while the behavior-level planning module, the path resource allocation module and the perception fusion module run on a server.
With this embodiment, a reasonable communication safety design solves the problems of high communication latency and communication interruption that may arise between the two sides. Since most multi-AGV systems must adopt wireless communication, a certain instability exists, and communication safety design must be considered. The decision planning of an AGV needs to interact with the cloud, while local decision planning requires high real-time performance and has low tolerance for communication delay; delayed information interaction with cooperating AGVs can degrade the reasonableness of decisions and even cause decision failure.
In another embodiment, the controller includes a motion solution module, a manual control module, and an obstacle avoidance module.
For this embodiment, the controller receives instructions from the motion planning module and feeds back the running state of the AGV to the motion planning module; as shown in fig. 3, the controller mainly includes a motion solution module, a manual control module and an obstacle avoidance module.
In another embodiment,
the motion solution module converts the vehicle body speed into specific driver instructions according to a motion model of the vehicle body, and is applicable to different types of AGVs.
With this embodiment, the motion solution first receives the vehicle body speed from the motion planning module and converts it into a front wheel steering angle and a rear wheel speed based on a motion model of the vehicle body. The motion solution can adopt the bicycle model and is also compatible with AGV motion models such as the Ackermann model, the differential model and actuator-level models, so it is applicable to different types of AGVs. The motion solution result is issued in a message format of uint16 and bool arrays, mainly comprising the front wheel steering angle, rear wheel speed and steering direction of the AGV.
As shown in Table 1, some important definitions of the vehicle body state quantities and the input/output quantities are given. When adopting this system, the user needs to measure the AGV body state quantities and set them in the system. The input/output quantities are the variables of the motion solution and do not need to be measured by the user: the input quantities (ω, v, a) come from the motion planning module or sensors, and the output quantities (ω_b, δ_f, δ_b) are used to control the motion of the AGV. In addition, for safety reasons, the ranges of the velocity v and the acceleration a may be limited within the system.
State quantity | Definition | Input/output quantity | Definition
L | Distance from the front wheel to the center of the rear wheel axle | ω | Angular velocity of the vehicle body
R_b | Diameter of the driving wheel | v | Linear velocity of the vehicle body
R_s | Diameter of the tachometer wheel | ω_b | Rotational speed of the driving wheel
D | Track width of the driving wheels | δ_f | Front wheel steering angle
l_b | Distance from the rear wheel to the vehicle center of gravity | δ_b | Steering angle of the driving wheel
l_f | Distance from the front wheel to the vehicle center of gravity | a | Acceleration of the vehicle
TABLE 1
The process of motion solution is described here taking the bicycle model as an example. The motion solution module receives the AGV speed sent by the motion planning module, comprising the linear velocity v, slip angle β and angular velocity ω of the AGV center of mass; the front wheel steering angle δ_f and the rear wheel speed v_b can then be obtained from the inverse kinematics formula:
δ_f = arctan(L·ω / (v·cos β)),   v_b = v·cos β
The motion solution result is issued to the actuators to complete control of the rear wheel speed and front wheel steering of the AGV.
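A minimal sketch of this inverse-kinematics step, assuming the standard kinematic bicycle model. The exact formula in the original is rendered as an image, so the expressions below are a plausible reconstruction consistent with the stated inputs (v, β, ω) and outputs (δ_f, v_b), not the patent's verbatim equation:

```python
import math

def solve_bicycle(v: float, beta: float, omega: float, L: float):
    """Convert the body speed (linear velocity v, slip angle beta, angular
    velocity omega) into a front wheel steering angle delta_f and a rear
    wheel speed v_b for a kinematic bicycle model. L is the distance from
    the front wheel to the rear axle center (Table 1)."""
    # The rear wheel travels along the body axis, so its speed is the
    # projection of the center-of-mass velocity onto that axis.
    v_b = v * math.cos(beta)
    # From omega = v * cos(beta) * tan(delta_f) / L, solve for delta_f.
    delta_f = math.atan2(L * omega, v_b) if v_b != 0.0 else 0.0
    return delta_f, v_b
```

With β = 0 this reduces to the familiar rear-axle form δ_f = arctan(L·ω/v), v_b = v.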
In another embodiment, the manual control module is used for AGV testing and for handling emergencies; mode switching and speed adjustment of the AGV are achieved through manual operation.
For this embodiment, manual control mainly refers to manually operating the key panel (manual mode key, automatic mode key, pause key), foot brake, hand brake, etc. on the AGV. When the foot brake or hand brake is engaged, the manual control module sends a signal to the brake driver, thereby completing braking. Manual control mainly facilitates AGV testing and the handling of emergencies; mode switching and speed adjustment of the AGV are achieved through manual operation.
In another embodiment, when an unexpected situation occurs, the obstacle avoidance module brakes in time once the distance between the AGV and an obstacle is smaller than a threshold, thereby guaranteeing the safety of the AGV.
In this embodiment, obstacle avoidance measurement and control is the bottommost safety guarantee of the AGVs: when the cloud-side multi-machine cooperative control planning makes a mistake, communication between a single AGV and the cloud side is interrupted, or another unexpected situation occurs, obstacle avoidance measurement and control can brake in time once the distance between the AGV and an obstacle is smaller than a threshold, guaranteeing the safety of the AGV. To be compatible with different obstacle avoidance sensors, the data format received by obstacle avoidance measurement and control is defined as sensor_msgs/Range. The logic of obstacle avoidance measurement and control is as follows:
as shown in fig. 4 (a) to 4 (b), the gray squares represent an obstacle, an expansion range x is provided around the obstacle, a safety area y is provided around the AGV, and y < x, and the distance between the obstacle and the AGV is d. The values of x and y may be determined based on response time, vehicle body speed, and acceleration. There are two main logics for the handling of obstacles: 1) In the case shown in fig. 4 (a), when the distance between the AGV and the obstacle is outside the expansion range x, i.e., d > x, the AGV detects the obstacle, and the processing is performed on the cloud side. Planning a motion path through multi-machine collaborative decision, controlling the AGV to detour the obstacle, and avoiding the AGV from entering an expansion range x as much as possible through the motion path; 2) In the situation shown in fig. 4 (b), when the distance between the AGV and the obstacle is within the safety range y, i.e. d < y, the multi-machine cooperative decision planning fails, and the AGV needs to be braked urgently to avoid collision. In addition, under the condition that the AGV needs lane changing or turning, when the cloud side multi-machine collaborative decision planning and the end side motion level planning module carry out path planning, the fact that the distance between the AGV and other obstacles is larger than y is also considered, and the safety logic of obstacle avoidance measurement and control is prevented from being triggered.
Since different AGVs may use different actuators (e.g., different types of motor drivers, lamps, speakers or other actuators), in order to improve the universality of the software system, the communication protocol between the controller and the actuators may be modified according to the actual situation. When a new actuator is adopted, the user only needs to rewrite the communication protocol code and pack the signals of the different actuators into the data types specified by the communication protocol for transmission. Table 2 shows some parameters included in the communication protocol: the controller sends signal values to the actuators, thereby controlling the AGV, and receives the signal values uploaded by the actuators, thereby obtaining the state of each actuator.
Controller to actuator | Actuator to controller
Key panel pause/manual/automatic | Battery level
Brake indicator lamp | Speed value of the travel driver
Left/right turn indicator lamp | Rotational speed value of the steering driver
Travel motor speed | Actual torque of the brake motor
Steering motor speed | Pedal throttle
Braking torque | Key panel pause/manual/automatic input
Music speaker | Brake/steering/travel driver exception
TABLE 2
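The two signal sets in Table 2 can be represented as two message types. A sketch with illustrative field names follows; the patent defines the signals but not a concrete encoding, so every identifier here is an assumption:

```python
from dataclasses import dataclass

@dataclass
class ControllerToActuator:
    """Signals the controller sends to the actuators (left column of Table 2).
    Field names are illustrative, not from the patent."""
    key_panel_mode: str        # "pause" | "manual" | "automatic"
    brake_lamp: bool
    left_turn_lamp: bool
    right_turn_lamp: bool
    travel_motor_speed: float  # commanded rotational speed
    steering_motor_speed: float
    brake_torque: float
    speaker_on: bool

@dataclass
class ActuatorToController:
    """Signals the actuators upload to the controller (right column of Table 2)."""
    battery_level: float
    travel_driver_speed: float
    steering_driver_speed: float
    brake_motor_torque: float
    pedal_throttle: float
    key_panel_input: str
    driver_fault: bool         # brake/steering/travel driver exception
```

Swapping in a new actuator then amounts to rewriting the serialization of these two types, as the text describes.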
As shown in fig. 2, the controller runs on the RPU and is developed based on a real-time operating system (RTOS). Because the RPU together with the RTOS offers high performance and high real-time capability, it has fast, bounded and deterministic interrupt response, guaranteeing deterministic response times and high throughput. The controller accesses the PL-side interfaces (CAN, serial ports, IO, etc.) through an AXI bus, realizing data interaction with actuators and peripherals such as encoders, drivers, photoelectric switches and the power module.
In another embodiment,
the motion modes comprise an open field, a multi-path channel and a single-path channel; when the AGV judges that the current area is an open field, the motion planning module receives a local cost map from the cloud to complete motion planning; for a multi-path channel, the motion planning module receives a space-time channel from the cloud to determine the time for completing a lane change or turn and the limits of the operating area; for a single-path channel, the AGV receives a pseudo-offset distance, generates a Bezier curve as the trajectory, and then generates preview points.
In this embodiment, the motion planning module consists of three parts: local Bezier-curve generation, preview-point generation, and preview-point tracking. The motion planning module receives the space-time channel sent by the cloud-side behavior-level planning module and the S2V_protocol message sent by the scheduling module, and parses the path sequence and path attributes from the protocol.
As shown in fig. 5, the paths have different attributes: an open field is an area in which AGVs may move freely; a multi-path channel is a path on which several AGVs may travel side by side; a single-path channel is a path through which only one AGV may pass. Different motion planning methods are adopted according to the path attribute.
When the AGV determines that the current path is a multi-path channel, it receives the space-time channel sent by the cloud side in order to perform a lane change. Fig. 6(a) shows the lane-change operation to be performed by the AGV, and the gray area in fig. 6(b) shows the corresponding space-time channel. The method applies constraints such as the target-state constraint, continuity constraint, safety constraint, and kinematic constraint to convert the Bezier-curve computation into a quadratic programming problem; the resulting smooth local Bezier curve is used as the motion trajectory of the AGV. The input of the method is the space-time channel, and the output is a Bezier curve, represented by a series of dense point coordinates and published in the geometry_msgs/Point array message format.
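The Bezier curve produced by the quadratic program is ultimately a set of control points evaluated densely along the parameter. A minimal sketch of that evaluation step, using De Casteljau's algorithm (the control points and sample count below are illustrative, not from the patent):

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        # Interpolate each adjacent pair of points at parameter t.
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

def sample_bezier(control_points, n=100):
    """Return n+1 dense points along the curve, analogous to the
    geometry_msgs/Point-array trajectory described in the text."""
    return [de_casteljau(control_points, i / n) for i in range(n + 1)]
```

The dense point list plays the role of the published trajectory; the preview-point step then thins it out.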
Next, preview-point generation is performed: the first and last points of the Bezier curve are retained, the remaining intermediate points are sampled every ten points, and the resulting sparse points are used as preview points. The preview points are also published in the geometry_msgs/Point array message format for preview-point tracking.
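The sampling rule described above (keep the first and last points, take every tenth point in between) can be sketched as:

```python
def preview_points(dense_points, step=10):
    """Downsample a dense Bezier polyline into sparse preview points:
    keep the first and last points, and every `step`-th point between."""
    if len(dense_points) <= 2:
        return list(dense_points)
    middle = dense_points[1:-1][::step]
    return [dense_points[0]] + middle + [dense_points[-1]]
```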
The method is compatible with different motion models and preview-point tracking methods, and is therefore suitable for different types of AGV. The motion model may be an Ackermann model, a bicycle model, a differential model, and so on, and the preview-point tracking method may be based on geometric tracking or on model prediction. Here, taking the bicycle model as an example, the Pure Pursuit algorithm and the Stanley algorithm from the geometric-tracking family are described.
FIG. 7 is a schematic diagram of a bicycle model, which is established based on the following assumptions:
1) Assuming that the vehicle moves in a two-dimensional plane (x, y directions), ignoring movement in the vertical direction;
2) Assuming that the left and right tires of the vehicle behave consistently, i.e., they have the same steering angle and rotational speed;
3) Assuming that the vehicle runs at low speed, neglecting front and rear axle load transfer;
4) Assuming that the vehicle is front-wheel driven, i.e., steering and speed are controlled by the front wheels;
5) The body and suspension system are assumed to be rigid.
In the figure, F is the front wheel, B is the rear wheel, and C is the vehicle centroid; O, the intersection of the front-wheel perpendicular OF and the rear-wheel perpendicular OB, is the instantaneous center of rotation of the vehicle; β is the slip angle, the angle between the direction of travel and the vehicle-body direction; L is the wheelbase, i.e., the distance between the center points of the front and rear wheels; v_f is the front-wheel speed, v is the vehicle travel speed, and v_b is the rear-wheel speed; δ_f is the front-wheel steering angle; r_c is the distance between the centroid and point O, and r_b is the distance between the rear wheel and point O.
The Pure Pursuit algorithm is described next. A preview point (g_x, g_y) is added to the bicycle model, and the Pure Pursuit algorithm is used to track it. In fig. 8, l_d is the distance between the rear wheel and the preview point (the look-ahead distance), α is the angle between the vehicle-body orientation and the preview point, δ_f is the front-wheel steering angle, O is the intersection of the front-wheel perpendicular OF and the rear-wheel perpendicular OB, r_b is the turning radius of the vehicle, and L is the wheelbase. Pure Pursuit takes the rear wheel as the tangent point and the vehicle-body orientation as the tangent line; by controlling the front-wheel angle, the vehicle travels along an arc such that the rear wheel passes through the preview point. From the bicycle model we obtain:
tan δ_f = L / r_b    (15)
According to the law of sines, we obtain:
l_d / sin(2α) = r_b / sin(π/2 − α),  i.e.  r_b = l_d / (2 sin α)    (16)
Substituting equation (16) into (15) yields:
δ_f = arctan(2L sin α / l_d)    (17)
The front-wheel angle δ_f that the AGV must apply at the current moment is thus obtained. The front-wheel angle is mainly influenced by the look-ahead distance l_d; different look-ahead distances should be selected at different vehicle speeds, so the look-ahead distance can be expressed as a linear function of the longitudinal speed, l_d = k·v_x, and the front-wheel angle formula becomes:
δ_f = arctan(2L sin α / (k·v_x))    (18)
the adjustment of the pureprursuit controller at this time becomes the adjustment coefficient k. The track tracing is smoother when k is larger, but the situation of understeer occurs at a sharp turn; trajectory tracking is more accurate with smaller k, but can also lead to instability or even oscillation of vehicle control. The look-ahead distance is usually constrained using a maximum and a minimum k, with the appropriate k being selected by constant debugging.
The Stanley algorithm is described next. Whereas Pure Pursuit designs the controller around the rear wheel as the reference point using only the lateral tracking error, the Stanley algorithm takes the front wheel as the reference point and combines the lateral tracking error with the heading tracking error.
As shown in fig. 9, path is the planned trajectory, e is the distance from the front wheel to the nearest point (p_x, p_y) on the planned trajectory (the lateral tracking error), and δ_e is the corresponding steering component; the smaller this distance, the better. d is the distance from the front wheel to the intersection of the vehicle heading with the tangent at the closest point, v is the front-wheel speed, and d is linearly related to v. Considering only the lateral tracking error (and ignoring the heading error), the larger the lateral tracking error, the larger the required front-wheel angle. Therefore:
δ_e = arctan(k·e / v)    (19)
where k is a scaling factor.
θ_e in fig. 9 is the heading tracking error, i.e., the angle between the vehicle-body heading θ and the tangent direction θ_p at the point (p_x, p_y):
θ_e = θ − θ_p    (20)
The desired vehicle-body direction coincides with the tangent direction at (p_x, p_y), so that the AGV moves along the planned trajectory. Without considering the lateral tracking error, the front-wheel direction should match the tangent direction at (p_x, p_y), that is:
δ_θ = θ_e    (21)
Finally, the sum of these two components is the total angle through which the front wheel should turn:
δ = δ_θ + δ_e = θ_e + arctan(k·e / v)    (22)
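The Stanley law, heading error plus the arctangent of the scaled lateral error, can be sketched as follows (the small epsilon guarding against division by zero at standstill is an added assumption, not part of the patent text):

```python
import math

def stanley_steer(theta_e, e, v, k, eps=1e-6):
    """Total front-wheel angle from equation (22):
    delta = theta_e + atan(k * e / v), where theta_e is the heading error,
    e the signed lateral error at the front axle, and v the front-wheel speed."""
    return theta_e + math.atan2(k * e, max(v, eps))
```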
the following equation is an AGV kinematics model derived from a bicycle model according to the front wheel angle δ f And rear wheel speed v b The speed of the AGV centroid can be obtained:
ω = v_b·tan δ_f / L,   β = arctan(l_r·tan δ_f / L),   v = v_b / cos β    (23)

where l_r denotes the distance from the rear axle to the centroid.
wherein v is the linear velocity of the centroid, β is the slip angle (i.e., the direction of motion of the AGV relative to the body), and ω is the angular velocity about the centroid.
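Under the illustrative assumption that the centroid sits a distance l_r from the rear axle (defaulting here to the midpoint L/2), the centroid velocity can be computed as:

```python
import math

def centroid_velocity(v_b, delta_f, L, l_r=None):
    """Centroid speed, slip angle, and yaw rate of the kinematic bicycle model.
    The centroid position l_r (distance from the rear axle) is an assumption
    for illustration; it defaults to L/2."""
    if l_r is None:
        l_r = L / 2.0
    beta = math.atan((l_r / L) * math.tan(delta_f))  # slip angle
    v = v_b / math.cos(beta)                         # centroid linear speed
    omega = v_b * math.tan(delta_f) / L              # yaw rate
    return v, beta, omega
```

With zero steering angle the model degenerates to straight-line motion, as expected.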
The result of the preview-point tracking computation is the velocity vector of the AGV body, which is published to the controller in the geometry_msgs/Twist format.
When the AGV determines that the current area is an open field, it receives the local cost map sent by the cloud side to complete motion planning. The open field allows the AGV to move freely, and for this case the framework is compatible with different path planning algorithms, such as the Dynamic Window Approach (DWA), HRVO (Hybrid Reciprocal Velocity Obstacle), or the A* algorithm. When the AGV determines that the current path is a single-path channel and its actual route deviates from the center of the path, it receives the quasi-offset distance sent by the cloud side. As with the multi-path channel, the AGV then generates a vehicle-body velocity vector through local Bezier-curve generation, preview-point generation, and preview-point tracking, and publishes it to the controller.
As shown in fig. 2, the motion planning module runs on the APU of the PS side and is developed on Linux. The APU has low power consumption, high energy efficiency, and high program-execution efficiency, and can compute the Bezier curve and track the preview points in real time. In addition, the motion planning module interacts with the cloud-side behavior-level planning module through the 5G or WiFi interface of the Zynq development board.
In a further embodiment of the invention,
a dense point cloud map is stored at the server end, while a voxel-downsampled sparse map and a map for positioning are stored at the AGV end.
To address the limited storage space of a single AGV's computing device, the system stores the dense point cloud map at the server end and the voxel-downsampled sparse map and positioning map at the AGV end, thereby avoiding storing a large amount of point cloud data on the AGV.
In another embodiment, as shown in fig. 10, a detailed software flowchart of the single sensing module is designed on the ROS (Robot Operating System) framework; unless otherwise specified, messages transmitted between software nodes use ROS standard message formats. The single sensing module mainly implements self-positioning and environment perception for a single AGV.
For self-positioning of a single AGV, a tightly coupled iterated Kalman filter fuses the results of multiple sensors. The filter comprises a prediction part and a measurement-update part. The input of the prediction part is the odometry obtained by dead reckoning from the wheel-speed meter and inertial sensor, and its output is a preliminary AGV positioning result, published in the geometry_msgs/Pose format for the subsequent map registration. The input of the measurement update is the positioning result of map registration, and its output is the final fused positioning result, also published in the geometry_msgs/Pose format and used to localize the environment-perception results of the single AGV.
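The predict/update structure of the filter can be illustrated with a deliberately simplified scalar Kalman filter; the real module is iterated and multi-dimensional, so this is only a sketch of the data flow (odometry increment u, map-registration measurement z; all names are illustrative):

```python
def kf_predict(x, P, u, Q):
    """Prediction step: propagate the state with the odometry increment u
    and inflate the covariance by the process noise Q."""
    return x + u, P + Q

def kf_update(x, P, z, R):
    """Measurement update: fuse the map-registration pose z with noise R."""
    K = P / (P + R)                       # Kalman gain
    return x + K * (z - x), (1 - K) * P   # corrected state, reduced covariance
```

The update pulls the predicted state toward the measurement and shrinks the covariance, which is exactly the correction role map registration plays here.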
Second, a VoxelGrid voxel filter performs point cloud downsampling to reduce the number of points. The VoxelGrid filter divides the point cloud into three-dimensional cubes (voxels) and, within each voxel, represents all points by their centroid, thereby achieving downsampling; this effectively speeds up the subsequent map-registration program. Here the pcl::VoxelGrid class of the PCL library is used: the input is the point cloud published by the lidar in the sensor_msgs/PointCloud2 format, and the output is the downsampled point cloud, also published in the sensor_msgs/PointCloud2 message format.
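The centroid-per-voxel rule implemented by pcl::VoxelGrid can be sketched in a few lines (a pure-Python stand-in for illustration, not the PCL implementation):

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Replace all points falling in the same cubic voxel by their centroid,
    mirroring the behaviour described for the VoxelGrid filter."""
    buckets = defaultdict(list)
    for p in points:
        # Integer voxel index for each coordinate.
        key = tuple(int(c // voxel_size) for c in p)
        buckets[key].append(p)
    # One centroid per occupied voxel, in a deterministic order.
    return [tuple(sum(c) / len(bucket) for c in zip(*bucket))
            for bucket in (buckets[k] for k in sorted(buckets))]
```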
Map registration is then performed to obtain a high-accuracy positioning result. The Iterative Closest Point (ICP) algorithm is adopted: it iteratively solves the correspondences and relative pose between two point clouds for accurate matching. The registration_icp function of the Open3D library is used; the inputs are the downsampled point cloud, the preliminary positioning result predicted by the Kalman filter, and the pre-built sparse point cloud of the global static environment, and the output is the high-accuracy positioning result used in the Kalman filter update.
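The inner loop of ICP alternates nearest-neighbor correspondence with a pose update. The sketch below is deliberately simplified to a 2D, translation-only update (full ICP, as in Open3D's registration_icp, also estimates a rotation, typically via an SVD step):

```python
def icp_translation(src, dst, iters=10):
    """Simplified translation-only ICP: repeatedly (1) match each source
    point to its nearest destination point and (2) shift the source by
    the mean residual.  Returns the accumulated translation (tx, ty)."""
    tx = ty = 0.0
    pts = [list(p) for p in src]
    for _ in range(iters):
        dx = dy = 0.0
        for p in pts:
            # Nearest-neighbor correspondence by squared distance.
            q = min(dst, key=lambda d: (d[0] - p[0]) ** 2 + (d[1] - p[1]) ** 2)
            dx += q[0] - p[0]
            dy += q[1] - p[1]
        dx /= len(pts)
        dy /= len(pts)
        tx += dx
        ty += dy
        for p in pts:  # apply the update before re-matching
            p[0] += dx
            p[1] += dy
    return tx, ty
```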
For environment perception of a single AGV, the camera and lidar are first jointly calibrated, and the lidar point cloud is transformed into the camera coordinate system, achieving spatial synchronization of the sensors. The calibration can use a 2D-3D correspondence method: several rectangular paperboards serve as calibration boards, and a RANSAC-based PnP (Perspective-n-Point) algorithm solves the 6-DoF pose between the camera and the lidar; the outputs are the camera-lidar extrinsic parameters and the camera intrinsic parameters, recorded in the cv::Mat format.
Second, the camera publishes RGB images in the sensor_msgs/Image message format and the lidar publishes point clouds in the sensor_msgs/PointCloud2 message format; data alignment between the point cloud and the RGB image is achieved by combining the two. The alignment procedure is as follows: 1) match each RGB image with the point cloud message having the closest timestamp, one image per point cloud message; 2) project the point cloud onto the RGB image to form a depth map.
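Step 2 of the alignment procedure, projecting points already expressed in the camera frame onto the image plane with the pinhole intrinsics (fx, fy, cx, cy), can be sketched as follows (keeping the nearest depth when several points land on one pixel; the function and parameter names are illustrative):

```python
def project_to_depth(points_cam, fx, fy, cx, cy, width, height):
    """Project lidar points (already in the camera frame) onto the image
    plane, keeping the nearest depth per pixel: the sparse depth map
    formed in step 2 of the alignment procedure."""
    depth = {}
    for X, Y, Z in points_cam:
        if Z <= 0:
            continue  # point is behind the camera
        u, v = int(fx * X / Z + cx), int(fy * Y / Z + cy)
        if 0 <= u < width and 0 <= v < height:
            if (u, v) not in depth or Z < depth[(u, v)]:
                depth[(u, v)] = Z
    return depth
```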
The data-alignment result is a sparse depth map, published in the sensor_msgs/Image message format. Depth completion is then performed: the problem is cast as a deep-learning regression task, and a CNN is trained with a self-supervised method. Its inputs are the sparse depth map and the RGB image, and its output is a dense point cloud, published in the sensor_msgs/PointCloud2 message format.
Next, the background map is removed using the octree-based change-detection algorithm (octree_change_detection) in the PCL library, which performs spatial change detection on the point cloud to filter out the background map and obtain the point cloud of obstacles in the scene. The inputs are the dense point cloud from depth completion, the high-accuracy positioning result from map registration, and the sparse point cloud of the global static environment; the output, the obstacle point cloud, is published in the sensor_msgs/PointCloud2 message format.
Finally, 3D object detection is performed on the obstacle point cloud. First, 2D object detection is run on the aligned RGB image with YOLOv3; then the 3D points are looked up in reverse from each 2D detection box to obtain the quadrangular-pyramid (frustum) region in which they lie; finally, 3D detection is completed with Euclidean-distance clustering and a scoring mechanism. The inputs are the obstacle point cloud and the RGB image, and the outputs are the detection results, including the position, length-width-height, rotation angle, category, bounding-box descriptor, number of points, and detection score of each object's bounding box. The detection results are uploaded to the perception fusion module in the Obj_info message format, defined in Table 3:
Obj_info message content | Obj_info message format
Position of bounding box | geometry_msgs/Pose[10]
Length, width and height | uint16
Rotation angle | geometry_msgs/Twist[10]
Category | uint16
Bounding box descriptor | uint16
Number of point clouds | uint16
Detection score | uint16
TABLE 3
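The reverse lookup that precedes the clustering step, selecting the lidar points whose projection falls inside a 2D detection box (i.e., the frustum behind the box), can be sketched as follows (function and parameter names are illustrative):

```python
def points_in_2d_box(points_cam, box, fx, fy, cx, cy):
    """Keep the points (in the camera frame) whose image projection falls
    inside a 2D detection box (u0, v0, u1, v1): the quadrangular-pyramid
    region described for the 3D detection step."""
    u0, v0, u1, v1 = box
    picked = []
    for X, Y, Z in points_cam:
        if Z <= 0:
            continue  # behind the camera
        u, v = fx * X / Z + cx, fy * Y / Z + cy
        if u0 <= u <= u1 and v0 <= v <= v1:
            picked.append((X, Y, Z))
    return picked
```

Euclidean clustering and scoring would then run only on the points returned for each box.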
As shown in fig. 2, the data alignment, obstacle localization, AGV self-positioning, and cloud-to-end communication of the single sensing module run on the APU of the PS side, while depth completion and object detection run on the PL side.
In another embodiment, within the perception fusion module, the detection and tracking functions that require deep learning run on the HiPU chip at the server end, while the traditional object-detection function and the sensor data association and fusion functions mainly run on the CPU.
In another embodiment, the system is specially designed for problems such as communication safety and storage limits. Both decision planning and perception on the AGV need to interact with the cloud, so low communication latency and high bandwidth are required to guarantee safety. However, fluctuations in network environments such as 5G or WiFi make it difficult to guarantee communication quality; in addition, perception at the AGV end uses a lidar, which requires storing and processing large point clouds, while on-board storage is limited. To solve these problems, the following architecture design is proposed:
First, to address the limited transmission bandwidth, single-AGV positioning and environment perception are performed on board, so that what is uploaded to the cloud is not the point cloud but processed mid- and high-level semantic information (the three-dimensional position, size, speed, covariance matrix, category, uncertainty, timestamp, and so on of each obstacle), which greatly reduces the transmitted data volume. In addition, centralized lane-change decision planning is carried out in the cloud, and only a compact space-time channel is sent down, which likewise avoids transmitting large amounts of point cloud data.
Second, to address decision-planning failures caused by communication delay, the invention proposes that when a cooperative AGV's information is not received within the time-limit window, that AGV is treated as non-cooperative and the decision is completed accordingly; this serves as the prediction step of a filtering operation, and the prediction is corrected once the cooperative AGV's information is received. The method preserves the information-exchange advantage of cooperative AGVs while still completing reasonable decision planning under poor communication conditions.
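The time-limit-window fallback can be sketched as a simple freshness check on the last message timestamp (the 0.2 s window is an illustrative default, not a value from the patent):

```python
def classify_peer(last_msg_time, now, window=0.2):
    """Fallback described above: a cooperative AGV whose last message is
    older than the time-limit window is treated as non-cooperative until
    fresh information arrives, at which point the prediction is corrected."""
    fresh = (now - last_msg_time) <= window
    return "cooperative" if fresh else "non-cooperative"
```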
Although the embodiments of the present invention have been described above with reference to the accompanying drawings, the present invention is not limited to the above-described embodiments and application fields, and the above-described embodiments are illustrative, instructive, and not restrictive. Those skilled in the art, having the benefit of this disclosure, may effect numerous modifications thereto without departing from the scope of the invention as defined by the appended claims.

Claims (10)

1. A multi-AGV collaborative work system, comprising: a plurality of AGVs, controllers, a motion planning module, a behavior-level planning module, a path resource allocation module, a sensing module, and a scheduling module, wherein the sensing module comprises a single sensing module and a perception fusion module; wherein:
the controller receives an instruction from the motion planning module and feeds back the running state of the AGV to the motion planning module;
the motion planning module receives the space-time channel sent by the behavior-level planning module and the motion mode sent by the scheduling module, and sends the vehicle body speed to the controller;
the behavior-level planning module receives the semantic-information-level behaviors sent by the path resource allocation module, determines whether each AGV starts changing lanes immediately or first travels a distance and then changes lanes, generates a space-time channel, and sends it to the motion planning module;
the path resource allocation module receives the scene real-time information stream from the perception fusion module, reasonably allocates path resources according to preset rules, and sends semantic-information-level behaviors to the behavior-level planning module;
the single sensing module detects the sizes and types of AGVs, pedestrians, and goods in the scene and transmits the detection results to the perception fusion module;
the perception fusion module fuses the static environment scenery, real-time AGV perception information, and the real-time monitoring results of sensors installed in the environment to obtain a global dynamic scene map, and transmits the scene real-time information stream to the path resource allocation module;
the scheduling module is responsible for task allocation, multi-AGV cooperative path planning, and traffic management control.
2. The system of claim 1, wherein:
the AGV hardware architecture of the multi-AGV collaborative work system adopts a Zynq UltraScale+ based development platform comprising a processing system (PS) and programmable logic (PL); the PS comprises a quad-core APU and a dual-core RPU, and the PL is an FPGA; the server-side hardware architecture of the system comprises a CPU and a HiPU, wherein the HiPU accelerates neural-network inference and is suited to processing large numbers of images and point clouds in real time.
3. The system of claim 1, wherein:
the controller, the motion planning module, and the single sensing module run on a single AGV, while the behavior-level planning module, the path resource allocation module, and the perception fusion module run on the server.
4. The system of claim 1, wherein the controller comprises a motion resolution module, a manual control module, and an obstacle avoidance module.
5. The system of claim 4, wherein the motion resolution module converts the speed into specific driver commands based on a motion model of the vehicle body for different types of AGVs.
6. The system of claim 4, wherein the manual control module is used for AGV testing and handling emergencies, and realizes AGV mode switching and speed adjustment through manual operation.
7. The system of claim 4, wherein, when an unexpected event occurs and the distance between the AGV and an obstacle is smaller than a threshold, the obstacle avoidance module brakes in time to ensure the safety of the AGV.
8. The system of claim 1, wherein:
the motion modes comprise an open field, a multi-path channel, and a single-path channel; when the AGV determines that the current area is an open field, the motion planning module receives a local cost map from the cloud to complete motion planning; for a multi-path channel, the motion planning module receives a space-time channel from the cloud to determine when to complete the lane change or turn and the limits of the operating area; for a single-path channel, the AGV receives the quasi-offset distance, generates a Bezier curve as the trajectory, and then generates the preview points.
9. The system of claim 1, wherein:
a dense point cloud map is stored at the server end, and a voxel-downsampled sparse map and a map for positioning are stored at the AGV end.
10. The system of claim 2, wherein:
in the perception fusion module, the detection and tracking functions using deep learning run on devices represented by the HiPU chip at the server end, while the traditional object-detection function and the sensor data association and fusion functions run on devices represented by the CPU.
CN202211323178.0A 2022-10-26 2022-10-26 Many AGV collaborative work system Pending CN115629609A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211323178.0A CN115629609A (en) 2022-10-26 2022-10-26 Many AGV collaborative work system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211323178.0A CN115629609A (en) 2022-10-26 2022-10-26 Many AGV collaborative work system

Publications (1)

Publication Number Publication Date
CN115629609A true CN115629609A (en) 2023-01-20

Family

ID=84907156

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211323178.0A Pending CN115629609A (en) 2022-10-26 2022-10-26 Many AGV collaborative work system

Country Status (1)

Country Link
CN (1) CN115629609A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115951691A (en) * 2023-03-13 2023-04-11 江西丹巴赫机器人股份有限公司 Trajectory planning method and system for shuttle vehicle of dense warehouse under 5G communication


Similar Documents

Publication Publication Date Title
Kato et al. Autoware on board: Enabling autonomous vehicles with embedded systems
Brown et al. Coordinating tire forces to avoid obstacles using nonlinear model predictive control
US20210192748A1 (en) Prediction on top-down scenes based on object motion
US20200211394A1 (en) Collision avoidance system
Wen et al. CL-MAPF: Multi-agent path finding for car-like robots with kinematic and spatiotemporal constraints
CN113015664A (en) Perception anticollision
CN116323353A (en) Allocation of security system resources based on probability of intersection
US10832439B1 (en) Locating entities in a mapped environment
WO2021243033A1 (en) Trajectory generation using lateral offset biasing
CN111665738A (en) In-loop simulation system and information processing method and device thereof
CN117794803A (en) Vehicle trajectory control using tree search and/or proxy filtering with responsive proxy prediction
AU2021334408B2 (en) Method and apparatus for coordinating multiple cooperative vehicle trajectories on shared road networks
US20230063845A1 (en) Systems and methods for monocular based object detection
CN114578772B (en) AGV cluster control system design framework and method
US20230008285A1 (en) Vehicle control using directed graphs
CN115629609A (en) Many AGV collaborative work system
Huang et al. Trajectory planning in Frenet frame via multi-objective optimization
WO2024049925A1 (en) Trajectory prediction based on a decision tree
Cui et al. An enhanced safe and reliable autonomous driving platform using ROS2
Du et al. A Dynamic collaborative planning method for multi-vehicles in the autonomous driving platform of the DeepRacer
Sankalprajan et al. Analysis of Computational Need of 2D-SLAM Algorithms for Unmanned Ground Vehicle
WO2021225822A1 (en) Trajectory classification
CN112230659A (en) Method for accurately planning movement track, intelligent control equipment and automatic driving vehicle
CN115443233A (en) Teleoperation for cooperative vehicle guidance
YU et al. Vehicle Intelligent Driving Technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination