CN115065718A - Multi-agent cooperative control algorithm verification system based on optical indoor positioning - Google Patents


Info

Publication number
CN115065718A
CN115065718A
Authority
CN
China
Prior art keywords
agent, module, data, control, subsystem
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210472362.5A
Other languages
Chinese (zh)
Inventor
张利国
许世聪
邓恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN202210472362.5A
Publication of CN115065718A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16: Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04L69/164: Adaptation or special uses of UDP protocol

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a multi-agent cooperative control algorithm verification system based on optical indoor positioning, belonging to the field of multi-agent control. The system comprises: an optical indoor positioning subsystem, which acquires the position information of the target agents in the space and sends it to a local area network via the UDP (User Datagram Protocol) protocol; a communication subsystem, which realizes data sharing and the sending of control instructions; and a control subsystem, which receives the position information of the target agents, hosts the design of the multi-agent cooperative control algorithm to be verified, and executes the control instructions. The system reduces interference from the external environment on the algorithm and can safely, conveniently and efficiently verify the feasibility, real-time performance and stability of multi-agent cooperative control algorithms. It is of great significance for improving the performance of such algorithms and for accelerating the translation of theoretical results on multi-agent systems into engineering practice.

Description

Multi-agent cooperative control algorithm verification system based on optical indoor positioning
Technical Field
The invention discloses a multi-agent cooperative control algorithm verification system based on optical indoor positioning, and relates to the field of multi-agent control.
Background
In recent years, with the continuous development of computer science, network communication and related technologies, multi-agent cooperative control has become one of the research hotspots in the control field, with broad application prospects in the military, transportation, multi-robot and sensor-network domains. At present, research on multi-agent systems focuses mostly on autonomous cooperative control algorithms, distributed swarm-interaction technologies and the like, while research on evaluating and verifying the algorithms is scarce. Moreover, most existing multi-agent cooperative control demonstrations and verifications are conducted outdoors, where experiments are costly, strongly affected by environmental changes, and difficult to repeat. By comparison, indoor testing offers advantages that outdoor testing cannot match, such as low cost, convenient observation and real-time display of experimental data, fast design iteration, and low dependence on the surrounding and hardware environments; it therefore allows the cooperative control algorithms of a multi-agent system to be verified repeatedly and tuned. This is of great significance for accelerating the translation of theoretical results on multi-agent systems into engineering practice, improving the performance of multi-agent cooperative control algorithms, and evaluating the feasibility of algorithm research.
Disclosure of Invention
Based on the problems described in the background art, the invention designs a multi-agent cooperative control algorithm verification system based on optical indoor positioning. The system verifies the feasibility, real-time performance and stability of multi-agent cooperative control algorithms and accelerates the translation of theoretical results on multi-agent systems into engineering practice. The multi-agent cooperative control problem mainly comprises the consistency problem, the formation problem, the clustering problem, the flocking problem, the aggregation problem and others. The consistency problem is the most basic and important problem in coordinated control of a system and is the foundation of coordination and cooperation among agents. Consistency means that, as time goes on, all agents in the multi-agent system communicate and interact with each other according to some control rule, so that the state quantities of all agents approach the same value.
The invention designs a multi-agent cooperative control algorithm verification system based on optical indoor positioning, which is shown in figure 1 and mainly comprises an optical indoor positioning subsystem, a communication subsystem and a control subsystem.
The functions of each subsystem are roughly as follows. The optical indoor positioning subsystem acquires the position information of the agents in the space and broadcasts it on a local area network via the UDP (User Datagram Protocol) protocol. The communication subsystem is the bridge for data transmission within the system; it connects the optical indoor positioning subsystem and the control subsystem to the same local area network, realizing data sharing, and also carries the control instructions issued by the control subsystem. The control subsystem receives the position information of the agents and hosts the design of the multi-agent cooperative control algorithm to be verified. The optical indoor positioning subsystem thus supplies the algorithm under verification with the agents' position information, while the algorithm's output, namely the control instructions, is sent to the controlled objects through the communication subsystem for execution.
The optical indoor positioning subsystem comprises a data acquisition module, a data identification module and a data sending module. The hardware structure mainly comprises a plurality of external optical cameras, a switch and a server. The data acquisition module runs in the processor of the infrared optical camera, and the data identification module and the data transmission module run in the server.
The data acquisition module mainly acquires the position information of a target point in a space, and comprises an image acquisition unit and an image processing unit, and the specific implementation steps are as follows:
s1: a two-dimensional image is obtained using an image acquisition unit. The acquisition of the two-dimensional image is a precondition for establishing three-dimensional coordinate information of the target point in space. Through the pinhole model, the infrared optical camera captures a target point in a three-dimensional space (world coordinate system) and projects the target point into a two-dimensional plane (pixel coordinate system) of a camera picture, wherein the corresponding relation is as follows:
p = K[R | T]P_W = MP_W (1)

where P_W = (X_W, Y_W, Z_W) is the coordinate of a target point in the world coordinate system, and p = (u, v) is the coordinate of the corresponding point in the pixel coordinate system. K is the camera intrinsic matrix and [R | T] the camera extrinsic matrix; M = K[R | T] is called the projection matrix, which describes the mapping from points in the world coordinate system to points in the pixel coordinate system.
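A minimal numerical sketch of the projection in Eq. (1). The intrinsic matrix K and the extrinsic pose [R | T] below are illustrative values, not calibration results from the patent's system:

```python
import numpy as np

# Pinhole projection p = K [R|T] P_W = M P_W  (Eq. 1).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])    # intrinsics: fx, fy, principal point
R = np.eye(3)                            # rotation: camera axes aligned with world
T = np.array([[0.0], [0.0], [2.0]])      # translation: camera 2 m from the origin

M = K @ np.hstack([R, T])                # 3x4 projection matrix

P_W = np.array([0.5, 0.25, 0.0, 1.0])    # homogeneous world point (X_W, Y_W, Z_W, 1)
p_h = M @ P_W                            # homogeneous pixel coordinates
u, v = p_h[0] / p_h[2], p_h[1] / p_h[2]  # divide by depth to obtain (u, v)
print(u, v)                              # pixel location of the target point
```

Site calibration fixes the world frame in which P_W is expressed; camera calibration supplies K, R and T.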
In particular, before this step is performed, calibration and camera calibration work needs to be performed on the experimental site. The field calibration aims at establishing a coordinate origin for an experimental field and establishing a world coordinate system; camera calibration is actually the process of solving for internal and external parameters of the camera. The system uses the traditional camera calibration method, a calibration reference object is placed in front of a camera to be calibrated, and the internal and external parameters of the camera are obtained by a mathematical theory calculation method by utilizing the corresponding relation between the image coordinate of a specific point in the calibration reference object and the world coordinate.
S2: the specific operation flow of the process of performing feature extraction and finishing image processing on the two-dimensional image information obtained by the image acquisition unit through the image processing unit is shown in fig. 2, and comprises the steps of reading the RGB values of each pixel of the image, converting the RGB values into a gray level image, thresholding the gray level, smoothing gaussian, acquiring a contour, extracting feature points and outputting feature point coordinates. And the output characteristic point coordinates are the position information of the target point. Thus, the position information acquisition function is completed.
The data identification module identifies and packages the target-point position information obtained by the data acquisition module, converting it into per-agent position information. To facilitate control of the multi-agent system, each agent must be labelled. In the invention, several reflective marker points in different arrangements are fixed on top of each agent body; these are the target points, as shown in fig. 3. The data identification module distinguishes the agent numbers according to the different arrangements of the marker points.
The data sending module acts as a UDP server and broadcasts the position information of each agent, as packaged by the data identification module, to the local area network at a fixed frequency via the UDP broadcast protocol.
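A sketch of the data sending module's role: a UDP sender pushing each agent's packaged position onto the network. The JSON payload layout is an assumed stand-in, since the patent does not specify a wire format, and the demonstration runs over loopback; a deployment would send to the LAN broadcast address with SO_BROADCAST enabled and a timer providing the fixed frequency:

```python
import json
import socket

def send_positions(sock, positions, addr):
    """Serialize the per-agent position record and send it as one UDP datagram."""
    payload = json.dumps(positions).encode("utf-8")
    sock.sendto(payload, addr)
    return payload

# Loopback demonstration in place of a LAN broadcast.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))               # ephemeral port stands in for a fixed one
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
positions = {"robot1": [0.5, 0.25, 0.0]}       # agent ID -> (x, y, z), units assumed metres
send_positions(send_sock, positions, recv_sock.getsockname())
data, _ = recv_sock.recvfrom(1024)
print(json.loads(data))                        # the shared position record
send_sock.close()
recv_sock.close()
```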
The control subsystem consists of an upper computer (PC) equipped with the ROS software framework and the MATLAB/Simulink environment, several Turtlebot3 mobile robots as the hardware part, and the algorithm verification module as the software part. Through network configuration, the PC is set as the control host and the mobile robots as slaves, realizing master-slave control. In particular, the ROS Master node runs on the control host, and the mobile robots are the agents of the system. The control flow of the control subsystem is shown in fig. 4 and mainly comprises the Data Receive node, the Simulink Mode node and the Agent GUI node.
The Data Receive node receives the agent position information transmitted by the indoor positioning subsystem, converts it into ROS messages, and publishes them on topics in the ROS network.
The Simulink Mode node and the Agent GUI node form the algorithm verification module software. The multi-agent cooperative control algorithm to be verified is designed in the Simulink Mode node; the node's input is the agent position information, and its output is the agent control quantities, specifically angular velocity and linear velocity. The algorithm design process is as follows:
Consider a multi-agent system consisting of n agents together with a virtual leader. The dynamic model of each agent is:

ẋ_i(t) = v_i(t), v̇_i(t) = u_i(t) (2)

where x_i(t), v_i(t) and u_i(t) ∈ ℝⁿ are respectively the position, velocity and acceleration of agent i at time t; u_i(t) is also referred to as the control input of the system; i = 1, 2, …, n indexes the agents in the system; and ℝⁿ denotes the set of n-dimensional real vectors.
The model of the system's virtual leader is:

ẋ_0(t) = v_0 (3)

where x_0(t) is the position of the virtual leader and v_0 is its velocity value, a fixed constant, also referred to as the desired velocity of the multi-agent system. "Virtual leader" means that it is not necessarily an agent; it may be a marker whose role appears only in the consistency protocol, where it constrains the velocities of the follower agents.
For the multi-agent system (2), the consistency protocol adopted by the invention, i.e. the multi-agent cooperative control algorithm to be verified, is:

u_i(t) = α(S_i(t) − S) + Σ_{j∈N_i} a_ij (v_j(t) − v_i(t)) + (v_0 − v_i(t)) (4)

where α, the control gain, is a positive constant; a_ij is the neighbor-matrix element of the mathematical topology graph abstracting the multi-agent system; agent j denotes a neighbor of agent i, i.e. agent i can obtain the information of agent j; N_i is the neighbor set of agent i; S_i(t) is the distance at time t between agent i and the agent in front of it; and S is the desired spacing of the multi-agent system.

In this consistency protocol, the first term (S_i(t) − S) keeps every agent at the prescribed desired distance from the agent in front of it; the second term (v_j − v_i) ensures velocity consensus among all followers in the multi-agent system; and the third term (v_0 − v_i) drives every agent's velocity to that of the virtual leader, i.e. the desired velocity.
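A minimal numerical sketch of a protocol of this form, assuming a 3-agent chain topology, unit gains, and explicit Euler integration (all illustrative choices, not taken from the patent). S_i(t) is taken as the gap to the agent in front, as in the text:

```python
import numpy as np

alpha, S, v0, dt = 1.0, 0.5, 0.1, 0.01
x = np.array([1.2, 0.5, 0.0])              # positions; agent 0 leads the chain
v = np.zeros(3)                            # all agents start at rest

for _ in range(20000):                     # 200 s of simulated time
    u = np.zeros(3)
    for i in range(3):
        if i > 0:                          # spacing term: S_i(t) = x[i-1] - x[i]
            u[i] += alpha * (x[i - 1] - x[i] - S)
            u[i] += v[i - 1] - v[i]        # velocity consensus with the neighbor ahead
        u[i] += v0 - v[i]                  # track the virtual leader's desired speed
    v += dt * u                            # second-order dynamics (Eq. 2), Euler step
    x += dt * v

print(v, x[0] - x[1], x[1] - x[2])         # velocities and inter-agent gaps
```

As time passes, every velocity approaches the desired speed v0 and every gap approaches the desired spacing S, which is exactly the behaviour the three terms of the protocol are designed to produce.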
The Agent GUI node serves two functions: first, it can invoke multiple Simulink Mode nodes, improving the efficiency of algorithm verification; second, it graphically and intuitively displays the actual effect of the multi-agent cooperative control algorithm under verification in the system, making it convenient to tune the algorithm to its optimum.
The structural framework of the Agent GUI node is shown in FIG. 5, which comprises a state monitoring module, a live-action display module, a data storage module, a parameter management module, a communication module and a start-stop control module. The main interface of the Agent GUI node is shown in FIG. 6.
And the state monitoring module is used for detecting and displaying state data of the multiple intelligent agents, wherein the state data comprises speed information, acceleration information and adjacent intelligent agent distance information. The system simultaneously monitors and displays the data of the intelligent agent in a graphical and tabular form.
And the live-action display module is used for abstracting the position information of the multi-agent into graphic information and displaying the effect obtained by the multi-agent cooperative control algorithm to be verified in a two-dimensional scene.
And the data storage module is used for storing data information of the multiple intelligent agents in the time period from the beginning to the end of the experiment, wherein the data information comprises acceleration information, speed information and intelligent agent spacing information, and the data is convenient to analyze after the experiment.
And the parameter management module is used for managing all parameters of the system operation, including selection of a control algorithm (used when a plurality of cooperative control algorithms are verified), setting of the control parameters and setting of communication parameters.
And the communication module is used for establishing a communication connection with the ROS Master so as to acquire the agents' state information and send control commands.
And the start-stop control module is used for starting, stopping and quitting the system.
The communication subsystem establishes the local area network of the algorithm verification system through a router. The optical indoor positioning subsystem and the control-subsystem host access the LAN through Ethernet cables, while the control-subsystem slaves, i.e. the agents, access it through their on-board Bluetooth modules, so that data can be transmitted and received over the LAN.
The communication subsystem can be functionally divided into a data transmission layer, an ROS network layer and an agent bottom layer, as shown in fig. 7, which is briefly described below:
and S1, the data transmission layer comprises a data sending module, namely a UDP server side, in the optical indoor positioning subsystem and a data receiving node, namely a UDP client side, in the control subsystem.
S2, in the optical indoor positioning subsystem, the data sending module sends the acquired position information of the intelligent agent to the local area network through a UDP protocol.
S3: a data receiving node is created on the control host of the control subsystem, as shown in fig. 4. Acting as a UDP client, it receives the agents' position information via the UDP protocol, packages the state information into ROS messages, and publishes them to the ROS network layer on the corresponding topics (/robotID_pos, /robotID_ang).
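A hedged sketch of this step's transport side: a UDP client reads one positioning packet and regroups it per agent, keyed by the topic names it would be republished under. The JSON packet layout is an assumed stand-in for the positioning system's wire format, and real code would publish via rospy rather than return a dict:

```python
import json
import socket

def receive_agent_states(sock):
    """Receive one datagram and map each agent's state to its ROS topic names."""
    data, _ = sock.recvfrom(4096)
    packet = json.loads(data)                # e.g. {"robot1": {"pos": ..., "ang": ...}}
    topics = {}
    for agent_id, state in packet.items():
        topics[f"/{agent_id}_pos"] = state["pos"]
        topics[f"/{agent_id}_ang"] = state["ang"]
    return topics

# Loopback demonstration in place of the LAN broadcast.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(json.dumps({"robot1": {"pos": [0.5, 0.2], "ang": 1.57}}).encode(),
                 recv_sock.getsockname())
topics = receive_agent_states(recv_sock)
print(topics)                                # per-topic payloads ready to publish
send_sock.close()
recv_sock.close()
```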
S4: in the Simulink Mode node, the agents' position information is subscribed from those topics as the input of the algorithm to be verified. Meanwhile, the algorithm's outputs are packed into ROS messages and published on different ROS topics according to the controlled object (/robot1/cmd_vel, …, /robot9/cmd_vel), realizing control of the agents.
S5: in the ROS network layer, the ROS Master node is maintained by the control host of the control subsystem.
S6: communication at the agent's bottom layer. Each agent has a two-layer structure, shown in fig. 8. The upper layer is a Raspberry Pi 3B main control board running the Ubuntu Mate system with the ROS operating system installed; it communicates with the control host by subscribing to ROS topics. The lower layer is an OpenCR motion control board acting as the driver board, which directly controls the servo motors to execute motions. The OpenCR driver board communicates with the Raspberry Pi 3B main control board over a serial port; it executes in hardware the control instructions received by ROS, finally converting them into the electrical signals that drive the agent's motion.
The invention provides a multi-agent cooperative control algorithm verification system based on optical indoor positioning. Compared with the prior art, the invention has the beneficial effects that:
1. Compared with the mostly outdoor experiments of the prior art, the invention offers low cost, a controllable experimental environment, and easy observation and display of experimental results; repeated experiments can be carried out conveniently and quickly, improving the efficiency of tuning the algorithm under verification.
2. The infrared optical cameras used by the optical indoor positioning subsystem achieve sub-millimetre positioning accuracy and capture the marker points on the agents at frame rates above 100 frames per second, so the agents' three-dimensional position information is constructed accurately and in real time.
3. The invention uses the UDP protocol to send and receive the agents' position information, giving high data-transmission speed and good real-time performance.
4. The Turtlebot3 mobile robot serves as the controlled agent; it is a small, programmable, cost-effective ROS-based mobile robot with high flexibility that supports secondary development, making it well suited to indoor scenes.
5. Following a modular design philosophy, the multi-agent cooperative control algorithm to be verified is encapsulated in a standalone Simulink Mode node. In addition, the Agent GUI node is designed to make the system convenient to operate, to simplify debugging of the algorithm under verification, and to display experimental results intuitively. When several algorithms are to be verified, i.e. several Simulink Mode nodes exist, the Agent GUI node can invoke them simultaneously, greatly improving verification efficiency.
Drawings
FIG. 1 is a schematic diagram of a multi-agent cooperative control algorithm verification system based on optical indoor positioning.
Fig. 2 is a flowchart of the image processing unit.
Fig. 3 is a schematic diagram of an intelligent agent fixed with reflective identification points in different arrangement forms.
Fig. 4 is a flow chart of the control subsystem.
FIG. 5 is a structural framework diagram of an Agent GUI node.
FIG. 6 is an Agent GUI node home interface.
Fig. 7 is a system communication flow diagram.
Fig. 8 is a schematic diagram of the structure of the agent of the system.
Fig. 9 is a hardware connection diagram of the indoor optical positioning subsystem.
Fig. 10 is a schematic diagram of the experimental site layout of the system.
Fig. 11 is a diagram of a communication topology employed in an example of the present invention.
FIG. 12 is a graph showing the velocity profile of an agent during an experiment.
FIG. 13 is a graph showing the variation of the spacing between agents during the experiment.
FIG. 14 is a schematic diagram of an acceleration curve of an agent during an experiment.
Detailed Description
The multi-agent cooperative control algorithm verification system based on optical indoor positioning is further described in detail with reference to the accompanying drawings and embodiments.
The embodiment applies a specific multi-agent cooperative control algorithm to the invention and evaluates its performance through experimental results. The system must be set up before the experiment can be performed.
S1 construction of optical indoor positioning subsystem
Accurate acquisition of agent location information is the basis of the system. The invention realizes the positioning function by adopting an infrared optical three-dimensional motion capture system, the selected motion capture system has the characteristics of high speed, high precision, high resolution and the like, and the position information of the mobile robot can be accurately acquired, thereby realizing the real-time positioning and tracking of the mobile robot.
The infrared optical three-dimensional motion capture system mainly comprises an infrared optical camera, a switch, a server and an Ethernet data line for data transmission, and the connection of the hardware is shown in FIG. 9.
The test site used in this example is 4 m long, 4 m wide and 3 m high; its layout is shown in fig. 10. The infrared optical cameras are mounted above the field by fixtures so that their fields of view cover the capture area of the experimental site, and the data acquisition module in each camera acquires the agents' position information. The position information collected by all cameras is aggregated at the switch over Ethernet cables, and the switch likewise forwards it to the server over Ethernet, where the data identification module identifies and packages it into per-agent position information. Finally, the data sending module in the server takes the packaged per-agent position information and broadcasts it to the local area network.
Specifically, the data sending module is a user-defined UDP server that broadcasts the acquired data to the local area network over UDP at a fixed frequency.
S2, building of control subsystem
To realize control, the control host and the slaves (agents) of the control subsystem need corresponding configuration. For this system, the control host carries both the ROS and MATLAB/Simulink environments, each control slave also has the ROS environment installed, and master-slave control is realized through the corresponding network configuration.
On the control host, a custom UDP client node is created with two main functions: first, receiving the agent position information transmitted by the data sending module (the UDP server) in the indoor optical positioning subsystem's server; second, packaging the agent position information into ROS messages and publishing them on topics to the ROS network for the Simulink Mode node to consume.
S3 construction of communication subsystem
The router establishes the system's local area network; the indoor optical positioning subsystem and the control-subsystem host join the LAN through Ethernet cables, and the control-subsystem slaves join through their on-board Bluetooth modules, realizing data transmission and reception between the subsystems.
S4, testing algorithm
In order to test the effectiveness of the multi-agent cooperative control algorithm verification system based on optical indoor positioning, the section applies the cooperative control algorithm (4) to the system, and evaluates the performance of the system by observing the difference between the actual effect and the ideal result of the experiment.
S41, initializing parameters
The communication topology selected for this example is shown in fig. 11, where the central origin O represents the centre of the experimental site and the numbers 1, 2, …, 9 represent the 9 agents used by the system. The neighbor matrix corresponding to the topology is the matrix A:
(The 9 × 9 neighbor matrix A, encoding the topology of fig. 11, appears as an image in the original document.)
the initial velocity values v of all agents are set to 0, i.e., v ═ 100000010] T . Due to the field size limitation, at the initial time, 9 agents were placed on an elliptical track in the test field, the positions on the track being random, as shown in fig. 10, it can be seen that the initial inter-vehicle distances of each agent were different.
At the same time, the desired velocity v_0 of algorithm (4) is set to 0.1 m/s and the desired spacing S to 0.53 m. The expected result for the multi-agent system under the above consistency protocol is that the final velocity of each agent is 0.1 m/s and the inter-vehicle spacing of each agent is 0.53 m.
S42, experimental operation and result analysis
S42-1: and starting the Agent GUI node.
S42-2: the IP address of the control subsystem host is input in the editable text control of the communication module, and the control subsystem of the invention controls the IP of the host to be 10.1.1.100. And then clicking a Connect to ROS button control, setting the control subsystem host as an ROS Master, and establishing ROS communication connection with the control slave.
S42-3: in the parameter management module, a second-order lag-free algorithm is selected, the second-order lag-free algorithm corresponds to the consistency algorithm (4), meanwhile, an expected speed parameter is set to be 0.1m/s, an expected distance parameter is set to be 0.53m, and a parameter a corresponds to a control gain alpha in the algorithm (4) and is set to be 1. Since the parameter b and the communication-related parameters are not involved in the algorithm (4), it is set to 0.
S42-4: after the parameter configuration is finished, the Enable attribute of the Start button control in the Start-stop control module is changed into On. And clicking the button, and automatically calling a Simulink Mode node corresponding to the algorithm (4) by the Agent GUI node to represent the formal start of the experiment. The simulation time is set to be infinite, manual stop is indicated, and only the Pause button control in the start-stop control module needs to be clicked.
S42-5: clicking a Display button control in the live-action Display module, as shown in fig. 5, 9 red beads in the live-action Display module are abstracted intelligent bodies, the circle centers of the beads are the position coordinates of the intelligent bodies in the experimental field, and the positions of the red beads shown in fig. 6 correspond to the initial position coordinates of the intelligent bodies shown in fig. 10.
The velocity, acceleration and spacing information of the agents is displayed in the state monitoring module both graphically and as specific numerical values.
S42-6: fig. 12, 13 and 14 are graphs of the change of speed, spacing and acceleration of the agent during the experiment. With time, the speed, spacing and acceleration of all agents eventually approach agreement, and the desired speed value 0.1m/s and the desired spacing value 0.53m are reached.
The experimental results show that the method can effectively verify the multi-agent cooperative control algorithm, and has the advantages of simple operation, convenient algorithm design, adjustable parameters, high efficiency and the like.

Claims (8)

1. The multi-agent cooperative control algorithm verification system based on optical indoor positioning is characterized by comprising an optical indoor positioning subsystem, a communication subsystem and a control subsystem, wherein,
the optical indoor positioning subsystem acquires the position information of the agents in the space and broadcasts it on a local area network via the UDP (User Datagram Protocol) protocol; the communication subsystem is the bridge for data transmission within the system, connecting the optical indoor positioning subsystem and the control subsystem to the same local area network to realize data sharing and to carry the control instructions sent by the control subsystem; the control subsystem receives the agents' position information and hosts the design of the multi-agent cooperative control algorithm to be verified; the optical indoor positioning subsystem provides the algorithm under verification with the agents' position information, and the algorithm's output, namely the control instructions, is sent to the controlled objects through the communication subsystem for execution;
the optical indoor positioning subsystem comprises a data acquisition module, a data identification module and a data sending module; the hardware structure comprises a plurality of external optical cameras, a switch and a server; the data acquisition module runs in a processor of the infrared optical camera, and the data identification module and the data sending module run in a server;
the data acquisition module functions to acquire position information of a target point in space, and includes an image acquisition unit and an image processing unit.
2. The multi-agent cooperative control algorithm verification system based on optical indoor positioning as claimed in claim 1, wherein the optical indoor positioning subsystem comprises a data acquisition module, a data identification module and a data sending module, wherein,
the data acquisition module is used for acquiring the position information of target points in the space;
the data identification module is used for identifying and packaging the position information of the target points acquired by the data acquisition module to obtain the position information of the agents;
the data sending module serves as a UDP server and broadcasts the position information of each agent, packaged by the data identification module, to the local area network at a fixed frequency via the UDP protocol.
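By way of a non-limiting illustration, the fixed-frequency UDP broadcast of claim 2 can be sketched as follows. The JSON message layout, port number and broadcast address are assumptions for illustration; the patent does not specify the wire format.

```python
# Illustrative sketch of the data sending module (UDP server side).
# Message layout, port and address are assumptions, not the patent's format.
import json
import socket
import time

BROADCAST_ADDR = ("255.255.255.255", 9870)  # port is an assumption
RATE_HZ = 50  # fixed broadcast frequency (assumption)

def make_socket():
    # UDP socket with broadcasting enabled
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    return sock

def broadcast_poses(sock, poses, addr=BROADCAST_ADDR):
    """Broadcast one datagram of agent poses.

    poses: {agent_id: (x, y, z)}, as produced by the data identification
    module. One datagram is sent per control period (1 / RATE_HZ seconds).
    """
    msg = json.dumps({"t": time.time(), "poses": poses}).encode()
    sock.sendto(msg, addr)
```

A sender loop would call `broadcast_poses` every `1 / RATE_HZ` seconds; any host on the local area network can then receive the datagrams without knowing the server's address in advance.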
3. The optical indoor positioning-based multi-agent cooperative control algorithm verification system as claimed in claim 1, wherein the control subsystem comprises, as hardware, a PC and 9 mobile robots and, as software, an algorithm verification module, wherein,
the PC is installed with the ROS software framework and the MATLAB/Simulink environment and runs the ROS Master node, serving as the control host of the control subsystem;
the 9 mobile robots are the slave machines of the control subsystem and the controlled objects, i.e. the agents;
the algorithm verification module is used for receiving the position information of the agents as the input of the algorithm to be verified and outputting the control quantities, which comprise the linear velocity and angular velocity of each agent.
4. The optical indoor positioning-based multi-agent cooperative control algorithm verification system as claimed in claim 1, wherein the communication subsystem comprises a data transmission layer, an ROS network layer and an agent bottom layer, wherein,
the data transmission layer is used for sending and receiving the position information of the agents via the UDP protocol;
the ROS network layer is used for maintaining the ROS Master node and establishing ROS communication connections with the slave machines of the control subsystem;
the agent bottom layer is used for converting the control instructions into the electric signals that drive the agents to move.
5. The optical indoor positioning-based multi-agent cooperative control algorithm verification system as claimed in claim 1, wherein the data acquisition module comprises an image acquisition unit and an image processing unit, wherein,
the image acquisition unit is used for acquiring two-dimensional image information of target points in the three-dimensional space;
the image processing unit converts the acquired two-dimensional image information into the three-dimensional position coordinates of the target points using image processing algorithms.
6. The multi-agent cooperative control algorithm verification system based on optical indoor positioning as claimed in claim 1, wherein the algorithm verification module comprises a Simulink Mode node and an Agent GUI node, wherein,
the Simulink Mode node is used for realizing the design of the algorithm to be verified: it subscribes to the ROS topic containing the position information of the agents as the input of the algorithm, and after computation encapsulates the algorithm output into different topics according to the controlled object, thereby controlling the agents;
the Agent GUI node serves as the main interface for control operations, calls different Simulink Mode nodes to complete the verification of multiple algorithms, and displays the experimental results visually.
7. The optical indoor positioning-based multi-agent cooperative control algorithm verification system as claimed in claim 6, wherein: the Agent GUI node comprises a state monitoring module, a live-action display module, a data storage module, a parameter management module, a communication module and a start-stop control module, wherein,
the state monitoring module is used for monitoring and displaying the state data of the multiple agents, the state data comprising speed information, spacing information and acceleration information;
the live-action display module is used for abstracting the position information of the multiple agents into graphical information and displaying, in a two-dimensional scene, the effect achieved by the algorithm to be verified;
the data storage module is used for storing the acceleration, speed and inter-agent spacing information of the multiple agents during the experiment, facilitating data analysis after the experiment;
the parameter management module selects different algorithms to be verified and sets the control parameters and communication parameters;
the communication module is used for establishing the communication connection with the ROS Master node so as to acquire the state information of the agents and send the control instructions;
the start-stop control module is used for starting, stopping and exiting the system.
8. The optical indoor positioning-based multi-agent cooperative control algorithm verification system as claimed in claim 4, wherein: the data transmission layer comprises the data sending module in the optical indoor positioning subsystem and a data receiving node in the control subsystem; the data receiving node serves as the client of the UDP protocol: it first receives, via the UDP protocol, the data containing the agent position information from the data sending module, i.e. the UDP server; it then encapsulates the position information into ROS messages and sends them into the ROS network as a topic for the Simulink Mode node to subscribe to and acquire.
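By way of a non-limiting illustration, the data receiving node of claim 8 can be sketched as a UDP client that unpacks a position datagram and forwards each agent's pose to a publish callback. In the real system that callback would wrap the pose in an ROS message and publish it on a per-agent topic for the Simulink Mode node; here it is stubbed, and the JSON message layout is an assumption.

```python
# Illustrative sketch of the data receiving node (UDP client side).
# The JSON layout is an assumption; `publish` stands in for publishing
# an ROS message on a topic the Simulink Mode node subscribes to.
import json
import socket

def receive_poses(sock, publish, timeout=1.0):
    """Receive one UDP datagram of agent poses and forward each pose.

    sock:    a bound UDP socket listening for the positioning broadcast
    publish: callback publish(agent_id, (x, y, z)) standing in for an
             ROS topic publication
    Returns the timestamp carried in the datagram.
    """
    sock.settimeout(timeout)
    data, _ = sock.recvfrom(4096)
    payload = json.loads(data.decode())
    for agent_id, xyz in payload["poses"].items():
        publish(agent_id, tuple(xyz))
    return payload["t"]
```

A receiving loop would call `receive_poses` repeatedly, so that each broadcast period yields one ROS message per agent on the corresponding topic.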
CN202210472362.5A 2022-04-29 2022-04-29 Multi-agent cooperative control algorithm verification system based on optical indoor positioning Pending CN115065718A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210472362.5A CN115065718A (en) 2022-04-29 2022-04-29 Multi-agent cooperative control algorithm verification system based on optical indoor positioning

Publications (1)

Publication Number Publication Date
CN115065718A true CN115065718A (en) 2022-09-16

Family

ID=83196443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210472362.5A Pending CN115065718A (en) 2022-04-29 2022-04-29 Multi-agent cooperative control algorithm verification system based on optical indoor positioning

Country Status (1)

Country Link
CN (1) CN115065718A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117953018A (en) * 2024-03-26 2024-04-30 深圳康荣电子有限公司 Infrared induction screen following method, device, equipment and storage medium
CN118151523A (en) * 2024-05-09 2024-06-07 天津工业大学 PID-based multi-agent system output hysteresis consistency control method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102681542A (en) * 2012-03-07 2012-09-19 陶重犇 Experimental platform for indoor multipurpose mobile robot
CN107807661A (en) * 2017-11-24 2018-03-16 天津大学 Indoor quadrotor UAV formation trajectory control demonstration and verification platform and method
CN109407653A (en) * 2018-12-18 2019-03-01 中国人民解放军陆军装甲兵学院 A kind of indoor universal multiple mobile robot algorithm checking system
CN109839111A (en) * 2019-01-10 2019-06-04 王昕 An indoor multi-robot formation system based on visual positioning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination