Disclosure of Invention
The robot inspection method based on 5G communication can effectively improve the identification efficiency of the robot.
In order to achieve the purpose, the invention adopts the following technical scheme:
a robot inspection method based on 5G communication is based on a robot inspection system, wherein the robot inspection system comprises a motion control module, a navigation module, a task inspection module, a path planning module, a station control system module and a cloud vision identification module which are arranged on a robot body;
the robot inspection system adopts a 5G communication module for communication;
the method comprises the following steps:
s100, starting a robot map scanning function, establishing a three-dimensional coordinate system, and storing and backing up a map;
s200, setting fixed points and alignment for the area to be inspected by the robot, and setting the inspection route of the robot;
s300, adjusting the position and posture of the robot in inspection so that a camera of the robot can shoot each device;
s400, issuing a task through a station control system, enabling a robot control system to control the robot to move to a corresponding position according to the pose of a task point, taking a picture and returning the picture;
and S500, uploading the picture to a cloud identification system to obtain an identification result.
Further, the adjustment of the robot inspection pose in S300 includes: controlling the robot to move to the position of the equipment to be inspected; adjusting the pan-tilt of the robot so that the camera on the pan-tilt is aligned with the equipment; adjusting the magnification and focal length of the camera until the image reaches the set effect; and recording the coordinates of the robot, the coordinates of the pan-tilt, and the magnification and focal length values of the camera, and storing the recorded values in a file.
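As a minimal sketch of the pose-recording step described above (the function, field, and device names and the JSON layout are illustrative assumptions, not part of the invention):

```python
import json

def record_pose(device_id, robot_xyz, pan_tilt, zoom, focus, store):
    """Store the pose needed to re-photograph one device (step S300)."""
    store[device_id] = {
        "robot_xyz": list(robot_xyz),  # robot position in the scanned 3-D map
        "pan_tilt": list(pan_tilt),    # pan-tilt angles, in degrees
        "zoom": zoom,                  # camera magnification
        "focus": focus,                # camera focal length
    }

poses = {}
record_pose("meter-01", (3.2, 1.5, 0.0), (90.0, -15.0), 4.0, 12.5, poses)
# Serialize what would be written to the pose file:
pose_file_text = json.dumps(poses, indent=2)
```

The stored record is later queried by device identifier when a task names that device.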
Further, in S400, the station control system issues a task whose content comprises the information of the equipment to be inspected. The inspection system queries the pose file stored in S300 according to the equipment information to find the pose information of the robot when that equipment was inspected; the pose information comprises the position coordinates of the robot, the coordinates of the pan-tilt, and the magnification and focal length of the camera. The control system controls the robot to move to the coordinate position, then adjusts the pan-tilt and the camera, takes a picture, and transmits the picture back to the station control system.
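The task-execution flow of S400 can be sketched as follows; the Robot class, its method names, and the in-memory pose table are hypothetical stand-ins for the robot control system and the pose file, not the patent's API:

```python
# Stand-in for the pose file stored in S300.
poses = {
    "meter-01": {"robot_xyz": (3.2, 1.5, 0.0), "pan_tilt": (90.0, -15.0),
                 "zoom": 4.0, "focus": 12.5},
}

class Robot:
    """Toy robot control interface that records the commands it receives."""
    def __init__(self):
        self.log = []
    def move_to(self, xyz):
        self.log.append(("move", xyz))
    def set_pan_tilt(self, angles):
        self.log.append(("pan_tilt", angles))
    def set_camera(self, zoom, focus):
        self.log.append(("camera", zoom, focus))
    def capture(self):
        self.log.append(("capture",))
        return b"jpeg-bytes"

def run_task(robot, device_id):
    pose = poses[device_id]            # query the pose stored in S300
    robot.move_to(pose["robot_xyz"])   # move to the recorded coordinates
    robot.set_pan_tilt(pose["pan_tilt"])
    robot.set_camera(pose["zoom"], pose["focus"])
    return robot.capture()             # picture returned to station control

bot = Robot()
picture = run_task(bot, "meter-01")
```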
Further, S500 specifically includes: after the station control system receives the picture returned by the robot, the picture is uploaded to the cloud via a 5G signal, and the visual identification system at the cloud identifies the meter reading in the picture and returns the result to the station control system.
Further, the 5G communication module of the robot inspection system and the station control system are connected through a 5G antenna, and the station control system and the cloud visual recognition system are likewise connected through a 5G antenna.
According to the technical scheme, the robot inspection method based on the 5G communication has the following beneficial effects:
compared with the inspection mode of the traditional robot, the whole system has a high transmission rate; in particular, in the aspect of identification technology, the 5G + cloud vision identification system can remarkably improve the identification rate and accuracy and effectively reduce the system maintenance cost.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention.
As shown in fig. 1 to 5, the robot inspection control method based on 5G communication according to this embodiment is based on a robot inspection system, and the system includes a motion control module, a navigation module, a task inspection module, a path planning module, a station control system module, and a cloud vision recognition module of a robot body;
further comprising:
a three-dimensional laser module associated with the navigation module, and an upper-layer control program module and a 5G communication module used for completing robot inspection and path planning.
The robot inspection system further comprises a computer-side station control module for monitoring the robot inspection in real time; this module is connected with and communicates with the robot body through a 5G module.
The station control module can preview a real-time picture of the robot body camera and detect alarm information of the robot body, such as sensor faults, obstacle avoidance radar triggering and the like.
The station control module can issue a polling task to the robot, receive polling pictures returned by the robot and monitor the task progress of the robot.
And the station control module uploads the pictures returned by the robot to the cloud vision recognition system through 5G communication.
The cloud visual recognition system recognizes the meter reading in the picture by using the uploaded picture, the model files in the visual library, and a recognition model that is continuously learned and accumulated, and then returns the meter reading to the station control system.
Meanwhile, the inspection method of the embodiment of the invention specifically comprises the following steps:
s100, starting a robot map scanning function, establishing a three-dimensional coordinate system, and storing and backing up a map;
s200, setting fixed points and alignment for the area to be inspected by the robot, and setting the inspection route of the robot;
s300, adjusting the position and posture of the robot in inspection so that a camera of the robot can shoot each device;
s400, issuing a task through a station control system, enabling a robot control system to control the robot to move to a corresponding position according to the pose of a task point, taking a picture and returning the picture;
and S500, uploading the picture to a cloud identification system to obtain an identification result.
The following is a detailed description:
the operation control module 4-3 comprises a motor module and a motor driver module, wherein the motor module and the motor driver module are used for controlling the robot to move forward and backward and turn, the motor driver module receives a motion instruction sent by an upper layer application, and a motor is controlled to operate in a set direction. The module has two operation control modes, namely a manual control mode and an autonomous movement mode.
In the manual control mode, the implementation tool 2-1 is used to control the robot to walk to the target position, completing the manual photographing and inspection functions.
In the manual control mode, the robot can also be controlled to walk in a set area and, in combination with the laser module, complete the scanning task of the actual site.
The laser module 4-2 detects the surrounding environment, and establishes a three-dimensional coordinate system according to the moving distance of the vehicle body and the returned laser point information.
In the manual control mode, after the robot completes the establishment of the coordinate system, the implementation tool is used to control the robot to move in the inspection area. The robot stops at the position of an inspection target, the angle of the robot's pan-tilt is adjusted to align with the equipment to be photographed, and the magnification and focal length of the camera on the pan-tilt are then adjusted until the image is sufficiently clear. The information, including the current position coordinates of the robot, the angle coordinates of the pan-tilt, and the magnification and focal length values of the camera, is then stored in the implementation tool. The robot is then controlled to move to the next inspection target, and the operation is repeated.
After the above steps are completed, the information is stored in an xml file, and the file is uploaded to the station control module, so that the station control module can send the information to the corresponding module in the robot when it issues a task.
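A minimal sketch of such an xml pose file, built and read back with the Python standard library; the element and attribute names are assumptions, since the patent does not specify the file schema:

```python
import xml.etree.ElementTree as ET

# Build one inspection point entry as it might appear in the pose file.
root = ET.Element("inspection_points")
pt = ET.SubElement(root, "point", id="meter-01")
ET.SubElement(pt, "robot_xyz").text = "3.2,1.5,0.0"   # robot position
ET.SubElement(pt, "pan_tilt").text = "90.0,-15.0"     # pan-tilt angles
ET.SubElement(pt, "zoom").text = "4.0"                # camera magnification
ET.SubElement(pt, "focus").text = "12.5"              # camera focal length

xml_text = ET.tostring(root, encoding="unicode")

# Reading the values back, as the robot would when a task arrives:
parsed = ET.fromstring(xml_text)
zoom = float(parsed.find("point/zoom").text)
```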
In the autonomous movement mode, the operation control module of the robot moves the robot autonomously to a designated position according to commands sent by the upper-layer application, completing the task inspection function.
Inspection tasks are arranged on the station control module 1-2; each task specifies the number of points to be inspected, the inspection period, and the inspection time point. The station control module then communicates with the robot and switches the robot's operation control mode to the automatic mode.
A task is sent to the robot under station control (figure 3), and the real-time inspection interface shows the task state as in progress.
The task inspection module 3-1 at the robot end receives a series of inspection tasks issued by the station control, stores them in a database, and then takes one task at a time from the database in task order for inspection. After a task is taken, the target position, pan-tilt angle, and camera magnification and focal length values for that task are queried according to the map information, and this information is passed to the path planning module.
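The take-one-task-at-a-time behaviour can be sketched with a simple in-memory queue, an illustrative stand-in for the database used by the module:

```python
from collections import deque

class TaskQueue:
    """Stores tasks issued by station control; hands them out in order."""
    def __init__(self):
        self._q = deque()
    def receive(self, tasks):
        # A batch of inspection tasks arrives from the station control.
        self._q.extend(tasks)
    def next_task(self):
        # Take one task from the store, in task order; None when empty.
        return self._q.popleft() if self._q else None

q = TaskQueue()
q.receive(["meter-01", "meter-02"])
first = q.next_task()
```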
The path planning module 4-1 calculates the intermediate points and intermediate route through which the robot moves to the target position according to the robot's current position and the next target point, searching for an optimal path.
According to the calculated path, the path planning module sends the coordinate information of the points the robot passes through to the navigation module, so that the robot moves to the target position along the expected path.
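The patent does not name the search algorithm; as one plausible sketch under that caveat, a breadth-first search over an occupancy grid yields a shortest sequence of intermediate points from the robot's position to the target:

```python
from collections import deque

def plan_path(grid, start, goal):
    """grid[r][c] == 1 is an obstacle; returns a list of (r, c) waypoints."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # predecessor map for path reconstruction
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]     # waypoints from start to goal
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cur
                q.append((nr, nc))
    return None                   # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 0))
```

Breadth-first search is the simplest optimality-preserving choice on a uniform grid; a weighted map would call for Dijkstra or A* instead.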
The navigation module 4-2 receives the coordinate information sent by the path planning module, locates the robot's current position in real time through the laser radar, and sends instructions to the robot's operation control module so that the robot moves toward the target point, continuously adjusting its posture during the movement so that it travels along the fixed route.
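The localize-and-correct loop can be illustrated with a toy proportional stepping controller; this is an assumption for illustration, not the patent's navigation method:

```python
import math

def navigate(pos, target, step=0.5, tol=0.1, max_iters=100):
    """Step toward target, re-correcting heading each iteration."""
    x, y = pos
    tx, ty = target
    for _ in range(max_iters):
        dx, dy = tx - x, ty - y
        dist = math.hypot(dx, dy)
        if dist <= tol:              # arrived: inspection module takes over
            return (x, y)
        move = min(step, dist)       # bounded step along corrected heading
        x += move * dx / dist
        y += move * dy / dist
    return (x, y)

final = navigate((0.0, 0.0), (2.0, 1.0))
```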
After the robot reaches the target position, the inspection module takes over control again and adjusts the position of the pan-tilt and the magnification and focal length of the camera so that the camera is aimed at the set point, and then the picture is taken.
The robot communicates with the station control system in real time through its vehicle-mounted 5G network signal transceiver module 1-5, uploading the pictures taken by the robot and receiving other instructions from the station control system.
The station control system displays an interface in real time, on which the real-time picture of the robot body camera can be previewed and the inspection state of the robot is displayed.
The 5G network signal transceiver module 1-1 of the station control system communicates with the cloud visual identification system 1-3 through 5G signals, and the station control system transmits the pictures to the cloud identification system.
The cloud system is composed of a master node 5-1 and a plurality of slave nodes, each of which is an independent visual identification system. The master node is responsible for managing the slave nodes and distributing tasks to them for processing; after a slave node finishes processing, it returns the result to the master node.
The visual identification functions of the slave nodes differ: slave node 5-2 is responsible for collecting visual models, 5-3 for machine learning, 5-4 for big data processing, and so on; the nodes can communicate with each other to share data.
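The master/slave topology described above can be sketched as follows; the node roles, the round-robin dispatch policy, and the recognize() stub are all illustrative assumptions:

```python
class SlaveNode:
    """An independent visual identification system with one role."""
    def __init__(self, role):
        self.role = role
    def recognize(self, picture):
        # Stand-in for the node's actual recognition pipeline.
        return {"role": self.role, "reading": len(picture)}

class MasterNode:
    """Manages the slaves and distributes tasks to them for processing."""
    def __init__(self, slaves):
        self.slaves = slaves
        self._next = 0
    def dispatch(self, picture):
        slave = self.slaves[self._next % len(self.slaves)]
        self._next += 1
        return slave.recognize(picture)   # result returned to the master

master = MasterNode([SlaveNode("model-library"),
                     SlaveNode("machine-learning"),
                     SlaveNode("big-data")])
result = master.dispatch(b"jpeg")
```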
After the picture processing is finished, the cloud system returns the final result to the station control system, and the station control system writes the result into the database and displays it on the front-end page.
In conclusion, the whole system has a high transmission rate; in particular, in the aspect of identification technology, the 5G + cloud vision identification system can remarkably improve the identification rate and accuracy and effectively reduce the system maintenance cost.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.