CN115661966A - Inspection system and method based on augmented reality - Google Patents

Inspection system and method based on augmented reality

Info

Publication number
CN115661966A
CN115661966A (application CN202211130799.7A)
Authority
CN
China
Prior art keywords: inspection, real, augmented reality, virtual, image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211130799.7A
Other languages
Chinese (zh)
Inventor
李旺
魏明
吴振威
余军
熊云飞
徐文君
刘佳宜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Technology WUT
Wuhan Fiberhome Technical Services Co Ltd
Original Assignee
Wuhan University of Technology WUT
Wuhan Fiberhome Technical Services Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Technology WUT, Wuhan Fiberhome Technical Services Co Ltd filed Critical Wuhan University of Technology WUT
Priority to CN202211130799.7A priority Critical patent/CN115661966A/en
Publication of CN115661966A publication Critical patent/CN115661966A/en
Pending legal-status Critical Current

Landscapes

  • Manipulator (AREA)

Abstract

The invention relates to the field of operation and maintenance assistance, and in particular to an inspection system and method based on augmented reality. The system comprises: an inspection robot with a navigation component that plans the inspection trajectory, a motion component that carries out the inspection and captures real images of the site, and a display component that shows the virtual-real fused image; a detection component that analyzes the real images with a convolutional neural network model to obtain detection information; and an augmented reality assistance component that fuses the real images with the detection information and sends the fused image back to the inspection robot for display. The invention solves the inconvenience of existing augmented reality hardware, can carry out routine unattended inspection, and can also provide assistance during manned inspection, thereby improving operation and maintenance efficiency.

Description

Inspection system and method based on augmented reality
Technical Field
The invention relates to the field of auxiliary operation and maintenance, in particular to an inspection system and method based on augmented reality.
Background
Augmented reality (AR) is currently a hot technology in the field of human-computer interaction. Using key techniques such as virtual-real registration, virtual-real occlusion, and model rendering, augmented reality can superimpose a virtual model or animation at a specific position in a real scene, realizing visual human-computer interaction that fuses the virtual and the real and guiding the user's understanding of the current scene. Applied to inspection work, augmented reality can fuse virtual fault information with the real scene video and assist on-site inspection personnel in operation and maintenance according to the fault content; likewise, when a complex operation and maintenance problem arises on site, a remote technical expert can use augmented reality to guide on-site personnel through the operation.
Current augmented reality technology is applied to the operation and maintenance of scenes such as substations and communication equipment rooms, generally using augmented reality glasses or a portable computer to display fault or inspection information and assist manual work. In practical use, mainstream augmented reality glasses suffer from short battery life and low computing power, and have an intrusive effect on the wearer: long-term wearing easily causes fatigue and vision deterioration in inspection personnel. When a portable computer is used for the augmented reality display instead, it occupies both hands of the operation and maintenance personnel and interferes with their work.
In view of this, overcoming the defects of the prior art and solving the inconvenient use of current augmented-reality-based operation and maintenance systems is an urgent problem in the technical field.
Disclosure of Invention
Aiming at the defects or improvement requirements of the prior art, the invention solves the problem of inconvenient use of hardware in the existing inspection system based on augmented reality.
The embodiment of the invention adopts the following technical scheme:
In a first aspect, the invention provides an augmented-reality-based inspection system comprising an inspection robot 10, a detection part 20, and an augmented reality assistance part 30. Specifically: the inspection robot 10 comprises a navigation unit 11, a motion unit 12, and a display unit 13, wherein the navigation unit 11 plans an inspection trajectory from position information, environment images, and inspection content; the motion unit 12 carries out the inspection along the trajectory provided by the navigation unit and acquires real images of the scene; and the display unit 13 displays the virtual-real fused image produced by the augmented reality assistance part 30. The detection part 20 analyzes the real images acquired by the inspection robot 10 with a convolutional neural network model to obtain detection information, which comprises fault information and operation and maintenance guidance information. The augmented reality assistance part 30 fuses the real images with the detection information and sends the fused image to the inspection robot 10 for display.
Preferably, the inspection robot 10 further comprises a pan-tilt structure 15. Specifically: the pan-tilt structure 15 is fixed on the upper part of the inspection robot 10 and carries the image acquisition device, so that real images of equipment at different heights can be acquired by raising and lowering the pan-tilt structure 15.
In a second aspect, the invention provides an augmented-reality-based inspection method, applied to the inspection system of the first aspect. The inspection method includes: planning the inspection trajectory in real time according to the actual scene, and acquiring real images of the inspection target along the planned trajectory; detecting the real images acquired by the inspection robot 10 with a convolutional neural network model improved from the YOLO v3 backbone network to obtain detection information; and fusing the detection information with the real images through augmented reality, then sending the fused image to the inspection robot 10 for display, so as to assist operation and maintenance personnel in their work.
Preferably, the inspection robot 10 plans the inspection trajectory in real time according to the actual scene, which specifically includes: expressing the nodes of the topological space corresponding to all inspection target points with an activity field, and achieving full-area coverage with an improved biologically inspired neural network algorithm to complete global trajectory planning over the multiple task points; when an obstacle is found or the robot deviates from the global trajectory, a local path planning algorithm replans the optimal trajectory to the next target point.
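The local replanning step can be sketched with a standard grid A* search. The patent does not name a specific local planner, so A* here is an illustrative stand-in, and the occupancy grid, unit step costs, and Manhattan heuristic are assumptions:

```python
import heapq

def astar(grid, start, goal):
    """Plan a path on a 2D occupancy grid (0 = free, 1 = obstacle).

    Illustrative stand-in for the local re-planner; the patent does not
    specify the actual algorithm. Returns a list of (row, col) cells
    from start to goal, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]            # priority queue of (cost + heuristic, cell)
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:                # reconstruct path by walking back
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                new_cost = cost[cur] + 1
                if nxt not in cost or new_cost < cost[nxt]:
                    cost[nxt] = new_cost
                    # Manhattan-distance heuristic to the goal
                    prio = new_cost + abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])
                    heapq.heappush(frontier, (prio, nxt))
                    came_from[nxt] = cur
    return None
```

In use, when an obstacle is detected the robot would mark the blocked cells in the grid and re-run the search from its current cell to the next inspection target point.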
Preferably, detecting the real images acquired by the inspection robot 10 with a convolutional neural network model improved from the YOLO v3 backbone network specifically includes: using YOLO v3 as the backbone, attaching to Darknet-53 at least two multi-scale detection branches with different network structures as outputs, each multi-scale detection branch using an attention mechanism module.
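The patent does not detail the attention mechanism module on each branch; a common choice for such a per-branch module is channel attention in the squeeze-and-excitation style, sketched below in NumPy with randomly initialized weights standing in for the learned ones:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention (illustrative).

    feat: (C, H, W) feature map from one detection branch.
    w1: (C // r, C) and w2: (C, C // r) are the bottleneck weights,
    which a real model would learn; here they are random stand-ins.
    Returns the per-channel reweighted feature map.
    """
    squeezed = feat.mean(axis=(1, 2))        # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)  # ReLU bottleneck
    weights = sigmoid(w2 @ hidden)           # per-channel gates in (0, 1)
    return feat * weights[:, None, None]

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))        # 8 channels, 4x4 spatial
w1 = rng.standard_normal((2, 8))             # reduction ratio r = 4
w2 = rng.standard_normal((8, 2))
out = channel_attention(feat, w1, w2)
```

Because every gate lies in (0, 1), the module can only attenuate channels, letting the branch emphasize the feature channels most relevant at its scale.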
Preferably, acquiring the detection information specifically includes: performing object recognition on the real images acquired by the inspection robot 10 with the convolutional neural network model and judging whether they conform to the normal operating state of the equipment; if not, acquiring the corresponding fault information and operation and maintenance guidance information; when a direct judgment cannot be made, obtaining the detection information through an expert system and/or a remote expert.
Preferably, fusing the detection information with the real images through augmented reality specifically includes: constructing a three-dimensional model of the real inspection space as the virtual scene, assigning the pose matrix of the real camera in the real inspection space to the virtual camera, and fusing the virtual scene captured by the virtual camera with the real image captured by the real camera to complete virtual-real registration and virtual-real occlusion handling.
Preferably, assigning the pose matrix of the real camera in the real inspection space to the virtual camera specifically includes: determining the position of the inspection robot 10 in the real inspection space from the map of that space, converting the robot's position coordinates to the coordinates of the image acquisition device 14 according to the relative position between the device and the robot, and solving the pose matrix of the real camera from the coordinates of the image acquisition device 14.
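The coordinate hand-off from the robot base to the mounted camera can be written as a chain of homogeneous transforms: map-to-base from localization, then a fixed base-to-camera mounting offset. The mounting offset and robot pose below are assumed example values, not figures from the patent:

```python
import numpy as np

def pose2d(x, y, theta):
    """4x4 homogeneous pose of the robot base in the map frame
    (planar motion: rotation about z, translation in x/y)."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[0, 3], T[1, 3] = x, y
    return T

# Assumed camera mounting: 0.2 m forward of the base center, 1.5 m up.
T_base_cam = np.eye(4)
T_base_cam[0, 3] = 0.2
T_base_cam[2, 3] = 1.5

# Robot localized at (3, 2) on the map, facing the +y direction.
T_map_base = pose2d(3.0, 2.0, np.pi / 2)

# Pose matrix of the real camera, handed to the virtual camera.
T_map_cam = T_map_base @ T_base_cam
```

Composing the two transforms places the camera 0.2 m ahead of the base along its heading and 1.5 m above the floor; solving the real camera's pose this way is what lets the virtual camera be posed identically.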
Preferably, converting the coordinates of the inspection robot 10 in the inspection space to the coordinates of the image acquisition device 14 further includes: determining the origin of the real-world coordinate system from the origin used by the inspection robot 10 when building the map, and aligning the origin of the world coordinate system of the virtual model with the real-world origin during virtual-real registration.
Preferably, before fusing the virtual scene captured by the virtual camera with the real image captured by the real camera, the method further includes: setting the extrinsic parameters of the virtual camera to the pose matrix of the real camera, and assigning the intrinsic parameters obtained by calibrating the real camera as the intrinsic parameters of the virtual camera.
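Reusing the calibrated intrinsics on the virtual camera makes both cameras project 3D points to the same pixels. The standard pinhole projection below shows the role of those intrinsics; the focal lengths and principal point are assumed example values, not calibration results from the patent:

```python
import numpy as np

# Assumed intrinsics from calibration: fx, fy focal lengths in pixels,
# (cx, cy) principal point. The virtual camera must reuse this matrix.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(K, point_cam):
    """Project a 3D point given in camera coordinates (x, y, z),
    z pointing forward, to pixel coordinates (u, v)."""
    p = K @ point_cam
    return p[:2] / p[2]   # perspective divide by depth

# A point 2 m in front of the camera, slightly right and above center.
uv = project(K, np.array([0.1, -0.05, 2.0]))
```

With identical K and identical pose on both cameras, a virtual annotation rendered at a device's 3D position lands on the same pixels as the device in the real frame, which is exactly what virtual-real registration requires.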
Compared with the prior art, the embodiments of the invention have the following beneficial effects: the augmented reality technology is combined with the inspection robot 10, so that operation and maintenance information can be displayed on the display part of the inspection robot 10 together with the real images it captures, assisting on-site personnel according to the fault content. Using augmented reality on the inspection robot not only avoids the insufficient computing power and battery life of augmented reality glasses and their intrusive effect on the wearer, but also avoids a portable computer occupying both hands of the inspection personnel. The system can carry out routine unattended inspection and also provide assistance during manned inspection, improving operation and maintenance efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the embodiments of the present invention will be briefly described below. It is obvious that the drawings described below are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic structural diagram of a system for routing inspection based on augmented reality according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of another inspection system based on augmented reality according to an embodiment of the present invention;
fig. 3 is a flowchart of a method for routing inspection based on augmented reality according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a process of acquiring detection information by using an augmented reality-based inspection method according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a neural network structure used in the method for routing inspection based on augmented reality according to the embodiment of the present invention;
fig. 6 is a schematic diagram of a process of fusing a real image and detection information by using an augmented reality-based inspection method according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a process of constructing a map by using an augmented reality-based inspection method according to an embodiment of the present invention;
wherein the reference numerals used are as follows:
10: inspection robot, 11: navigation unit, 12: moving part, 13: display part, 14: image acquisition device, 15: pan-tilt structure,
20: detection part,
30: augmented reality assistance part.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
The present invention describes a system architecture with specific functions, so the detailed embodiments mainly explain the functional and logical relationships among the structural modules and do not limit the specific software and hardware implementation.
In addition, the technical features involved in the respective embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other. The invention will now be described in detail with reference to the figures and examples.
Example 1:
In order to solve the problem of inconvenient hardware use in existing augmented reality inspection systems, this embodiment provides an augmented-reality-based inspection system. The system uses the inspection robot 10 for automatic inspection and dynamically monitors the inspection target points in real time through the image acquisition device 14 on the robot, without manual on-site watching or inspection, and can meet the inspection needs of indoor or outdoor equipment in fields such as electric power, communications, petrochemicals, and security. At the same time, the augmented reality image obtained by fusing the real image with the fault information and operation and maintenance guidance information is displayed on the display unit 13 of the inspection robot 10, guiding operation and maintenance personnel through equipment-room maintenance by means of augmented reality.
Fig. 1 is a schematic diagram of a device architecture according to an embodiment of the present invention. The system includes an inspection robot 10, a detection part 20, and an augmented reality assistance part 30; lines with arrows indicate data interaction, with the arrow giving the direction of transmission. The inspection robot 10 achieves intelligent unmanned inspection through technologies such as autonomous navigation, deep learning, and cloud computing; with its laser radar and vision system, it can perform autonomous mapping, localization, and path planning. The detection part 20 uses the real on-site images collected by the inspection robot 10 to achieve unmanned intelligent detection of on-site equipment by means of edge or cloud computing. The augmented reality assistance part 30 fuses the intelligent detection result with the real image, has the inspection robot 10 display it, and provides auxiliary information to inspection personnel during manned inspection.
As shown in fig. 1, the inspection robot 10 includes a navigation part 11, a moving part 12, and a display part 13. The inspection robot 10 cruises through the inspection area via the navigation part 11 and the moving part 12, and presents the augmented reality information to operation and maintenance personnel through the display part 13, with no need for manual inspection or for manually carrying an augmented reality display device.
The navigation part 11 of the inspection robot 10 plans the inspection trajectory from the position information, the environment images, and the inspection content. In the system of this embodiment, the navigation part 11 obtains environmental information or positioning marks from the data of sensors such as the laser radar, vision system, rangefinder, edge sensor, two-dimensional-code scanner, and RFID reader on the inspection robot 10, and through the navigation algorithm implements the robot's mapping, localization, cruising, and obstacle-avoidance functions. Preferably, to avoid the data delay or communication interruption that remote communication between the navigation part 11 and the moving part 12 would cause, the navigation part 11 is deployed on the industrial personal computer of the inspection robot 10 or integrated with the control system of the moving part 12, ensuring that the robot retains its autonomous navigation function.
The moving part 12 of the inspection robot 10 inspects along the trajectory provided by the navigation part 11 and acquires real images of the scene. The moving part 12 is the main body of the inspection robot 10: the sensing components, the display part 13, and the image acquisition device 14 are mounted on it, as is the industrial personal computer hosting the navigation part 11. The specific size, structure, and motion mode of the moving part 12 can be chosen according to the actual scene to ensure that the robot's structure suits the inspection environment. For example, in a flat indoor scene such as a communication equipment room, a wheeled structure may be used: since the room is indoors with few obstacles on its flat floor, the differential chassis of the inspection robot 10 can adopt wheels. Moreover, because the space between the cabinets of a communication room is limited, the length and width of the inspection robot 10 are kept within 500 mm × 500 mm to ensure good passability in that environment. In outdoor scenes such as power inspection, a shock-absorbing mechanism can be added to the wheel structure, tires with higher trafficability can be used, or a tracked structure can be adopted to meet the trafficability requirements of different terrains.
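A differential chassis of the kind chosen above steers by driving its two wheels at different speeds; a single commanded linear and angular velocity maps to the two wheel speeds as sketched below. The track width is an assumed value, not a dimension from the patent:

```python
def diff_drive_wheel_speeds(v, omega, track_width=0.4):
    """Left/right wheel linear speeds (m/s) for a differential chassis.

    v: forward velocity (m/s); omega: yaw rate (rad/s, positive = left
    turn); track_width: distance between the wheels (m) -- an assumed
    example value for a robot within the 500 mm x 500 mm footprint.
    """
    v_left = v - omega * track_width / 2.0
    v_right = v + omega * track_width / 2.0
    return v_left, v_right
```

Equal speeds drive the robot straight; opposite speeds spin it in place, which is what lets the chassis turn within the narrow aisles between cabinets.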
The image acquisition device 14 collects equipment and scene images of the inspection site; in the augmented reality image processing, it serves as the real camera, and the scene images it collects serve as the real images. Depending on actual needs, the image acquisition device 14 may be a common device such as an image sensor, camera, video camera, or infrared camera, or a special-purpose image acquisition device.
Further, as shown in fig. 2, the inspection robot 10 may also include a pan-tilt structure 15 fixed on its upper part and carrying the image acquisition device 14, so that real images of equipment at different heights can be obtained by raising and lowering the pan-tilt. For example, in a communication-room inspection scene where cabinets reach 2000 mm in height, the lifting pan-tilt ensures that the image acquisition device 14 can effectively photograph the operating state of equipment at the top of a cabinet. In scenes requiring high-altitude inspection, such as power-line inspection, the inspection robot 10 can also carry a drone bay and the corresponding remote control devices, completing the inspection of higher positions with a drone.
Unlike current inspection robots, the inspection robot 10 of the augmented-reality-based inspection system provided in this embodiment carries a display unit 13 that shows the virtual-real fused image produced by the augmented reality assistance part 30. In a manned scenario, operation and maintenance personnel can read the information directly on the display unit 13 without carrying additional display devices such as AR glasses or a tablet computer. Furthermore, with a matching navigation algorithm, the motion trajectory of the inspection robot 10 can be set to a following mode, so that the robot moves along with the personnel and the information is even more convenient to check.
The detection part 20 detects the real images acquired by the inspection robot 10 with a convolutional neural network model to obtain detection information; during inspection, this information comprises fault information and operation and maintenance guidance information. In practice, because the artificial intelligence computation requires large computing resources, the hardware of the detection part 20 generally cannot be carried directly by the inspection robot. A trained neural network is therefore usually deployed on a server as the main control part: the real images shot by the inspection robot 10 are transmitted to the server, used as the network's input, and detected there; fault features or abnormal operation are identified through image recognition; and the fault or abnormality information is sent to the augmented reality assistance part 30 to be fused with the real image, or forwarded to other management platforms for the operation and maintenance personnel to check. The detection result comprises the real image, the types of the objects in it, and the corresponding fault report or operation and maintenance instructions. Further, if the fault type cannot be determined by the neural network, an expert system can be used for detection, or an online technical expert can perform manual diagnosis.
The augmented reality assistance part 30 fuses the real image with the fault information and operation and maintenance guidance information, and sends the fused image to the inspection robot 10 for display: the real image shot by the robot is combined with the detection result of the detection part 20, virtual-real registration and occlusion handling of the virtual information and the real image are carried out on the server, and the fused image is finally transmitted back to the inspection robot 10 and shown on its display part. When the inspection robot 10 assists manual operation and maintenance, the virtual information to display mainly comprises equipment fault information or guidance from a remote expert. Because augmented reality requires heavy image processing and consumes substantial computing resources, the related algorithms are likewise deployed on the server side in practice and can be integrated with the detection part 20.
If the detection part 20 and the augmented reality assistance part 30 are deployed on the server, data interaction between the inspection robot 10 and the server also requires remote communication. In practice, to avoid wired communication restricting the robot's movement, the inspection robot 10 communicates with the detection part 20 and the augmented reality assistance part 30 wirelessly, for example over WiFi or 4G/5G.
In the system provided by this embodiment, augmented reality is combined with the inspection robot 10: operation and maintenance information is displayed on the robot's display part together with the real images it captures, assisting on-site personnel according to the fault content. Using augmented reality on the inspection robot 10 avoids the insufficient computing power and battery life of augmented reality glasses and their intrusive effect on the wearer, and also avoids a portable computer occupying both hands of the inspection personnel; the system can carry out routine unattended inspection and also provide assistance during manned inspection, improving operation and maintenance efficiency.
Example 2:
On the basis of the augmented-reality-based inspection system provided in embodiment 1, the invention further provides an augmented-reality-based inspection method that can be carried out on that system.
As shown in fig. 3, the routing inspection method based on augmented reality provided by the embodiment of the present invention includes the following specific steps.
Step 101: the inspection robot 10 plans the inspection track in real time according to the actual scene, and acquires the real image of the inspection target according to the planned inspection track.
In the method provided by this embodiment, the inspection robot 10 in the inspection system provided in embodiment 1 is used to complete the acquisition of the inspection real image, and the acquired real data is used for subsequent fault detection and augmented reality processing. In order to facilitate the completion of the real image acquisition in an unattended scene, the inspection robot 10 can achieve autonomous mapping, positioning and path planning by means of a laser radar and a vision system. Even in an unfamiliar environment, the inspection robot 10 can realize autonomous navigation and automatic obstacle avoidance of a new scene by using a path planning learning algorithm integrated in the navigation unit 11.
Step 102: and detecting the real image acquired by the inspection robot 10 by using a convolution neural network model improved based on the YOLO v3 backbone network to acquire detection information.
The detection algorithm provided by the embodiment is carried on the detection part 20 in the inspection system provided by the embodiment 1, the daily inspection content in the inspection site is learned through a deep learning algorithm, and the daily unmanned automatic inspection of the equipment is realized through an image processing technology. In the method of the embodiment, unmanned intelligent detection on the inspection field equipment can be realized by utilizing the field real image; the on-site audio can be used for realizing the unmanned monitoring of the alarm sound of the on-site equipment; meanwhile, by means of multi-mode data acquired by various sensors, intelligent operation and maintenance detection is carried out on whether a machine room breaks down on site or not. In order to improve the flexibility of the identification algorithm and enable the identification algorithm to be capable of self-learning according to different field conditions, the algorithm provided by the embodiment adopts a convolutional neural network algorithm to complete various tasks in the routing inspection process, utilizes the neural network to identify objects in real images acquired by the camera of the routing inspection robot 10, and automatically judges whether the real images meet the normal running state of routing inspection space equipment. Further, depending on the edge computing or cloud computing technology, the inspection robot 10 can deploy a complex deep learning computing model on the server side to improve the processing efficiency.
As shown in fig. 4, the tasks to be completed in daily inspection are mainly various types of data alarm lamps, alarm tones, statistics of cabinet equipment types and serial number codes, on-off state recognition, face recognition in a machine room, and the like, and all the tasks can be completed through image recognition and audio recognition. For example, in the routine inspection process of a communication room, inspection personnel mainly need to record various types of data alarm lamps, types of cabinet equipment and code statistics on different communication equipment, and also need to record personnel who enter and exit the communication room. In the method provided by this embodiment, the inspection robot 10 can perform image detection by using an inspection video through a deep learning image processing algorithm, analyze various fault information in the daily inspection process of the communication equipment room, and automatically produce a daily inspection table of the communication equipment room for the operation and maintenance personnel of the equipment room to check.
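One of the alarm-lamp tasks above can be sketched as simple color thresholding over an image patch, here in pure Python via the standard `colorsys` module. The hue and brightness thresholds are illustrative assumptions, not values from the patent, and a deployed system would use the trained network instead:

```python
import colorsys

def lamp_state(patch):
    """Classify an RGB pixel patch (list of (r, g, b) values in 0..255)
    as 'red_alarm', 'green_ok', or 'off' by dominant hue.

    Illustrative sketch: thresholds are assumptions, not patent values.
    """
    red = green = lit = 0
    for r, g, b in patch:
        h, _, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        if v < 0.3:                 # too dark to be a lit lamp
            continue
        lit += 1
        if h < 0.08 or h > 0.92:    # hue near red
            red += 1
        elif 0.22 < h < 0.45:       # hue near green
            green += 1
    if lit == 0:
        return "off"
    if red > green:
        return "red_alarm"
    if green > red:
        return "green_ok"
    return "off"
```

Run over patches cropped around each known lamp position, the classifications can be collected directly into the daily inspection table described above.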
Step 103: perform virtual-real fusion of the detection information and the real image through augmented reality technology, and send the fused image to the inspection robot 10 for display, so as to assist operation and maintenance personnel in their work.
When a complex fault occurs and operation and maintenance personnel must enter the machine room to maintain and handle the equipment on site, they can carry out the corresponding operations according to the detection information displayed on the inspection robot 10.
Augmented reality auxiliary inspection fuses inspection content, fault information and real images into a single display through augmented reality technology, serving as an aid during manual inspection. The method provided in this embodiment is carried on the augmented reality auxiliary part 30 of the inspection system of embodiment 1: the image acquisition device of the inspection robot 10 acquires an image of the inspection site, the fault information or operation and maintenance guidance information serves as virtual information, and the virtual information is fused with the real image of the machine room using augmented reality virtual-real registration and virtual-real occlusion processing. Finally, the fused picture is displayed on the display part of the inspection robot 10 to assist the operation and maintenance personnel working in the inspection space. In a specific implementation, according to actual needs, the fused picture can be presented as a static image, a dynamic image or a video, and can be accompanied by corresponding alarm or guidance audio.
After steps 101 to 103 of this embodiment are completed, automatic inspection and augmented-reality-based operation and maintenance assistance are achieved, reducing the working intensity of operation and maintenance personnel and improving operation and maintenance efficiency.
In step 101, to implement autonomous navigation, the nodes in the topological space corresponding to all inspection target points are represented by an activity field, and an improved bio-inspired (biostimulation) neural network algorithm is adopted to achieve full-area coverage, completing the global trajectory planning of multi-task-point inspection. In the unstructured and changeable environment of a machine-room inspection task, the inspection robot 10 must move optimally while effectively and autonomously avoiding obstacles. To meet this requirement, the target points are represented as nodes in the topological space by the activity field, and the robot moves from its initial position to the target position according to a selection rule. Further, when the robot encounters a static or dynamic obstacle during an inspection task, or deviates from the global path, local path planning is performed again with a local path planning algorithm to replan the optimal path to the next target point. Specifically, the current position of the inspection robot 10 is taken as the starting point of the local trajectory and the nearest next inspection target point as its end point, the obstacle is added to the planning scene, and a motion trajectory that bypasses or avoids the obstacle is recalculated. Preferably, a TEB (timed elastic band) algorithm can be used for the local trajectory of the inspection robot 10, so that a local trajectory can be planned quickly and the optimal trajectory to the next inspection point obtained while avoiding the obstacle.
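The activity-field idea behind the global coverage planning described above can be sketched as follows. This is an illustrative simplification only, not the embodiment's actual bio-inspired neural network: the grid layout, the greedy selection rule, and the `coverage_path` helper are all assumptions for demonstration (in particular, the "teleport" fallback stands in for a proper local replanning step such as TEB).

```python
# Hypothetical sketch of activity-field coverage planning: unvisited inspection
# nodes carry high "activity", visited ones are suppressed, and the robot
# repeatedly moves to the most active neighbouring node until the area is covered.

def coverage_path(grid_w, grid_h, start, obstacles=frozenset()):
    """Return a visiting order that covers every free cell of a small grid."""
    activity = {(x, y): 1.0 for x in range(grid_w) for y in range(grid_h)
                if (x, y) not in obstacles}
    path, pos = [start], start
    activity[start] = 0.0                          # suppress the start node
    while any(a > 0 for a in activity.values()):
        x, y = pos
        # 4-connected neighbours that are inside the grid and not obstacles
        nbrs = [(nx, ny) for nx, ny in [(x+1, y), (x-1, y), (x, y+1), (x, y-1)]
                if (nx, ny) in activity]
        if not nbrs:
            break
        # greedy selection rule: move toward the highest remaining activity
        pos = max(nbrs, key=lambda n: activity[n])
        if activity[pos] == 0.0:
            # all neighbours already visited: jump to the nearest active cell
            # (this stands in for a local replanning step, e.g. TEB)
            active = [c for c, a in activity.items() if a > 0]
            pos = min(active, key=lambda c: abs(c[0] - x) + abs(c[1] - y))
        activity[pos] = 0.0
        path.append(pos)
    return path

route = coverage_path(3, 3, (0, 0))
covered = set(route)
```

On a 3 × 3 grid the greedy rule alone already yields a full-coverage route; a real implementation would replace the greedy step with the shunting-equation dynamics of the bio-inspired network.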
In step 102, when the fault image is simple or has obvious characteristics, the convolutional neural network model can identify objects in the real image acquired by the inspection robot 10 and judge whether the image reflects the normal operating state of the equipment; if not, the corresponding detection information is obtained. In an actual implementation scenario, the daily inspection content may include data alarm lamps, cabinet equipment, and other objects of widely different sizes, such as people entering the machine room. In step 102, the invention adopts an improved YOLO v3 network for object detection in the inspection space. YOLO v3 is a convolutional neural network for computer-vision object detection that can rapidly detect multiple classes of target objects. However, because YOLO v3 was designed mainly for everyday scenes, its detection capability for the small devices and alarm lamps found in inspection scenes is poor. In addition, its localization accuracy is limited, making it difficult to locate equipment fault warning lamps precisely.
To overcome these shortcomings of the existing YOLO v3 network, the method of this embodiment adopts a convolutional neural network model improved on the YOLO v3 backbone to identify equipment, alarm lamps and human faces in the inspection space. Specifically, YOLO (You Only Look Once) v3 serves as the backbone network, the Darknet-53 feature extractor outputs to at least two multi-scale detection branches with different network structures, and each branch uses an Efficient Channel Attention (ECA) module. The network structure is shown in fig. 5. A notable characteristic of this design is that multiple multi-scale detection branches are attached to Darknet-53 to adapt to the detection of target objects of different sizes, and each branch of the improved network passes through an ECA attention module, improving the accuracy of object localization. In a particular implementation, four multi-scale branches are used, denoted y1, y2, y3 and y4. Branch y1 is suited to detecting larger equipment objects, with output structure 13 × 13 × 255; branch y2 to medium-sized objects, with output structure 26 × 26 × 255; branch y3 to small objects, with output structure 52 × 52 × 255; and branch y4 to very small alarm-lamp objects, with output structure 104 × 104 × 255. Because each output layer contains 255 feature channels, several objects of different classes can be detected simultaneously.
In an actual implementation, the number of multi-scale detection branches and the output structure of each branch can be set according to the detection needs.
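The channel-attention step used by each branch can be illustrated as follows. This is a hedged sketch, not the trained network: the averaging kernel `w` is a placeholder for the 1-D convolution weights that ECA learns during training, and the shapes are illustrative. (For reference, the 255 channels of each output layer correspond, in the standard YOLO v3 configuration, to 3 anchors × (4 box coordinates + 1 objectness score + 80 class scores).)

```python
import numpy as np

def eca_attention(feat, k=3):
    """ECA-style channel attention on a (C, H, W) feature map.

    Squeeze by global average pooling, run a 1-D convolution of size k
    across the channel dimension, gate with a sigmoid, and reweight the
    channels. Weights here are a uniform placeholder (learned in practice).
    """
    c = feat.shape[0]
    y = feat.mean(axis=(1, 2))                    # squeeze -> (C,)
    pad = k // 2
    y_pad = np.pad(y, pad, mode="edge")           # 'same' padding across channels
    w = np.ones(k) / k                            # placeholder conv weights
    attn = np.array([y_pad[i:i + k] @ w for i in range(c)])
    attn = 1.0 / (1.0 + np.exp(-attn))            # sigmoid gate in (0, 1)
    return feat * attn[:, None, None]             # channel-wise reweighting

fmap = np.ones((8, 13, 13))                       # toy stand-in for one branch's features
out = eca_attention(fmap)
```

Because ECA convolves across channels instead of using a fully connected bottleneck, it adds only k parameters per branch, which is why it is a common choice for lightweight attention.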
Furthermore, when the fault is too complex to be judged directly by the neural network, the detection information can be obtained through an expert system or a remote expert. Using the real images or video of the site acquired by the inspection robot 10, the remote expert sends the relevant fault content or handling method to the inspection robot 10 as virtual data; this data is fused with the real scene image and displayed on the display part of the inspection robot 10, guiding the on-site operation and maintenance personnel through further detection, debugging or fault handling.
As shown in fig. 6, in step 103, virtual-real fusion of the real image and the detection information requires completing virtual-real registration, virtual-real occlusion and virtual-real rendering.
Before augmented reality virtual-real registration, the real scene of the inspection area must first be abstracted into a virtual scene represented by computable coordinate data. Specifically, a three-dimensional model of the real inspection space is constructed as the virtual scene, the pose matrix of the real camera in the real inspection space is assigned to the virtual camera, and the virtual scene shot by the virtual camera is fused with the real image obtained by the real camera, completing virtual-real registration and virtual-real occlusion processing. In a specific implementation, the mapping of the inspection robot 10 is realized by a multi-sensor method, with the particular sensors chosen for each unstructured scene. As shown in fig. 7, in a specific scene, the lidar point-cloud data and RGB-D camera image data carried by the inspection robot 10 are fused by a lidar-visual-inertial odometer (LVIO) to construct an environment map, while a semantic map of the scene is constructed from camera images of recognition targets such as two-dimensional codes and preset markers, combined with the YOLO v3 deep learning algorithm. The environment map and the semantic map are then fused to complete a high-precision machine-room map. In addition, traditional loop-closure detection is optimized by combining deep learning with semantic labels, further improving the accuracy of the scene map construction.
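The fusion of the environment map with the semantic map can be illustrated, in a heavily simplified form, as attaching detection labels to occupied cells of an occupancy grid. The `fuse_maps` helper, the dict-based grid and the labels are assumptions for demonstration, not the embodiment's actual LVIO map representation:

```python
def fuse_maps(occupancy, detections):
    """Attach semantic labels to an occupancy grid.

    occupancy:  dict mapping cell -> 0 (free) or 1 (occupied)
    detections: list of (cell, label) pairs, e.g. from an object detector
    Returns a dict cell -> (occupancy, label or None). Labels that land on
    free cells are treated as spurious and dropped.
    """
    semantic = {cell: label for cell, label in detections
                if occupancy.get(cell) == 1}
    return {cell: (occ, semantic.get(cell)) for cell, occ in occupancy.items()}

occ = {(0, 0): 0, (1, 0): 1, (2, 0): 1}
dets = [((1, 0), "cabinet"), ((0, 0), "noise")]   # detection on a free cell is dropped
fused_map = fuse_maps(occ, dets)
```

A real system would instead fuse probabilistic grids and 3-D point clouds, but the principle — geometry from the environment map, labels from the semantic map — is the same.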
During augmented reality virtual-real registration, the pose matrix of the real camera must first be determined and assigned to the virtual camera, after which the virtual scene shot by the virtual camera is fused with the real image obtained by the real camera to complete registration. Specifically, the position of the inspection robot 10 in the real inspection space is determined from the map of that space; the position coordinates of the inspection robot 10 are converted to the coordinates of the image acquisition device 14 according to the relative pose between the image acquisition device 14 and the robot; and the pose matrix of the real camera is solved from the coordinates of the image acquisition device 14. During inspection, the position of the inspection robot 10 in the inspection space can be determined through a map constructed by positioning sensors such as a 2D lidar, so the pose of the real camera carried by the robot can be obtained from the robot's position coordinates.
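The conversion from the robot's map pose to the camera pose is a composition of rigid-body transforms. The sketch below illustrates that chaining with homogeneous 4 × 4 matrices; the mounting offset and poses are illustrative values, and `se3` / `camera_pose` are assumed helper names, not part of the disclosure:

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def camera_pose(T_world_robot, T_robot_cam):
    """Chain the SLAM-estimated robot pose with the fixed robot->camera extrinsic."""
    return T_world_robot @ T_robot_cam

# robot at (2, 1, 0) in the map with identity orientation; the camera is
# mounted 0.5 m above the robot base (values are illustrative)
T_wr = se3(np.eye(3), np.array([2.0, 1.0, 0.0]))
T_rc = se3(np.eye(3), np.array([0.0, 0.0, 0.5]))
T_wc = camera_pose(T_wr, T_rc)                    # world -> camera pose matrix
```

The resulting `T_wc` is exactly the pose matrix that the embodiment assigns to the virtual camera as its extrinsic parameters.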
When performing virtual-real registration, the coordinate origin of the virtual world must be aligned with that of the real world. Specifically, the origin of the real-world coordinate system is determined by the origin used by the inspection robot 10 during mapping, and during registration the origin of the world coordinate system in the virtual model is aligned with this real-world origin. In the virtual world, the origin of the world coordinate system is O_v, determined when the virtual model is constructed manually. The origin of the real-world coordinate system is O_r, determined as the origin used when the inspection robot 10 performs Simultaneous Localization and Mapping (SLAM).
Besides the algorithm for solving the real camera pose matrix, realizing virtual-real registration also involves setting the virtual camera parameters, determining the pose of the virtual model, and fusing and rendering the virtual and real images. The virtual camera parameters comprise its extrinsic and intrinsic parameters: the extrinsic parameters are set to the pose matrix of the real camera, while the intrinsic parameters are solved through camera calibration. The intrinsic parameters of the virtual camera must be consistent with those of the real camera, so the intrinsic parameters obtained by calibrating the real camera are assigned to the virtual camera.
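The role of the shared intrinsic matrix can be sketched with the standard pinhole projection: if the virtual camera uses the same K as the calibrated real camera, virtual content projects to the same pixels the real camera would see. The focal lengths and principal point below are illustrative calibration values, not from the disclosure:

```python
import numpy as np

def project(K, T_cam_world, p_world):
    """Project a 3-D world point into pixel coordinates with intrinsics K."""
    p_h = np.append(p_world, 1.0)                 # homogeneous world point
    p_cam = (T_cam_world @ p_h)[:3]               # into the camera frame
    uvw = K @ p_cam                               # pinhole projection
    return uvw[:2] / uvw[2]                       # perspective divide

# illustrative intrinsics from calibration: fx = fy = 500, principal point (320, 240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
T = np.eye(4)                                     # camera at the origin, looking along +z
uv  = project(K, T, np.array([0.0, 0.0, 2.0]))    # point on the optical axis, 2 m away
uv2 = project(K, T, np.array([1.0, 0.0, 2.0]))    # point 1 m to the side
```

Giving the virtual camera a different K (e.g. the renderer's default field of view) would make the overlay drift off the real equipment, which is why the embodiment copies the calibrated intrinsics.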
After virtual-real registration is completed, the 3D rendering engine renders the virtual information: the detection information is converted into corresponding image content and rendered onto the real image, and the rendered result is displayed on the display part 13 of the inspection robot 10, completing the augmented reality pipeline.
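The final compositing step can be illustrated with a simple alpha blend of a virtual marker onto the camera frame; a real renderer would also apply the occlusion mask from the virtual-real occlusion stage. The `overlay_box` helper, the blank frame and the blend factor are demonstration assumptions:

```python
import numpy as np

def overlay_box(image, box, color, alpha=0.4):
    """Blend a translucent rectangle (the 'virtual' layer) onto a real image.

    image: (H, W, 3) uint8 frame; box: (x0, y0, x1, y1); color: RGB tuple.
    """
    x0, y0, x1, y1 = box
    out = image.astype(float).copy()
    region = out[y0:y1, x0:x1]
    out[y0:y1, x0:x1] = (1 - alpha) * region + alpha * np.array(color, dtype=float)
    return out.astype(np.uint8)

frame = np.zeros((480, 640, 3), dtype=np.uint8)          # stand-in for a camera frame
fused = overlay_box(frame, (100, 100, 200, 150), (255, 0, 0))  # red alarm highlight
```

Pixels inside the box become a 40 % blend of the highlight color while the rest of the frame is untouched, which is the basic behaviour the display part 13 presents to the operator.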
The method provided by this embodiment can complete daily inspection unattended through the inspection robot 10, and can also guide operation and maintenance personnel through augmented reality technology, reducing their workload and improving operation and maintenance efficiency and the accuracy of fault handling. In practical implementation, the method can be applied to inspection robots in communication rooms and in the electric power, petrochemical, security and other fields, realizing automatic daily inspection and remote expert assistance for on-site fault handling.
The above description is intended to be illustrative of the preferred embodiment of the present invention and should not be taken as limiting the invention, but rather, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.

Claims (10)

1. An inspection system based on augmented reality, characterized by comprising an inspection robot (10), a detection part (20) and an augmented reality auxiliary part (30), wherein:
the inspection robot (10) comprises a navigation component (11), a motion component (12) and a display component (13), wherein the navigation component (11) is used for planning an inspection track according to position information, an environment image and inspection content, the motion component (12) is used for inspecting according to the inspection track provided by the navigation component (11) and acquiring a real image of the site with an image acquisition device (14) carried on it, and the display component (13) is used for displaying the virtual-real fused image acquired by the augmented reality auxiliary part (30);
the detection part (20) detects the real image acquired by the inspection robot (10) using a convolutional neural network model to acquire detection information, wherein the detection information comprises fault information and operation and maintenance guidance information;
the augmented reality auxiliary part (30) is used for carrying out virtual-real fusion on the real image and the detection information and sending the fused image to the inspection robot (10) for displaying.
2. The augmented reality-based inspection system according to claim 1, wherein the inspection robot (10) further comprises a pan-tilt structure (15), specifically:
the pan-tilt structure (15) is fixed on the upper portion of the inspection robot (10) and carries the image acquisition equipment (14), so that real images of equipment at different heights can be acquired by raising and lowering the pan-tilt structure (15).
3. An inspection method based on augmented reality, applied to the inspection system according to any one of claims 1-2, comprising the following steps:
planning the inspection track in real time according to the actual scene, and acquiring a real image of the inspection target according to the planned inspection track;
detecting the real image acquired by the inspection robot (10) using a convolutional neural network model improved based on a YOLO v3 backbone network to acquire detection information;
performing virtual-real fusion of the detection information and the real image through augmented reality technology, and sending the fused image to the inspection robot (10) for display so as to assist operation and maintenance personnel in their work.
4. The augmented reality-based inspection method according to claim 3, wherein the inspection robot (10) planning the inspection track in real time according to the actual scene specifically comprises:
expressing the nodes in the topological space corresponding to all inspection target points by an activity field, and achieving full-area coverage with an improved biostimulation neural network algorithm to complete the global trajectory planning of multi-task-point inspection;
when an obstacle is found or the trajectory deviates from the global trajectory, performing local trajectory planning again with a local path planning algorithm to replan the optimal trajectory to the next target point.
5. The augmented reality-based inspection method according to claim 3, wherein detecting the real image acquired by the inspection robot (10) using the convolutional neural network model improved based on the YOLO v3 backbone network specifically comprises:
using YOLO v3 as the backbone network, outputting from Darknet-53 through at least two multi-scale detection network branches with different network structures, each multi-scale detection network branch using an attention mechanism module.
6. The augmented reality-based inspection method according to claim 3, wherein acquiring the detection information specifically comprises:
performing object identification on the real image acquired by the inspection robot (10) using the convolutional neural network model, judging whether the real image reflects the normal operating state of the equipment, and if not, acquiring the corresponding fault information and operation and maintenance guidance information;
when a direct judgment cannot be made, acquiring the detection information through an expert system and/or a remote expert.
7. The augmented reality-based inspection method according to claim 3, wherein performing virtual-real fusion of the detection information and the real image through augmented reality technology specifically comprises:
constructing a three-dimensional model of the real inspection space as a virtual scene, assigning the pose matrix of the real camera in the real inspection space to the virtual camera, and performing virtual-real fusion of the virtual scene shot by the virtual camera and the real image obtained by the real camera to complete virtual-real registration and virtual-real occlusion processing.
8. The augmented reality-based inspection method according to claim 7, wherein assigning the pose matrix of the real camera in the real inspection space to the virtual camera specifically comprises:
determining the position of the inspection robot (10) in the real inspection space through a map of that space, converting the position coordinates of the inspection robot (10) to the coordinates of the image acquisition equipment (14) according to the relative position between the image acquisition equipment (14) carried by the inspection robot (10) and the robot, and solving the pose matrix of the real camera from the coordinates of the image acquisition equipment (14).
9. The augmented reality-based inspection method according to claim 8, wherein converting the position coordinates of the inspection robot (10) to the coordinates of the image acquisition equipment (14) further comprises:
determining the origin of the real-world coordinate system according to the origin used when the inspection robot (10) performs mapping, and, when performing virtual-real registration, aligning the origin of the world coordinate system in the virtual model with the real-world coordinate origin.
10. The augmented reality-based inspection method according to claim 7, wherein before performing virtual-real fusion of the virtual scene shot by the virtual camera and the real image obtained by the real camera, the method further comprises:
setting the extrinsic parameters of the virtual camera to the pose matrix of the real camera, and assigning the intrinsic parameters obtained by calibrating the real camera as the intrinsic parameters of the virtual camera.
CN202211130799.7A 2022-09-16 2022-09-16 Inspection system and method based on augmented reality Pending CN115661966A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211130799.7A CN115661966A (en) 2022-09-16 2022-09-16 Inspection system and method based on augmented reality

Publications (1)

Publication Number Publication Date
CN115661966A true CN115661966A (en) 2023-01-31

Family

ID=84983362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211130799.7A Pending CN115661966A (en) 2022-09-16 2022-09-16 Inspection system and method based on augmented reality

Country Status (1)

Country Link
CN (1) CN115661966A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117213468A (en) * 2023-11-02 2023-12-12 北京亮亮视野科技有限公司 Method and device for inspecting outside of airplane and electronic equipment
CN117213468B (en) * 2023-11-02 2024-04-05 北京亮亮视野科技有限公司 Method and device for inspecting outside of airplane and electronic equipment

Similar Documents

Publication Publication Date Title
CN111897332B (en) Semantic intelligent substation robot humanoid inspection operation method and system
CN109977813B (en) Inspection robot target positioning method based on deep learning framework
CN111968262B (en) Semantic intelligent substation inspection operation robot navigation system and method
CN108496129B (en) Aircraft-based facility detection method and control equipment
CN111958592A (en) Image semantic analysis system and method for transformer substation inspection robot
CN112633535A (en) Photovoltaic power station intelligent inspection method and system based on unmanned aerial vehicle image
JP7337654B2 (en) Maintenance activity support system and maintenance activity support method
CN110865917A (en) AR technology-based electric power machine room inspection operation method, system and application
CN113325837A (en) Control system and method for multi-information fusion acquisition robot
CN108491758A (en) A kind of track detection method and robot
US20220362939A1 (en) Robot positioning method and apparatus, intelligent robot, and storage medium
CN106168805A (en) The method of robot autonomous walking based on cloud computing
CN103941746A (en) System and method for processing unmanned aerial vehicle polling image
CN110347153A (en) A kind of Boundary Recognition method, system and mobile robot
CN104122891A (en) Intelligent robot inspection system for city underground railway detection
CN110223413A (en) Intelligent polling method, device, computer storage medium and electronic equipment
CN115393566A (en) Fault identification and early warning method and device for power equipment, storage medium and equipment
CN102566552B (en) Road tunnel intelligent overhaul robot facing Internet of things and 3D GIS
CN115649501B (en) Unmanned aerial vehicle night lighting system and method
CN116494201A (en) Monitoring integrated power machine room inspection robot and unmanned inspection method
CN115661966A (en) Inspection system and method based on augmented reality
CN115686014A (en) Subway inspection robot based on BIM model
CN111813125A (en) Indoor environment detection system and method based on wheeled robot
CN111770450A (en) Workshop production monitoring server, mobile terminal and application
CN113084776B (en) Intelligent epidemic prevention robot and system based on vision and multi-sensor fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination