CN115272840A - Artificial intelligence robot dog inspection system based on machine vision and deep learning - Google Patents

Artificial intelligence robot dog inspection system based on machine vision and deep learning Download PDF

Info

Publication number
CN115272840A
CN115272840A (application CN202210245411.1A)
Authority
CN
China
Prior art keywords
module
robot body
path
navigation
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210245411.1A
Other languages
Chinese (zh)
Inventor
潘锋 (Pan Feng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning Jingxin Intelligent Technology Co ltd
Original Assignee
Haizhongjia Hainan Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Haizhongjia Hainan Technology Co ltd filed Critical Haizhongjia Hainan Technology Co ltd
Priority to CN202210245411.1A priority Critical patent/CN115272840A/en
Publication of CN115272840A publication Critical patent/CN115272840A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application discloses an artificial intelligence robot dog inspection system based on machine vision and deep learning, comprising a bottom layer, an intermediate communication layer, a decision layer and a cloud management platform. The bottom layer comprises a robot body, a motor driving module mounted on the robot body, a control module in control connection with the motor driving module, and a sensing module. The decision layer comprises a mapping and positioning module connected to the sensing module, and a navigation module connected to the mapping and positioning module. The intermediate communication layer comprises a communication module. After the mapping and positioning module builds a map, the navigation module plans a travel path and sends it to the control module; the control module issues control instructions to the motor driving module according to the path information, and the motor driving module executes the instructions to drive the corresponding motors so that the robot body moves along the planned path. The application replaces manual labour for inspection tasks in complex environments and achieves higher inspection efficiency.

Description

Artificial intelligence robot dog inspection system based on machine vision and deep learning
Technical Field
The application relates to the technical field of robots, and in particular to an artificial intelligence robot dog inspection system based on machine vision and deep learning.
Background
Power grid inspection is an important task. At present, most substation inspection work is performed manually, which is inefficient and suffers from untimely inspection, narrow coverage and poor result accuracy. With the progress of science and technology, inspection robots can, to a certain extent, alleviate the limited stamina, high labour cost and low efficiency of manual inspection.
However, most conventional inspection robots have a wheeled structure with poor manoeuvrability; they cannot cope with complex environments or replace manual labour for inspection and dangerous operations, and therefore cannot meet the ever-increasing requirements for inspection precision and intelligent route planning in power inspection. A new inspection system is needed to solve this problem.
Disclosure of Invention
The application aims to provide an artificial intelligence robot dog inspection system based on machine vision and deep learning, so as to meet the ever-increasing requirements of inspection tasks.
To achieve this purpose, the application provides an artificial intelligence robot dog inspection system based on machine vision and deep learning, comprising a bottom layer, an intermediate communication layer, a decision layer and a cloud management platform;
the bottom layer comprises a robot body, a motor driving module for driving the robot body to move in a joint manner, a control module for issuing a control instruction to the motor driving module, and a sensing module for sensing and acquiring environmental information, wherein the motor driving module is arranged on the robot body, and the control module is in control connection with the motor driving module;
the decision layer comprises a mapping positioning module for positioning the robot body and performing mapping according to image data detected by the robot body, and a navigation module for planning a path according to the mapping established by the mapping positioning module and the positioning of the robot body, wherein the navigation module is connected with the mapping positioning module, and the mapping positioning module is connected with the sensing module;
the middle communication layer comprises a communication module in signal connection with the control module, the mapping positioning module, the navigation module and the cloud management platform; after the map building and positioning module builds a map, the navigation module plans a traveling path and sends the traveling path to the control module, the control module issues a control instruction to the motor driving module according to the path information, and the motor driving module executes the control instruction to drive a corresponding motor to run so as to enable the robot body to move along the planned path.
Preferably, the robot body is a quadruped smart dog.
Preferably, the mapping and positioning module comprises an AMCL package positioning unit for navigation and positioning, and a mapping unit for building a map of the surrounding environment.
Preferably, the navigation module comprises a Move_Base package navigation unit for acquiring information around the robot body, generating global and local cost maps, and enabling the robot body to bypass obstacles and safely reach a designated position according to the cost maps; the path planning of the Move_Base package navigation unit comprises global planning and local planning, with the A* algorithm used for global planning and the DWA algorithm for local planning.
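The global planning step can be illustrated with a minimal A* search on an occupancy grid. This is a sketch, not the patent's implementation: the real Move_Base planner operates on layered cost maps, and the binary grid, unit step cost and 4-connectivity below are assumptions.

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).

    Illustrative sketch of a global planner; cells are (row, col) tuples.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]  # (f = g + h, g, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:          # already expanded with a better cost
            continue
        came_from[cur] = parent
        if cur == goal:               # reconstruct path by walking parents back
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if g + 1 < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = g + 1
                    heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1, (nr, nc), cur))
    return None  # goal unreachable
```

The local DWA planner would then track this global path while dodging dynamic obstacles; it is omitted here.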
Preferably, the operation method of the navigation module comprises the following steps:
s1: planning a path;
s2: obtaining a destination;
s3: planning a global path;
s4: acquiring the width of a path;
s5: replanning the global path according to the shortest path or replanning the global path according to the central path;
s6: if the destination is reached, the path is terminated to be planned continuously, otherwise, the step S4 is repeated after 2 seconds.
Preferably, the sensing module comprises a laser radar, a binocular camera, a time synchronization board, an IMU inertial element and a NUC computing platform, all mounted on the robot body. The time synchronization board is in signal connection with the laser radar and the IMU inertial element and sends them a synchronization signal so that data are acquired synchronously. The laser radar, the binocular camera and the IMU inertial element are each connected to the NUC computing platform, which detects the body posture, angular velocity and multi-directional acceleration of the robot body from their data; the NUC computing platform is in signal connection with the control module so as to transmit the computation results to the decision layer through the communication module.
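As an illustration of the time alignment the synchronization board provides, a software fallback is to match each lidar scan to the nearest IMU sample by timestamp. This helper is hypothetical and not part of the patent; on the described hardware the synchronization board triggers both sensors directly, so residual offsets are small.

```python
import bisect

def nearest_imu_sample(imu_times, lidar_time):
    """Return the IMU timestamp closest to a lidar scan timestamp.

    Hypothetical post-hoc alignment; imu_times must be sorted ascending
    (seconds). Binary search keeps this O(log n) per scan.
    """
    i = bisect.bisect_left(imu_times, lidar_time)
    # The closest sample is either the one just before or just after i.
    candidates = imu_times[max(0, i - 1):i + 1]
    return min(candidates, key=lambda t: abs(t - lidar_time))
```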
Preferably, the robot body is provided with 2 IMU inertial elements, 1 laser radar and 3 binocular cameras, and the laser radar is a high-precision 16-beam lidar.
Preferably, the artificial intelligence robot dog inspection system based on machine vision and deep learning further comprises an inspection module, which comprises an image recognition unit for recognizing the surrounding environment, an environment sensing unit for sensing the surrounding environment, a behavior recognition unit for recognizing the behavior of personnel in the inspection area, and a remote control unit for remotely controlling the actions of the robot body. The image recognition unit, the environment sensing unit, the behavior recognition unit and the remote control unit are each connected to the control module, and the remote control unit is in signal connection with the cloud management platform. A substation plan is established through the image recognition unit, the environment sensing unit and the behavior recognition unit, and an inspection route is arranged for the robot body according to the operating characteristics of the substation equipment, so that the robot body can inspect along a preset route and/or at fixed times, monitor the behavior of personnel in the inspection area with supervision warnings and driving-off of personnel, and recognize equipment parameters and pressing-plate states of the substation equipment.
Preferably, the image recognition module is a Faster R-CNN network.
Beneficial effects:
The artificial intelligence robot dog inspection system based on machine vision and deep learning builds a 3D map of a complex environment through the bottom layer and the decision layer, and provides autonomous positioning, navigation, human body recognition, following and other functions; artificial intelligence algorithms enable accurate environment perception, intelligent planning and decision-making, and good human-machine interaction; when encountering various road obstacles (such as stairs, slopes, doors, dog holes, railings, scattered batteries and bricks), the robot can position and navigate autonomously and avoid the obstacles, so it can replace manual labour for inspection tasks in complex environments;
furthermore, combined video inspection and robot inspection of the substation can be realized through the robot components and the cloud management platform, and the functions of intelligently processing the acquired images and discriminating equipment types, defect types and so on can be extended; the inspection efficiency is high, and the safe and stable operation of the substation is guaranteed.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Embodiment: an artificial intelligence robot dog inspection system based on machine vision and deep learning comprises a bottom layer, an intermediate communication layer, a decision layer and a cloud management platform.
The bottom layer comprises a robot body, a motor driving module for driving the robot body's joint motion, a control module for issuing control instructions to the motor driving module, and a sensing module for sensing and acquiring environmental information; the motor driving module is mounted on the robot body, and the control module is in control connection with the motor driving module. The robot body is a quadruped smart dog. Referring to the biological control methods and applications for legged robots published by Zhang Xiu et al. of Tsinghua University, the motion of a legged animal can be expressed by gait. Gait refers to a walking pattern in which the legs keep a fixed phase relationship while walking. At slow walking speeds a quadruped maintains a stable three-leg support, as a tortoise does; this movement pattern is the crawl (walking) gait, a four-beat gait. At faster travel speeds the trot gait (Trot) and pace gait (Pace) are used, both of which are two-beat gaits. To describe gait in mathematical language, the following parameters are defined:
(1) gait cycle T: the time taken for one complete motion cycle;
(2) step length S: the distance the supporting legs drive the body's centre of mass relative to the ground within one gait cycle;
(3) leg-lift height h: the maximum distance between the foot end and the ground within one step;
(4) phase difference φᵢ: the ratio of the i-th leg's touchdown delay, relative to the reference leg, to the gait cycle;
(5) support phase (stance): the leg contacts the ground, supports the body and pushes it forward;
(6) swing phase (swing): the leg lifts off and swings through the air;
(7) duty factor β: the proportion of the whole motion cycle during which a leg is in the support phase.
These are animal gait parameters; since the gait of the quadruped robot is specified with reference to animal gait, the parameter definitions are analogous. Further details can be found in the literature and are not expanded here.
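Under the parameter definitions above, whether each leg is in the support or swing phase at time t follows directly from T, β and the phase differences φᵢ. A minimal sketch; the leg ordering and the trot phase offsets (diagonal pairs in phase) are illustrative assumptions, not values stated in the text:

```python
def leg_in_support(t, T, beta, phase_offsets):
    """Return, per leg, whether it is in the support phase at time t.

    Uses the gait parameters defined above: period T, duty factor beta,
    and per-leg phase differences phi_i expressed as fractions of T.
    A leg is supporting while its fractional phase (t/T - phi_i) mod 1
    is below beta; otherwise it is swinging.
    """
    return [((t / T - phi) % 1.0) < beta for phi in phase_offsets]

# Assumed trot: diagonal leg pairs move together, duty factor 0.5.
# Leg order: front-left, front-right, rear-left, rear-right.
TROT = [0.0, 0.5, 0.5, 0.0]
```

At any instant of a trot exactly one diagonal pair supports the body, which is why it is a two-beat gait.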
The decision layer comprises a mapping positioning module for positioning the robot body and performing map building according to image data detected by the robot body, and a navigation module for performing path planning according to the map built by the mapping positioning module and the positioning of the robot body, wherein the navigation module is connected with the mapping positioning module, and the mapping positioning module is connected with the sensing module.
In this embodiment, the mapping and positioning module includes an AMCL package positioning unit for navigation and positioning, and a mapping unit for building a map of the surrounding environment. The robot body must constantly determine its current position during navigation; the amcl package in the Navigation stack is used for positioning. amcl is a probabilistic localization system that localizes the robot in 2D, implementing the adaptive (or KLD-sampling) Monte Carlo localization approach and tracking the robot's pose in a known map with a particle filter.
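The predict-weight-resample cycle behind particle-filter localization can be illustrated in one dimension. This toy sketch is not the amcl implementation: real AMCL estimates (x, y, θ) with an adaptively sized (KLD-sampling) particle set, and the range-to-landmark measurement model and noise value here are assumptions.

```python
import math
import random

def mcl_step(particles, motion, measurement, landmark, noise=0.2):
    """One predict-weight-resample cycle of Monte Carlo localization in 1-D.

    particles: list of scalar position hypotheses.
    motion: commanded displacement this step.
    measurement: sensed range to a landmark at known position `landmark`.
    """
    # Predict: apply the motion command with Gaussian noise.
    particles = [p + motion + random.gauss(0.0, noise) for p in particles]
    # Weight: likelihood of the range measurement under each hypothesis.
    weights = [math.exp(-((abs(landmark - p) - measurement) ** 2) / (2 * noise ** 2))
               for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample particles in proportion to their weights.
    return random.choices(particles, weights=weights, k=len(particles))
```

Repeating the cycle concentrates the particle cloud around poses consistent with the measurements, which is how the robot's pose in a known map is tracked.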
The navigation module comprises a Move_Base package navigation unit for acquiring information around the robot body, generating global and local cost maps, and enabling the robot body to bypass obstacles and safely reach a designated position according to the cost maps; the path planning of the Move_Base package navigation unit comprises global planning and local planning, with the A* algorithm used for global planning and the DWA algorithm for local planning. The working method of the navigation module comprises the following steps:
s1: planning a path;
s2: acquiring a destination;
s3: planning a global path;
s4: acquiring the width of a path;
s5: replanning the global path according to the shortest path or replanning the global path according to the central path;
s6: if the destination is reached, the path is terminated to be planned continuously, otherwise, the step S4 is repeated after 2 seconds.
The middle communication layer comprises a communication module in signal connection with the control module, the map building positioning module, the navigation module and the cloud management platform; after the map is established by the map establishing and positioning module, the navigation module plans a traveling path and sends the traveling path to the control module, the control module issues a control instruction to the motor driving module according to the path information, and the motor driving module executes the control instruction to drive the corresponding motor to operate so that the robot body moves along the planned path.
In this embodiment, the perception module comprises a laser radar, a binocular camera, a time synchronization board, an IMU inertial element and a NUC computing platform, all mounted on the robot body. The time synchronization board is in signal connection with the laser radar and the IMU inertial element and sends them a synchronization signal so that data are acquired synchronously. The laser radar, the binocular camera and the IMU inertial element are each connected to the NUC computing platform, which detects the body posture, angular velocity and multi-directional acceleration of the robot body from their data; the NUC computing platform is in signal connection with the control module so as to transmit the computation results to the decision layer through the communication module.
In this embodiment, 2 IMU inertial elements, 1 laser radar and 3 binocular cameras are mounted on the robot body; the laser radar is a high-precision 16-beam lidar.
Based on these sensors and in combination with a SLAM algorithm, the robot body in this embodiment can build a 3D map of a complex environment and provides autonomous positioning, navigation, human body recognition, following and other functions; artificial intelligence algorithms enable accurate environment perception, intelligent planning and decision-making, and good human-machine interaction; when encountering various road obstacles (such as stairs, slopes, doors, dog holes, railings, scattered batteries and bricks), the robot can position and navigate autonomously and avoid obstacles, meeting the needs of inspection tasks in complex environments.
As a preferred implementation of this embodiment, the artificial intelligence robot dog inspection system based on machine vision and deep learning further includes an inspection module, comprising an image recognition unit for recognizing the surrounding environment, an environment sensing unit for sensing the surrounding environment, a behavior recognition unit for recognizing the behavior of personnel in the inspection area, and a remote control unit for remotely controlling the actions of the robot body. These units are each connected to the control module, and the remote control unit is in signal connection with the cloud management platform. A substation plan is established through the image recognition unit, environment sensing unit and behavior recognition unit, and an inspection route is arranged for the robot body according to the operating characteristics of the substation equipment, so that the robot body can inspect along a preset route and/or at fixed times, monitor personnel behavior in the inspection area with supervision warnings and driving-off of personnel, and recognize equipment parameters and pressing-plate states of the substation equipment. A joint command platform for video inspection and intelligent robot inspection is built from the cloud management platform, the inspection module and cameras installed at the substation, so that tasks can be issued and completed cooperatively. With the substation plan established by the inspection module, inspection routes for the robot body are arranged freely according to the operating characteristics of the substation equipment; inspection can be carried out along preset routes at fixed times, and equipment inspections or scene-specific inspections can also be carried out as arranged by on-site staff.
On the other hand, a power equipment defect-type analysis technology based on image analysis and artificial intelligence is adopted: deep learning is combined with a large set of defect images to train a model, an algorithm for locating defect positions and identifying defect types in images is developed and implemented in software, and the algorithm's discrimination success rate is verified on double-blind experimental images together with the software's computational efficiency, achieving intelligent analysis of power equipment defect types.
In combination with the path planning of the navigation module, once a target location is set on the robot body's operation interface, the robot body plans a global path in advance. In practice, the control module automatically generates the path the robot body should follow according to the destination, the established scene map and the positioning result, i.e. the current position. This is usually the shortest path to the destination, generated along the centreline where the path is narrow. Normally the robot travels to the target point along this path; during operation, the control module replans the global path every 2 s to prevent the actual route from deviating significantly from the original plan.
During application, the laser radar on the robot body scans the site environment and models the scene in combination with SLAM technology; on this basis, the autonomous positioning and navigation performance of the robot is tested, environmental noise points are removed, recognizable features are artificially added at positions with few feature points, and path deviation and regression are studied and debugged during continuous testing to ensure optimal on-site inspection results. Defect images and videos acquired on site by the quadruped robot are annotated with defect positions and defect types; based on these annotations, the project team trains a defect recognition model using a machine-learning image recognition algorithm and deploys the trained model on the quadruped intelligent inspection platform. As the defect image library grows, the accuracy of the model gradually improves.
In this embodiment, the image recognition module is a Faster R-CNN network. The Faster R-CNN network has the following structure:
1. dataset, image input;
2. feature extraction: a convolutional backbone (CNN) produces feature maps;
3-1. RPN layer: a 3x3 sliding window traverses the feature map extracted by the convolutional layers, and 9 anchors are generated at the centre of each window from preset aspect ratios and scales (three ratios, e.g. 1:1, 1:2 and 2:1, at three scales);
3-2. ROI pooling fixes the input dimensionality of the fully connected layer from the convolutional feature map;
4. the RoIs output by the RPN are mapped onto the ROI-pooled feature map for bounding-box regression and classification;
SPP-Net: a typical network ends with fully connected layers whose parameters depend on the input size, because they connect to every input activation; the numbers of input and output neurons must be fixed, so the size of the input features must be fixed. For this reason, the Faster R-CNN network of this embodiment adds SPP-Net: when an image of arbitrary size is input, convolution and pooling proceed as usual until the last few layers, i.e. just before the fully connected layers, where spatial pyramid pooling converts a feature map of any size into a fixed-size feature vector;
ROI pooling: the ROI pooling layer is in effect a simplified SPP-Net. Where SPP-Net applies pyramid pooling at several sizes to each proposal, the ROI pooling layer only downsamples to a 7x7 feature map, so that weights can be shared. The conv5_3 layer of a VGG16 network has 512 feature maps, so every region proposal corresponds to a 7x7x512-dimensional feature vector as input to the fully connected layers. After all RoIs are pooled into (512 x 7 x 7) feature maps, they are flattened into one-dimensional vectors, and the first two fully connected layers can be initialized with weights pre-trained on VGG16.
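The 9-anchor generation in step 3-1 can be made concrete. The scale and ratio values below are the defaults from the original Faster R-CNN design, shown because the figures in the source text are garbled; they are assumptions, not values stated by this document.

```python
import math

def make_anchors(cx, cy, scales=(8, 16, 32), ratios=(0.5, 1.0, 2.0), stride=16):
    """Generate the 9 RPN anchors (3 scales x 3 aspect ratios) centred on one
    sliding-window position, as in step 3-1.

    scales are multiples of the feature stride; ratios are height/width.
    Returns boxes as (x1, y1, x2, y2) in image pixels.
    """
    boxes = []
    for s in scales:
        base = s * stride          # anchor side length at ratio 1:1
        area = base * base         # area is preserved across aspect ratios
        for r in ratios:
            w = math.sqrt(area / r)  # width shrinks as height/width grows
            h = w * r
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes
```

Each of the 9 anchors then receives an objectness score and box offsets from the RPN head; only the geometry is sketched here.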
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may be made to the embodiments or portions thereof without departing from the spirit and scope of the invention.

Claims (9)

1. An artificial intelligence robot dog inspection system based on machine vision and deep learning, characterized by comprising a bottom layer, an intermediate communication layer, a decision layer and a cloud management platform;
the bottom layer comprises a robot body, a motor driving module for driving the robot body to move in a joint manner, a control module for issuing a control instruction to the motor driving module, and a sensing module for sensing and acquiring environmental information, wherein the motor driving module is arranged on the robot body, and the control module is in control connection with the motor driving module;
the decision layer comprises a mapping positioning module for positioning the robot body and performing mapping according to image data detected by the robot body, and a navigation module for planning a path according to the mapping established by the mapping positioning module and the positioning of the robot body, wherein the navigation module is connected with the mapping positioning module, and the mapping positioning module is connected with the sensing module;
the middle communication layer comprises a communication module in signal connection with the control module, the mapping positioning module, the navigation module and the cloud management platform; after the map building and positioning module builds a map, the navigation module plans a traveling path and sends the traveling path to the control module, the control module issues a control instruction to the motor driving module according to path information, and the motor driving module executes the control instruction to drive a corresponding motor to run so as to enable the robot body to move along the planned path.
2. The machine vision and deep learning based artificial intelligence robot dog inspection system of claim 1, wherein the robot body is a quadruped smart dog.
3. The machine vision and deep learning based artificial intelligence robot dog inspection system of claim 1, wherein the mapping and positioning module comprises an AMCL package positioning unit for navigation and positioning, and a mapping unit for building a map of the surrounding environment.
4. The machine vision and deep learning based artificial intelligence robot dog inspection system of claim 1, wherein the navigation module comprises a Move_Base package navigation unit for acquiring information around the robot body, generating global and local cost maps, and enabling the robot body to bypass obstacles and safely reach a designated position according to the cost maps; the path planning of the Move_Base package navigation unit comprises global planning and local planning, with the A* algorithm used for global planning and the DWA algorithm for local planning.
5. The machine vision and deep learning based artificial intelligence robot dog inspection system of claim 4, wherein the working method of the navigation module comprises the following steps:
s1: planning a path;
s2: acquiring a destination;
s3: planning a global path;
s4: acquiring the width of a path;
s5: replanning the global path according to the shortest path or replanning the global path according to the central path;
s6: and if the destination is reached, terminating the path planning, otherwise repeating the step S4 after 2 seconds.
6. The machine vision and deep learning based artificial intelligence robot dog inspection system of claim 1, wherein the perception module comprises a laser radar, a binocular camera, a time synchronization board, an IMU inertial element and a NUC computing platform, all mounted on the robot body; the time synchronization board is in signal connection with the laser radar and the IMU inertial element and sends them a synchronization signal so that data are acquired synchronously; the laser radar, the binocular camera and the IMU inertial element are each connected to the NUC computing platform, which detects the body posture, angular velocity and multi-directional acceleration of the robot body from their data, and the NUC computing platform is in signal connection with the control module so as to transmit the computation results to the decision layer through the communication module.
7. The machine vision and deep learning based artificial intelligence mechanical dog inspection system of claim 6, wherein 2 IMU inertial elements, 1 laser radar and 3 binocular cameras are mounted on the robot body, the laser radar being a high-precision 16-line lidar.
8. The system according to claim 1, further comprising an inspection module, wherein the inspection module comprises an image recognition unit for recognizing the surrounding environment, an environment sensing unit for sensing the surrounding environment, a behavior recognition unit for recognizing the behavior of personnel in the inspection area, and a remote control unit for remotely controlling the actions of the robot body; the image recognition unit, the environment sensing unit, the behavior recognition unit and the remote control unit are each connected with the control module, and the remote control unit is in signal connection with the cloud management platform; a substation plan is established through the image recognition unit, the environment sensing unit and the behavior recognition unit, and an inspection route is arranged for the robot body according to the operating characteristics of the substation equipment, so that the robot body monitors and warns on the behavior of personnel in the inspection area, warns personnel away from the substation equipment, and inspects the status indication parameters and pressure plate parameters of the substation equipment.
9. The machine vision and deep learning based artificial intelligence mechanical dog inspection system of claim 7, wherein the image recognition module is a Faster R-CNN network.
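Claim 9 specifies a Faster R-CNN network but gives no implementation details. One standard component of Faster R-CNN post-processing is non-maximum suppression (NMS) over the predicted bounding boxes; the plain-Python sketch below is illustrative of that step only, not of the patent's network:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box, drop
    any remaining box overlapping it by more than `iou_threshold`, repeat.
    Returns indices of the kept boxes, highest score first."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep
```

In practice a detection library's batched NMS would be used; the hand-rolled version above only shows why near-duplicate detections of the same substation component collapse to a single box.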
CN202210245411.1A 2022-03-14 2022-03-14 Machine vision and deep learning based artificial intelligence mechanical dog inspection system Pending CN115272840A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210245411.1A CN115272840A (en) 2022-03-14 2022-03-14 Machine vision and deep learning based artificial intelligence mechanical dog inspection system


Publications (1)

Publication Number Publication Date
CN115272840A true CN115272840A (en) 2022-11-01

Family

ID=83758555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210245411.1A Pending CN115272840A (en) 2022-03-14 2022-03-14 Machine vision and deep learning based artificial intelligence mechanical dog inspection system

Country Status (1)

Country Link
CN (1) CN115272840A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230323

Address after: 110000 room 247-13681, floor 2, No. 109-1 (No. 109-1), quanyun Road, Shenyang area, China (Liaoning) pilot Free Trade Zone, Shenyang, Liaoning

Applicant after: Liaoning Jingxin Intelligent Technology Co.,Ltd.

Address before: 570100 A370, Heima Industrial Park, 15/F, Qiaohui Building, No. 21, Yilong West Road, Datong Street, Longhua District, Haikou, Hainan

Applicant before: Haizhongjia (Hainan) Technology Co.,Ltd.