CN112131936A - Inspection robot image identification method and inspection robot - Google Patents


Info

Publication number
CN112131936A
CN112131936A
Authority
CN
China
Prior art keywords
neural network
network model
feature map
visible light
image
Prior art date
Legal status
Granted
Application number
CN202010812180.9A
Other languages
Chinese (zh)
Other versions
CN112131936B (English)
Inventor
张继勇
刘鑫
庄浩
连理国
王世杰
Current Assignee
Huarui Xinzhi Baoding Technology Co ltd
Huarui Xinzhi Technology Beijing Co ltd
Original Assignee
Huarui Xinzhi Technology Beijing Co ltd
Priority date
Filing date
Publication date
Application filed by Huarui Xinzhi Technology Beijing Co ltd
Priority to CN202010812180.9A
Publication of CN112131936A
Application granted
Publication of CN112131936B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J5/00Radiation pyrometry, e.g. infrared or optical thermometry
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J5/00Radiation pyrometry, e.g. infrared or optical thermometry
    • G01J5/0096Radiation pyrometry, e.g. infrared or optical thermometry for measuring wires, electrical contacts or electronic systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01KMEASURING TEMPERATURE; MEASURING QUANTITY OF HEAT; THERMALLY-SENSITIVE ELEMENTS NOT OTHERWISE PROVIDED FOR
    • G01K13/00Thermometers specially adapted for specific purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C1/00Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
    • G07C1/20Checking timed patrols, e.g. of watchman
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J5/00Radiation pyrometry, e.g. infrared or optical thermometry
    • G01J2005/0077Imaging
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications


Abstract

The application discloses an inspection robot image identification method and an inspection robot. The method comprises: collecting visible light images and infrared light images of a plurality of devices in a transformer substation; identifying the corresponding devices from the visible light image with a pre-trained deep target recognition neural network model, determining the position coordinates of each identified device in the visible light image, and determining whether the visible light image includes a device image of a first hidden danger category; and, when the visible light image does not include a first hidden danger category device image, identifying the infrared light image corresponding to the visible light image based on the device position coordinates to determine whether the infrared light image includes a device image of a second hidden danger category. By detecting the visible light images of the devices first, and only then recognizing, locating, and measuring temperature in the infrared light images, the method improves both the efficiency and the accuracy of inspection.

Description

Inspection robot image identification method and inspection robot
Technical Field
The application relates to the technical field of inspection equipment, in particular to an inspection robot image identification method and an inspection robot.
Background
With the development of inspection robots, sites such as machine rooms, base stations, and transformer substations in the communications and power fields need to be inspected so that hidden equipment hazards can be found in time; the inspection robot has become an indispensable means of keeping equipment running safely. Because the equipment must operate for long periods, various failures may occur, abnormal heat generation being the chief one. The prevailing inspection practice is detection with hand-held infrared devices, which consumes considerable manpower and material resources; moreover, traditional image analysis cannot accurately locate hidden danger equipment in an image, so equipment anomaly information cannot be acquired effectively.
Disclosure of Invention
The embodiments of the present application provide an inspection robot image identification method and an inspection robot, addressing the low accuracy with which an inspection robot identifies the target areas of multiple hidden danger devices in infrared images.
In one aspect, an embodiment of the present application provides an inspection robot image identification method: collecting visible light images and infrared light images of a plurality of devices in a transformer substation, where the visible light images and the infrared light images correspond one-to-one and each includes images of several devices in the substation; identifying the corresponding devices from the visible light image with a pre-trained deep target recognition neural network model, determining the position coordinates of each identified device in the visible light image, and determining whether the visible light image includes a first hidden danger category device image; and, when the visible light image does not include a first hidden danger category device image, identifying the infrared light image corresponding to the visible light image based on the device position coordinates to determine whether the infrared light image includes a second hidden danger category device image.
The inspection robot in the embodiments of the present application first identifies and locates the several devices in the visible light image and determines any first hidden danger category device image, improving inspection efficiency. In addition, when no first hidden danger category device image is detected in the visible light image, and with the device categories and position coordinates in the visible light image already known, the inspection robot identifies those devices in the corresponding infrared image to obtain any second hidden danger category device image in the substation. The embodiments thereby achieve accurate temperature measurement of multiple equipment areas in the infrared image, obtain the recognition result quickly and accurately, and improve inspection accuracy.
In one example, the server trains the initial neural network model according to a training data set to obtain a deep target recognition neural network model; the direct connection layer of the initial neural network model comprises a convolution block attention module, and the convolution block attention module comprises a channel attention module and a space attention module.
In the embodiments of the present application, adding the convolution block attention module to the direct connection layer of the initial neural network model passes shallow information through to the deep layers, suppresses information degradation, and strengthens the multi-level information of the combined output. This copes effectively with the large size differences among the various target devices in a substation environment and improves the detection precision of the deep target recognition neural network model.
In one example, an original input feature map is obtained from the direct connection layer of the initial neural network model; a channel attention feature map is obtained from the original input feature map; the channel attention feature map and the original input feature map are each feature-weighted, and their weighted features are combined to obtain the feature map of the channel attention module; the feature map of the channel attention module is taken as the input feature map of the spatial attention module to obtain a spatial attention feature map; the channel attention module feature map and the spatial attention feature map are each feature-weighted, and their weighted features are combined to obtain the feature map of the spatial attention module; the spatial attention module feature map and the original input feature map are each feature-weighted, and their weighted features are combined to obtain the target recognition neural network model of the basic training; and the deep target recognition neural network model is obtained from the basic-training target recognition neural network model.
In one example, global maximum pooling is performed on the original input feature map to obtain its global maximum pooling feature, and global average pooling is performed to obtain its global average pooling feature; the two pooled features are input into convolutional layers and activated with an activation function to obtain the channel attention feature map. Channel maximum pooling is performed on the feature map of the channel attention module to obtain its channel maximum pooling feature, and channel average pooling is performed to obtain its channel average pooling feature; the two are spliced to obtain the splicing feature of the channel maximum pooling feature and the channel average pooling feature, which is input into a convolutional layer and activated with an activation function to obtain the spatial attention feature map.
In one example, deriving a deep target recognition neural network model based on the base-trained target recognition neural network model comprises: and pruning the basic training target recognition neural network model to obtain a deep target recognition neural network model.
In the embodiments of the present application, pruning and compressing the basic-training target recognition neural network model reduces the parameter count and computation of the basic-training model without affecting its recognition accuracy, improves the recognition efficiency of the deep target recognition neural network, and makes the deep target recognition neural network model better suited to the inspection robot.
In one example, pruning the basic trained target recognition neural network model to obtain a deep target recognition neural network model specifically includes: carrying out sparse training on the basic training target recognition neural network model to obtain a sparse deep target recognition neural network model; channel pruning is carried out on the sparse deep target recognition neural network model; and fine-tuning the pruned deep target recognition neural network model to obtain the deep target recognition neural network model.
In the present application, sparsity training of the basic-training target neural network improves the running efficiency of the deep target recognition neural network model. In addition, channel pruning of the sparse deep target recognition neural network deletes unimportant channels, and fine-tuning the pruned deep target neural network model improves its detection performance. The model size is reduced and its detection efficiency improved without degrading its accuracy.
In one example, the server increases the sparsity training factor of the basic-training target recognition neural network model and determines its sparsity rate; reduces its learning rate and determines the learning rate to use; determines the total iteration batch value according to the complexity and precision of the basic-training target recognition neural network model; obtains the sparsely trained deep target recognition neural network model according to the total iteration batch value; obtains the weight of each convolutional layer channel of the sparsely trained deep target recognition neural network model according to a global threshold, where the global threshold is a certain percentage of the respective scaling factor values in the sparsely trained deep target recognition neural network model data set; and performs channel pruning on the sparsely trained deep target recognition neural network model according to those channel weights.
In the present application, increasing the sparsity factor of the basic-training target recognition neural network model makes sparsification fast, while reducing its learning rate helps its accuracy recover. The low-weight channels of each convolutional layer are found according to the global threshold and pruned, preserving the feature-learning capability and detection precision of the deep target recognition neural network model.
In one example, before the server trains the initial neural network model according to the training data set, the method includes: the server receives a plurality of visible light images and a plurality of infrared light images from the infrared thermal imager; labeling the equipment images in the plurality of visible light images and the plurality of infrared light images; performing data enhancement on the marked visible light image and the marked infrared light image to obtain an enhanced data set; filtering the enhanced data set by using a multi-scale Gaussian kernel function to obtain a filtered data set; comparing the filtered data set with the enhanced data set to obtain the detail information of the enhanced data set; the detail information is different parts of the enhanced data set and the filtered data set; weighting the detail information to obtain a weighted data set; the weighted data set is fused with the enhanced data set.
In the present application, data enhancement of the labeled visible light images and labeled infrared light images improves the feature learning of the deep target recognition neural network and hence its detection accuracy. In addition, multi-scale fusion processing of the enhanced data set resolves the unclear textures of the devices in the visible light and infrared light images collected by the infrared thermal imager, improving the image detail of the devices in both kinds of images.
In one example, the first hazard category is a hazard category associated with an exterior surface of the equipment, and the second hazard category is a hazard category associated with a temperature of the equipment.
According to the embodiment of the application, the appearance hidden danger of the plurality of devices of the transformer substation is determined by identifying the visible light images, the temperature abnormity hidden danger of the plurality of devices of the transformer substation is determined by identifying the infrared images, and the inspection efficiency and the inspection accuracy of the inspection robot are improved.
In one example, acquiring the visible light images and infrared light images of the several devices specifically includes: collecting a video stream from the infrared thermal imager; demuxing the video stream to obtain the bare video stream; and extracting frames from the bare stream to obtain key frames, where each key frame includes a visible light image and an infrared light image.
In another aspect, an embodiment of the present application provides an inspection robot, comprising: an infrared thermal imager for acquiring visible light images and infrared light images of a plurality of devices in a transformer substation, where the visible light images and the infrared light images correspond one-to-one and each includes images of several devices in the substation; and a mobile computing board that identifies the corresponding devices from the visible light image according to a pre-trained deep target recognition neural network model, determines the position coordinates of each identified device in the visible light image, and determines whether the visible light image includes a first hidden danger category device image; when the visible light image does not include a first hidden danger category device image, the infrared light image corresponding to the visible light image is identified based on the device position coordinates.
With the inspection robot image identification method and inspection robot of the present application, the mobile computing board works with the infrared thermal imager to first identify and locate the several devices in the visible light image and determine any first hidden danger category device image, improving inspection efficiency. In addition, when no first hidden danger category device image is detected in the visible light image, and with the device categories and position coordinates in the visible light image known, the inspection robot identifies those devices in the corresponding infrared image to obtain any second hidden danger category device image in the substation. The embodiments thereby achieve accurate temperature measurement of multiple equipment areas in the infrared image, obtain the recognition result quickly and accurately, and improve inspection accuracy.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a block diagram of a structure of an inspection robot according to an embodiment of the present disclosure;
fig. 2 is a flowchart of an image recognition method for an inspection robot according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of a method for training a deep target neural network model according to an embodiment of the present disclosure;
fig. 4 is a flowchart of a training method of a basic training target neural network model according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a block diagram of a structure of an inspection robot according to an embodiment of the present disclosure.
As shown in fig. 1, the inspection robot 100 includes at least: an infrared thermal imager 110 and a mobile computing board 120. The infrared thermal imager 110 and the mobile computing board 120 are adapted to each other through an interface, so that the mobile computing board 120 can obtain the visible light image and the infrared light image acquired by the infrared thermal imager 110.
It should be noted that the infrared thermal imager 110 in the embodiments of the present application can obtain visible light images and infrared light images of the several devices in the substation simultaneously; a FLIR camera, for example, may serve as the infrared thermal imager 110.
In an embodiment of the application, after the inspection robot 100 is powered on, it photographs the several devices in the substation through its mounted infrared thermal imager 110 along a preset inspection path, obtaining video streams of the devices' visible light images and infrared light images. Using a video codec tool invoked on the command line through pipeline interaction from a programming language, the mobile computing board 120 completes stream fetching with hardware decoding and obtains the bare video stream of the visible light images and the bare video stream of the infrared light images.
For example, the mobile computing board 120 calls the FFmpeg command line through the video codec FFmpeg (Fast Forward MPEG) and a subprocess pipeline in Python, and performs hardware decoding over the Real-Time Streaming Protocol (RTSP) to obtain the bare video stream of the visible light images and the bare video stream of the infrared light images.
The mobile computing board 120 extracts frames from the bare video stream at a preset picture resolution and a preset frame-extraction interval to obtain key frames, so that each key frame contains visible light image information and infrared light image information. The visible light images and infrared light images correspond one-to-one: the infrared thermal imager 110 captures the visible light images of the devices and the corresponding infrared light images simultaneously, each including images of the several devices in the substation, and the position coordinates of each device image in the visible light image correspond one-to-one with its position coordinates in the infrared light image.
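As a concrete illustration, a minimal Python sketch of this decode-and-frame-grab pipeline is given below; the RTSP URL, picture resolution, and frame-extraction interval are placeholder assumptions, not values given in the patent:
```python
import subprocess
import numpy as np

RTSP_URL = "rtsp://192.168.1.64/stream"  # hypothetical camera endpoint
WIDTH, HEIGHT = 1280, 720                # preset picture resolution (assumed)
FRAME_INTERVAL = 25                      # preset frame-extraction interval (assumed)

# Call the FFmpeg command line through a subprocess pipe; FFmpeg decodes the
# RTSP stream and writes raw, container-free frames to stdout.
cmd = [
    "ffmpeg", "-rtsp_transport", "tcp", "-i", RTSP_URL,
    "-f", "rawvideo", "-pix_fmt", "bgr24",
    "-s", f"{WIDTH}x{HEIGHT}", "-",
]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

frame_bytes = WIDTH * HEIGHT * 3
index = 0
while True:
    raw = proc.stdout.read(frame_bytes)
    if len(raw) < frame_bytes:
        break
    if index % FRAME_INTERVAL == 0:  # keep one key frame per interval
        frame = np.frombuffer(raw, np.uint8).reshape(HEIGHT, WIDTH, 3)
        # hand `frame` to the deep target recognition neural network model here
    index += 1
```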
The mobile computing board 120 first identifies and locates the devices in the visible light image according to the pre-trained deep target recognition neural network model, obtaining the classes and position coordinates of the several devices in the visible light image, and determines whether the visible light image includes a first hidden danger category device image. In one example, the first hidden danger category concerns the outer surfaces of the several devices in the substation.
When the visible light image does not include a first hidden danger category device image, the deep target recognition neural network model locates the several devices in the infrared light image corresponding to the visible light image to obtain their position coordinates in the infrared light image.
According to the classes and position coordinates of the devices in the visible light image, the temperature of the corresponding position areas in the infrared light image is measured, and whether the infrared light image includes a second hidden danger category device image is determined. In one example, the second hidden danger category concerns the temperature of the devices, so whether the temperature of any device in the substation is abnormal can be identified from the infrared light image.
For example, when the inspection robot 100 inspects the several devices in the substation in dim light or at night, the infrared light image has low contrast and the details of the power equipment in it are unclear. The inspection robot 100 therefore first identifies and locates the devices in the visible light image, finding equipment with hidden surface dangers in time. If no hidden dangers are found on the outer surfaces of the devices in the visible light image, the relevant areas of the corresponding infrared light image are temperature-measured according to the position coordinates and device classes from the visible light image, yielding the temperatures of the device target areas in the infrared image and thus finding equipment with abnormal temperatures in the substation in time.
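A sketch of this coordinate-transfer-and-temperature-measurement step follows; the detection tuple format, the temperature map as a per-pixel array, and the 80 °C alarm threshold are illustrative assumptions rather than values from the patent:
```python
import numpy as np

HIGH_TEMP_THRESHOLD = 80.0  # degrees Celsius; the actual threshold is an assumption

def find_overheated_devices(detections, temperature_map):
    """detections: list of (device_class, (x1, y1, x2, y2)) boxes found in the
    visible light image; temperature_map: per-pixel temperatures of the
    corresponding infrared light image. Because visible and infrared
    coordinates correspond one-to-one, the same box indexes both images."""
    overheated = []
    for device_class, (x1, y1, x2, y2) in detections:
        region = temperature_map[y1:y2, x1:x2]  # IR area matching the visible box
        if region.size > 0 and float(region.max()) > HIGH_TEMP_THRESHOLD:
            overheated.append((device_class, float(region.max())))
    return overheated
```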
In the embodiments of the present application, the deep target recognition neural network model is compressed and then run on the inspection robot 100, processing pictures and transmitting data back in real time so that a user can handle alarms promptly. The infrared thermal imager 110 measures the temperature of target-class areas, realizing multi-target temperature measurement and yielding the required result quickly and accurately. Detecting the visible light image in the key frame first, and only then recognizing, locating, and temperature-measuring in the infrared light image, greatly improves the recognition rate of the inspection robot 100.
Based on the inspection robot provided by the embodiment of the application, the embodiment of the application also provides an image identification method of the inspection robot, which is suitable for the inspection robot in the figure 1 and is described in detail through the figure 2.
Fig. 2 is a flowchart of an image identification method for an inspection robot according to an embodiment of the present disclosure.
S201, the inspection robot 100 collects visible light images and infrared light images of a plurality of devices in the transformer substation.
In one example, the inspection robot 100 simultaneously captures visible light images and infrared light images corresponding to the visible light images of a plurality of devices in the substation through the infrared thermal imager 110, and obtains video streams of the visible light images and video streams of the infrared light images corresponding to the visible light images of the plurality of devices in the substation. In addition, the inspection robot 100 performs image fetching on the video stream of the visible light image and the video stream of the infrared light image respectively through the mobile computing board 120 to obtain a video stream bare stream of the visible light image and a video stream bare stream of the infrared light image. And extracting frames from the video stream bare stream of the visible light image and the video stream bare stream of the infrared light image to obtain a key frame, wherein the key frame comprises visible light image information and infrared light image information.
S202, the inspection robot 100 identifies the corresponding device images from the visible light image according to the pre-trained deep target recognition neural network model, determines the position coordinates of each identified device in the visible light image, and determines whether the visible light image includes a first hidden danger category device image.
How to obtain the pre-trained deep target neural network model will be described in detail with reference to fig. 3.
Fig. 3 is a flowchart of a training method for a deep target neural network model according to an embodiment of the present disclosure.
As shown in fig. 3, in S301, a server obtains visible light images and infrared light images of a plurality of devices in a substation to obtain a training data set.
In the embodiments of the present application, the server collects data from the infrared thermal imager 110, which captures the several devices in the substation from different angles, obtaining visible light images and infrared light images of the devices simultaneously. It should be noted that the devices include both normal equipment and hidden danger equipment, and the visible light images correspond one-to-one with the infrared light images; that is, each substation scene is shot in both visible light and infrared light.
The server labels the equipment images in the visible light images and performs data enhancement on the labeled visible light images to obtain an enhanced visible light data set; likewise, it labels the equipment images in the infrared light images and performs data enhancement on the labeled infrared light images to obtain an enhanced infrared light data set.
For example, each device appearing in the visible light and infrared light images is framed with a rectangular box and labeled with its device category. The labeled visible light and infrared light images are scaled by a certain ratio, then all images are rotated at different angles and horizontally flipped, finally enhancing the data set and thereby improving the detection performance of the deep target neural network model.
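A minimal sketch of this augmentation (scaling, multi-angle rotation, horizontal flipping) is shown below; the scale ratio and rotation angles are assumed values, and the matching transform of the bounding-box labels is omitted:
```python
import cv2

def augment(image, scale=0.8, angles=(90, 180, 270)):
    """Return augmented copies of one labeled image: a scaled copy,
    rotated copies at several angles, and a horizontally flipped copy."""
    h, w = image.shape[:2]
    variants = [cv2.resize(image, (int(w * scale), int(h * scale)))]  # scaling
    for angle in angles:  # rotations at different angles
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        variants.append(cv2.warpAffine(image, M, (w, h)))
    variants.append(cv2.flip(image, 1))  # horizontal flip
    return variants
```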
It should be noted that the collected data set is labeled first with the device category and then with the hidden danger type; equipment without hidden dangers is labeled with its device category only.
From the enhanced visible light data set and the enhanced infrared light data set, the server finally obtains the enhanced data set, which it randomly divides into a training data set and a test data set at a certain ratio, for example training set : test set = 9 : 1.
S302, the server preprocesses the training data set.
In the embodiments of the present application, the server applies multi-scale Gaussian filtering to the training data set to obtain a filtered data set, then compares the filtered data set with the enhanced data set to obtain the detail information of the enhanced data set. It should be noted that the detail information is the part where the enhanced data set and the filtered data set differ.
The server multiplies the detail information by different weights to obtain a weighted data set, then fuses the weighted data set into the enhanced data set to obtain a fused data set. In addition, the server applies an image defogging algorithm, such as an OpenCV image processing algorithm, to the fused data set.
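A sketch of this multi-scale Gaussian filtering and weighted detail fusion follows; the sigma values and detail weights are assumptions, since the patent does not specify them:
```python
import cv2
import numpy as np

def multiscale_detail_fusion(image, sigmas=(1.0, 2.0, 4.0), weights=(0.5, 0.5, 0.25)):
    """Filter at several Gaussian scales, take the difference from the original
    as the detail information, weight it, and fuse it back into the image."""
    img = image.astype(np.float32)
    fused = img.copy()
    for sigma, w in zip(sigmas, weights):
        filtered = cv2.GaussianBlur(img, (0, 0), sigma)  # multi-scale Gaussian kernel
        detail = img - filtered                          # the differing parts
        fused += w * detail                              # weighted fusion
    return np.clip(fused, 0, 255).astype(np.uint8)
```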
The multi-scale fusion algorithm integrated into the deep target recognition neural network model effectively resolves the unclear textures of equipment in the infrared light images and improves image detail. The OpenCV-based image defogging algorithm counters the drop in recognition rate in haze weather and strengthens the adaptability of the deep target recognition neural network model to real environments.
S303, the server trains the initial neural network model to obtain a basic training target recognition neural network model.
In one example, the initial neural network model is the YOLOv3-SPP neural network model; the embodiments of the present application are not limited to the YOLOv3-SPP network model.
In one embodiment of the present application, a Convolutional Block Attention Module (CBAM) is added after the shortcut (direct connection) layer of the YOLOv3-SPP neural network model; the CBAM comprises a channel attention module and a spatial attention module. How the initial neural network model is trained to obtain the basic-training target recognition neural network model is described in detail with reference to fig. 4.
Fig. 4 is a flowchart of a training method of a basic training target neural network model according to an embodiment of the present disclosure.
S401, the server obtains an original input feature map from a shortcut layer of the YOLOv3-SPP neural network model.
S402, the server obtains a channel attention feature map from the original input feature map.
In the embodiment of the application, the server performs global maximum pooling on the original input feature map to obtain global maximum pooling features of the original input feature map. And the server performs global average pooling on the original input feature map to obtain global average pooling features of the original input feature map.
And the server inputs the global maximum pooling characteristic of the original input characteristic graph and the global average pooling characteristic of the original input characteristic graph into the convolutional layer, and activates the convolutional layer by using an activation function to obtain a channel attention characteristic graph.
In one example, the server inputs the global maximum pooling feature of the original input feature map and the global average pooling feature of the original input feature map into two 3 × 3 convolutional layers, and the activation function is a Sigmoid function.
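A sketch of this channel attention module is given below in PyTorch; the framework choice, the channel reduction ratio, and keeping the pooled descriptors as 1×1 maps so that the two 3×3 convolutions (with padding) act as a shared transform are assumptions layered on the text above:
```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):  # reduction ratio is assumed
        super().__init__()
        # Two 3x3 convolutional layers shared by both pooled descriptors.
        self.transform = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 3, padding=1),
        )
        self.sigmoid = nn.Sigmoid()  # Sigmoid activation, as in the text

    def forward(self, x):
        max_feat = torch.amax(x, dim=(2, 3), keepdim=True)  # global max pooling
        avg_feat = torch.mean(x, dim=(2, 3), keepdim=True)  # global average pooling
        # Channel attention feature map, shape (N, C, 1, 1)
        return self.sigmoid(self.transform(max_feat) + self.transform(avg_feat))
```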
S403, the server performs feature weighting on the channel attention feature map, performs feature weighting on the original input feature map, and combines the weighted features of the channel attention feature map and the weighted features of the original input feature map to obtain a feature map of the channel attention module.
It should be noted that when the server feature-weights the channel attention feature map and feature-weights the original input feature map, the two weightings may use the same or different weights.
S404, the server obtains a spatial attention feature map according to the feature map of the channel attention module.
The server takes the feature map of the channel attention module as the input feature map of the spatial attention module.
In the embodiment of the application, the server performs channel maximum pooling on the feature map of the channel attention module to obtain the channel maximum pooled feature of the feature map of the channel attention module. And the server performs channel average pooling on the feature map of the channel attention module to obtain channel average pooling features of the feature map of the channel attention module. And the server splices the channel maximum pooling characteristics of the characteristic diagram of the channel attention module and the channel average pooling characteristics of the characteristic diagram of the channel attention module to obtain splicing characteristics of the channel maximum pooling characteristics and the channel average pooling characteristics.
And the server inputs the splicing characteristics of the maximum pooling characteristics and the average pooling characteristics of the channels into the convolutional layer and activates the splicing characteristics by using an activation function to obtain a space attention characteristic diagram.
In one example, the server inputs the concatenation feature of the maximum pooling feature of the channel and the average pooling feature of the channel into the 1 × 1 convolutional layer, and the activation function is a Sigmoid function.
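The spatial attention branch can be sketched in the same hedged style; the channel-wise max/average pooling, splicing, 1×1 convolution, and Sigmoid activation follow the text and example above, while the module boundary is an interpretation:
```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=1)  # 1x1 convolutional layer
        self.sigmoid = nn.Sigmoid()                 # Sigmoid activation

    def forward(self, x):
        max_map, _ = torch.max(x, dim=1, keepdim=True)  # channel max pooling
        avg_map = torch.mean(x, dim=1, keepdim=True)    # channel average pooling
        spliced = torch.cat([max_map, avg_map], dim=1)  # splice the two features
        # Spatial attention feature map, shape (N, 1, H, W)
        return self.sigmoid(self.conv(spliced))
```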
S405, the server performs feature weighting on the channel attention module feature map; and carrying out feature weighting on the spatial attention feature map, and combining the weighted features of the spatial attention feature map and the weighted features of the channel attention module feature map to obtain the feature map of the spatial attention module.
It should be noted that when the server feature-weights the channel attention module feature map and feature-weights the spatial attention feature map, the two weightings may use the same or different weights.
S406, the server performs feature weighting on the space attention module feature map; and carrying out feature weighting on the original input feature map, and combining the weighted features of the feature map of the spatial attention module and the weighted features of the original input feature map to obtain a target recognition neural network model for basic training.
It should be noted that when the server feature-weights the feature map of the spatial attention module and feature-weights the original input feature map, the two weightings may use the same or different weights.
The following proceeds to describe the training method of the deep target neural network model of fig. 3.
S304, pruning the basic training target recognition neural network model by the server to obtain a deep target recognition neural network model.
In the embodiment of the application, the server firstly conducts sparse training on the target recognition neural network model of the basic training, then conducts channel pruning on the target recognition neural network model of the sparse training, and finally conducts fine tuning on the pruned target recognition neural network model to obtain the deep target recognition neural network model.
Specifically, the server increases the sparsification factor of the basic-training target recognition neural network model, reduces the learning rate, and determines the total iteration batch value (epochs). The total epochs value is chosen by weighing the feature complexity of the data set of the basic-training target recognition neural network model against the precision and training time required. In one example, the sparsification factor is increased with a sparsity rate of 0.005, the learning rate is reduced to 0.001, and the total iteration batch value is 300.
In the embodiments of the present application, the basic-training target recognition neural network model is sparsified. Sparsity training is a trade-off between precision and sparsity: a large sparsity factor sparsifies quickly but loses precision quickly, and a small learning rate in the later stage helps precision recover. By increasing the sparsification factor of the basic-training target recognition neural network model while reducing the learning rate, storage and computation can be reduced effectively, achieving acceleration while maintaining precision.
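One common realization of such sparsity training (network slimming) adds an L1 subgradient on the BatchNorm scaling factors after each backward pass; treating the BatchNorm scale gamma as the per-channel scaling factor is an assumption here, and the 0.005 rate echoes the example above:
```python
import torch
import torch.nn as nn

def add_sparsity_subgradient(model: nn.Module, sparsity_rate: float = 0.005):
    """Push BatchNorm scaling factors toward zero by adding the L1 subgradient
    s * sign(gamma) to their gradients; call this between loss.backward()
    and optimizer.step()."""
    for module in model.modules():
        if isinstance(module, nn.BatchNorm2d) and module.weight.grad is not None:
            module.weight.grad.add_(sparsity_rate * torch.sign(module.weight.data))
```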
In one embodiment of the present application, the pruning rate of the sparsely trained target recognition neural network model is controlled by a global threshold: the low-weight channels in each convolutional layer of the sparsely trained model are found according to the global threshold, and those channels are pruned.
In another embodiment of the present application, a scaling factor is introduced for each channel to weigh the importance of the network channel, the closer the scaling factor is to zero, the less important the corresponding channel is to the network. Therefore, in the embodiment of the application, channels with scaling factors close to zero are deleted, and meanwhile, the input and output connected with the channels and the related weights are correspondingly removed, so that the pruned target recognition neural network model is obtained.
It should be noted that the global threshold is a percentage of the respective scaling factor value in the sparse training model dataset.
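A sketch of selecting the global threshold and the channels to prune follows, again under the assumption that the BatchNorm scale gamma plays the role of the per-channel scaling factor; the percentage is a free parameter:
```python
import torch
import torch.nn as nn

def channel_prune_masks(model: nn.Module, percent: float = 0.5):
    """Collect the absolute scaling factors of every channel, take the value at
    the given percentile as the global threshold, and keep only channels whose
    factor exceeds it. Returns a per-layer boolean keep-mask."""
    gammas = torch.cat([m.weight.data.abs().flatten()
                        for m in model.modules()
                        if isinstance(m, nn.BatchNorm2d)])
    global_threshold = torch.quantile(gammas, percent)
    masks = {}
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            masks[name] = m.weight.data.abs() > global_threshold  # channels to keep
    return masks
```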
In one embodiment of the present application, the server fine-tunes the pruned target recognition neural network model, for example with finetune. Retraining the pruned network model in this way improves its detection accuracy.
In an embodiment of the application, after the target recognition neural network model is fine-tuned, the detection performance of the deep target recognition neural network model is evaluated; if it does not meet the requirement, sparsity training, channel pruning, and fine-tuning are repeated in a loop. Once the performance of the deep target recognition neural network model meets the requirement, S305 is executed.
S305, the inspection robot 100 tests the deep target recognition neural network model.
In the embodiment of the present application, the inspection robot 100 performs a deployment test on the deep target recognition neural network model.
The deep target recognition neural network model is deployed on the mobile computing board 120, which obtains the video streams of the several devices in the substation shot by the infrared thermal imager 110 carried on the inspection robot 100 and extracts frames from them to obtain key frame images. Each key frame thus contains visible light image information as well as infrared light image information. The visible light images and infrared light images correspond one-to-one: the infrared thermal imager 110 captures the visible light images of the devices and the corresponding infrared light images simultaneously, each including images of the several devices in the substation, and the position coordinates of each device image in the visible light image correspond one-to-one with its position coordinates in the infrared light image.
In addition, the deep target recognition neural network model detects the visible light image first and only then the infrared light image. For how the deep target recognition neural network model recognizes the several devices, refer to S202-S203 in fig. 2 and the related description, which is not repeated here. When the error rate of the deep target recognition neural network model is below a preset threshold, the inspection robot 100 is put into daily operation.
The following continues with the other steps in fig. 2. As described above, step S202 in fig. 2 includes determining whether the visible light image includes a first hidden danger category device image.
S203, when the visible light image does not include a first hidden danger category device image, the inspection robot 100 identifies the infrared light image corresponding to the visible light image, based on the position coordinates of the devices, to determine whether the infrared light image includes a second hidden danger category device image. In one example, the second hidden danger category relates to the temperature of the devices.
In the embodiments of the present application, when the visible light image contains no first hidden danger category image, the returned result is None. The deep target recognition neural network model then locates the several devices in the infrared light image corresponding to the visible light image to obtain their position coordinates in the infrared light image.
The temperature acquisition module of the infrared thermal imager 110 acquires the category and the position coordinates of a plurality of devices in the visible light image from the mobile computing board 120, and the temperature acquisition module reads the temperature value of the relevant area corresponding to the position coordinates in the infrared light image according to the coordinate points of the plurality of devices in the visible light image.
In the present application, the position coordinates and classes of the devices in the visible light image are detected first, so that equipment with hidden surface dangers is found in time. When the visible light image includes no equipment with appearance hidden dangers, the corresponding relevant areas of the infrared light image are temperature-measured according to the position coordinates and classes of the devices in the visible light image, achieving multi-target-class temperature measurement, finding hidden danger equipment with abnormal temperatures, and improving recognition accuracy.
In addition, in the embodiments of the present application, whether the deep target recognition neural network model needs updating is judged from the error rate of the inspection robot 100 in recognizing hidden danger equipment. When that error rate exceeds a preset threshold, the current deep target recognition neural network model is updated, improving the accuracy with which the inspection robot identifies hidden dangers among the several devices in the substation.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. An inspection robot image recognition method is characterized by comprising the following steps:
collecting visible light images and infrared light images of a plurality of devices in a transformer substation; the visible light images correspond to the infrared light images one by one, and the visible light images and the infrared light images respectively comprise images of a plurality of devices in the transformer substation;
identifying corresponding devices from the visible light image according to a pre-trained deep target recognition neural network model, determining position coordinates of each identified device in the visible light image, and determining whether the visible light image comprises a first hidden danger category device image;
when the visible light image does not include the first hidden danger category device image, identifying an infrared light image corresponding to the visible light image based on the position coordinates of the devices to determine whether the infrared light image includes a second hidden danger category device image.
2. The inspection robot image recognition method according to claim 1, further comprising:
the server trains an initial neural network model according to a training data set to obtain the deep target recognition neural network model; wherein a direct connection layer of the initial neural network model comprises a convolution block attention module, and the convolution block attention module comprises a channel attention module and a space attention module.
3. The inspection robot image recognition method according to claim 2, further comprising:
obtaining an original input characteristic diagram based on the initial neural network model direct connection layer;
obtaining the channel attention feature map according to the original input feature map;
carrying out feature weighting on the channel attention feature map, carrying out feature weighting on the original input feature map, and combining the weighted features of the channel attention feature map and the weighted features of the original input feature map to obtain a feature map of the channel attention module;
taking the feature map of the channel attention module as an input feature map of the spatial attention module to obtain the spatial attention feature map;
feature weighting the channel attention module feature map; performing feature weighting on the spatial attention feature map, and combining the weighted features of the spatial attention feature map and the weighted features of the channel attention module feature map to obtain a feature map of the spatial attention module;
feature weighting the spatial attention module feature map; carrying out feature weighting on the original input feature map, and combining the weighted features of the feature map of the spatial attention module and the weighted features of the original input feature map to obtain a target recognition neural network model of basic training;
and obtaining the deep target recognition neural network model based on the target recognition neural network model of the basic training.
4. The inspection robot image recognition method according to claim 3, further comprising:
performing global maximum pooling on the original input feature map to obtain global maximum pooling features of the original input feature map; performing global average pooling on the original input feature map to obtain global average pooling features of the original input feature map; inputting the global maximum pooling feature of the original input feature map and the global average pooling feature of the original input feature map into a convolutional layer, and activating by using an activation function to obtain the channel attention feature map;
performing channel maximum pooling on the feature map of the channel attention module to obtain channel maximum pooling features of the feature map of the channel attention module; performing channel average pooling on the feature map of the channel attention module to obtain channel average pooling features of the feature map of the channel attention module; splicing the channel maximum pooling feature of the feature map of the channel attention module and the channel average pooling feature of the feature map of the channel attention module to obtain a splicing feature of the channel maximum pooling feature and the channel average pooling feature; inputting the splicing characteristics of the channel maximum pooling characteristics and the channel average pooling characteristics into a convolutional layer, and activating by using an activation function to obtain the spatial attention characteristic diagram.
5. The inspection robot image recognition method according to claim 3, wherein obtaining the deep target recognition neural network model from the base-trained target recognition neural network model comprises:
pruning the base-trained target recognition neural network model to obtain the deep target recognition neural network model.
6. The inspection robot image recognition method according to claim 5, wherein pruning the base-trained target recognition neural network model to obtain the deep target recognition neural network model specifically comprises:
performing sparse training on the base-trained target recognition neural network model to obtain a sparsely trained deep target recognition neural network model;
performing channel pruning on the sparsely trained deep target recognition neural network model;
and fine-tuning the pruned deep target recognition neural network model to obtain the deep target recognition neural network model.
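The sparse-training step of claim 6 is commonly implemented in the style of network slimming: an L1 penalty on the BatchNorm scaling factors drives unimportant channels toward zero before pruning. A hedged sketch under that assumption, with an illustrative sparsity coefficient lam:

```python
import torch.nn as nn

def add_bn_sparsity_grad(model, lam=1e-4):
    """Add the subgradient of lam * |gamma|_1 to every BatchNorm scaling
    factor; call after loss.backward() and before optimizer.step()."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.weight.grad.add_(lam * m.weight.data.sign())
```

Driving the scaling factors toward zero during this phase is what makes the channel pruning of claims 6 and 7 nearly lossless before fine-tuning.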
7. The inspection robot image recognition method according to claim 6, wherein pruning the base-trained target recognition neural network model further comprises:
increasing the sparse-training factor of the base-trained target recognition neural network model and determining its sparsity rate;
reducing the learning rate of the base-trained target recognition neural network model and determining its learning rate;
determining a total number of iteration batches according to the complexity and accuracy of the base-trained target recognition neural network model;
obtaining the sparsely trained deep target recognition neural network model after the total number of iteration batches;
obtaining the weight of each convolutional-layer channel of the sparsely trained deep target recognition neural network model according to a global threshold; wherein the global threshold is a preset percentage of the scaling factor values of the sparsely trained deep target recognition neural network model;
and performing channel pruning on the sparsely trained deep target recognition neural network model according to the weights of its convolutional-layer channels.
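The global threshold of claim 7 can be read as a percentile over all channel scaling factors of the sparsely trained model; channels whose factors fall below it are pruned. A sketch under that assumption, with an illustrative prune ratio of 0.7:

```python
import torch
import torch.nn as nn

def channel_keep_masks(model, prune_ratio=0.7):
    """Return, per BatchNorm layer, a boolean mask of channels whose
    scaling factors exceed the global threshold (a percentile over all
    scaling factors in the sparsely trained model)."""
    gammas = torch.cat([m.weight.data.abs().flatten()
                        for m in model.modules()
                        if isinstance(m, nn.BatchNorm2d)])
    threshold = torch.quantile(gammas, prune_ratio)  # global threshold
    return {name: m.weight.data.abs() > threshold
            for name, m in model.named_modules()
            if isinstance(m, nn.BatchNorm2d)}
```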
8. The inspection robot image recognition method according to claim 2, wherein before the server trains the initial neural network model on the training data set, the method comprises:
receiving, by the server, a plurality of visible light images and a plurality of infrared light images from an infrared thermal imager;
labeling the equipment images in the plurality of visible light images and the plurality of infrared light images, and performing data augmentation on the labeled visible light images and infrared light images to obtain an enhanced data set;
filtering the enhanced data set with multi-scale Gaussian kernel functions to obtain a filtered data set, and comparing the filtered data set with the enhanced data set to obtain the detail information of the enhanced data set; the detail information being the parts in which the enhanced data set and the filtered data set differ;
weighting the detail information to obtain a weighted data set;
and fusing the weighted data set with the enhanced data set.
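The data-preparation steps of claim 8 (multi-scale Gaussian filtering, extraction of detail information as the difference between the enhanced and filtered images, weighting, and fusion) can be sketched with OpenCV and NumPy. The specific sigmas and detail weights below are illustrative assumptions, not values from the patent:

```python
import cv2
import numpy as np

def detail_enhance(img, sigmas=(1.0, 2.0, 4.0), weights=(0.5, 0.5, 0.25)):
    """Multi-scale Gaussian detail enhancement: filter, take the
    difference as detail information, weight it, and fuse it back."""
    img = img.astype(np.float32)
    fused = img.copy()
    for sigma, w in zip(sigmas, weights):
        blurred = cv2.GaussianBlur(img, (0, 0), sigma)  # multi-scale filtering
        detail = img - blurred                          # detail information
        fused += w * detail                             # weighted fusion
    return np.clip(fused, 0, 255).astype(np.uint8)
```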
9. The inspection robot image recognition method according to claim 1, wherein the first hidden danger category is a hidden danger category related to the outer surface of the equipment, and the second hidden danger category is a hidden danger category related to the temperature of the equipment.
10. An inspection robot, comprising:
an infrared thermal imager configured to acquire visible light images and infrared light images of a plurality of devices in a transformer substation; wherein the visible light images correspond one-to-one with the infrared light images, and the visible light images and the infrared light images each comprise images of the plurality of devices in the substation;
a mobile computing board configured to identify the corresponding equipment in the visible light image according to a pre-trained deep target recognition neural network model, determine the position coordinates of the identified equipment in the visible light image, and determine whether the visible light image comprises a first hidden danger category equipment image;
and, when the visible light image does not comprise a first hidden danger category equipment image, identify the infrared light image corresponding to the visible light image based on the position coordinates of the equipment.
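The two-stage flow of claim 10, where infrared analysis reuses the coordinates of equipment detected in the registered visible-light image, might look as follows. The detector interface and the HAZARD_CLASSES names are hypothetical placeholders, not identifiers from the patent:

```python
HAZARD_CLASSES = {"broken_insulator", "oil_leak"}  # assumed class names

def inspect(visible_img, infrared_img, detector):
    """Two-stage check: visible-light detection first; if no first
    hidden danger category is seen, read out the infrared image at the
    same coordinates (the two images are registered one-to-one)."""
    detections = detector(visible_img)  # [(class_name, (x1, y1, x2, y2)), ...]
    if any(cls in HAZARD_CLASSES for cls, _ in detections):
        return "first hidden danger category found in visible image"
    # Reuse visible-image box coordinates to index the infrared image.
    return [infrared_img[y1:y2, x1:x2]
            for _, (x1, y1, x2, y2) in detections]
```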
CN202010812180.9A 2020-08-13 2020-08-13 Inspection robot image recognition method and inspection robot Active CN112131936B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010812180.9A CN112131936B (en) 2020-08-13 2020-08-13 Inspection robot image recognition method and inspection robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010812180.9A CN112131936B (en) 2020-08-13 2020-08-13 Inspection robot image recognition method and inspection robot

Publications (2)

Publication Number Publication Date
CN112131936A true CN112131936A (en) 2020-12-25
CN112131936B CN112131936B (en) 2023-07-21

Family

ID=73851622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010812180.9A Active CN112131936B (en) 2020-08-13 2020-08-13 Inspection robot image recognition method and inspection robot

Country Status (1)

Country Link
CN (1) CN112131936B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0432680A1 (en) * 1989-12-11 1991-06-19 Fujitsu Limited Monitoring system employing infrared image
CN105511495A (en) * 2016-02-15 2016-04-20 国家电网公司 Control method and system for intelligent unmanned aerial vehicle patrol for power line
CN109596224A * 2018-12-24 2019-04-09 国网山西省电力公司检修分公司 Failure analysis method, apparatus, device and storage medium for wire connection points
CN110009530A * 2019-04-16 2019-07-12 国网山西省电力公司电力科学研究院 Neural network system and method suitable for portable power inspection
CN110567964A * 2019-07-19 2019-12-13 华瑞新智科技(北京)有限公司 Method and device for detecting defects of power transformation equipment, and storage medium
CN111275759A (en) * 2020-01-16 2020-06-12 国网江苏省电力有限公司 Transformer substation disconnecting link temperature detection method based on unmanned aerial vehicle double-light image fusion
CN111428629A (en) * 2020-03-23 2020-07-17 深圳供电局有限公司 Substation operation monitoring method, state determination method and unmanned aerial vehicle inspection system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI XIANG; CUI HAOYANG; PI KAIYUN; SHU JIANG; LI XIN; XU YONGPENG; SHENG GE: "Design of an STM32-based substation inspection robot ***", Modern Electronics Technique, no. 17 *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112966552A (en) * 2021-01-29 2021-06-15 山东信通电子股份有限公司 Routine inspection method and system based on intelligent identification
CN112864947A (en) * 2021-03-02 2021-05-28 山东鲁能软件技术有限公司智能电气分公司 Visual monitoring system and method
CN113014773A (en) * 2021-03-02 2021-06-22 山东鲁能软件技术有限公司智能电气分公司 Overhead line video visual monitoring system and method
CN113030638A (en) * 2021-03-02 2021-06-25 山东鲁能软件技术有限公司智能电气分公司 Overhead line image visual monitoring system and method
CN112864947B (en) * 2021-03-02 2022-07-05 山东鲁软数字科技有限公司智慧能源分公司 Visual monitoring system and method
CN113030638B (en) * 2021-03-02 2022-06-14 山东鲁软数字科技有限公司智慧能源分公司 Overhead line image visual monitoring system and method
CN112767433A (en) * 2021-03-15 2021-05-07 北京玄马知能科技有限公司 Automatic deviation rectifying, segmenting and identifying method for image of inspection robot
CN113449767B (en) * 2021-04-29 2022-05-17 国网浙江省电力有限公司嘉兴供电公司 Multi-image fusion transformer substation equipment abnormity identification and positioning method
CN113449767A (en) * 2021-04-29 2021-09-28 国网浙江省电力有限公司嘉兴供电公司 Multi-image fusion transformer substation equipment abnormity identification and positioning method
CN113191336A (en) * 2021-06-04 2021-07-30 绍兴建元电力集团有限公司 Electric power hidden danger identification method and system based on image identification
CN114166358A (en) * 2021-11-19 2022-03-11 北京图菱视频科技有限公司 Robot inspection system, method, equipment and storage medium for epidemic prevention inspection
CN114166358B (en) * 2021-11-19 2024-04-16 苏州超行星创业投资有限公司 Robot inspection method, system, equipment and storage medium for epidemic prevention inspection
WO2024108901A1 (en) * 2022-11-21 2024-05-30 深圳供电局有限公司 Power apparatus region detection method and system based on multispectral image
CN116228774A (en) * 2023-05-10 2023-06-06 国网山东省电力公司菏泽供电公司 Substation inspection image defect identification method and system based on image quality evaluation
CN116228774B (en) * 2023-05-10 2023-09-08 国网山东省电力公司菏泽供电公司 Substation inspection image defect identification method and system based on image quality evaluation
CN117726958A (en) * 2024-02-07 2024-03-19 国网湖北省电力有限公司 Intelligent detection and hidden danger identification method for inspection image target of unmanned aerial vehicle of distribution line
CN117726958B (en) * 2024-02-07 2024-05-10 国网湖北省电力有限公司 Intelligent detection and hidden danger identification method for inspection image target of unmanned aerial vehicle of distribution line

Also Published As

Publication number Publication date
CN112131936B (en) 2023-07-21

Similar Documents

Publication Publication Date Title
CN112131936A (en) Inspection robot image identification method and inspection robot
CN109977813B (en) Inspection robot target positioning method based on deep learning framework
CN111080693A (en) Robot autonomous classification grabbing method based on YOLOv3
CN109544501B (en) Transmission equipment defect detection method based on unmanned aerial vehicle multi-source image feature matching
CN110659397B (en) Behavior detection method and device, electronic equipment and storage medium
CN111814850A (en) Defect detection model training method, defect detection method and related device
WO2021190321A1 (en) Image processing method and device
CN110781836A (en) Human body recognition method and device, computer equipment and storage medium
CN109767422A (en) Pipe detection recognition methods, storage medium and robot based on deep learning
CN111179249A (en) Power equipment detection method and device based on deep convolutional neural network
CN115294117B (en) Defect detection method and related device for LED lamp beads
CN111401418A (en) Employee dressing specification detection method based on improved Faster r-cnn
CN109389105B (en) Multitask-based iris detection and visual angle classification method
CN112528974B (en) Distance measuring method and device, electronic equipment and readable storage medium
CN116092199B (en) Employee working state identification method and identification system
CN113642474A (en) Hazardous area personnel monitoring method based on YOLOV5
WO2024051067A1 (en) Infrared image processing method, apparatus, and device, and storage medium
CN112052730B (en) 3D dynamic portrait identification monitoring equipment and method
CN112801061A (en) Posture recognition method and system
CN111476314B (en) Fuzzy video detection method integrating optical flow algorithm and deep learning
TW202219494A (en) A defect detection method and a defect detection device
CN113780145A (en) Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium
CN116580330A (en) Machine test abnormal behavior detection method based on double-flow network
CN111093140A (en) Method, device, equipment and storage medium for detecting defects of microphone and earphone dust screen
CN115767424A (en) Video positioning method based on RSS and CSI fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211012

Address after: 3/F, Xindongyuan North Building, 3501 Chengfu Road, Haidian District, Beijing 100083

Applicant after: HUARUI XINZHI TECHNOLOGY (BEIJING) Co.,Ltd.

Applicant after: Huarui Xinzhi Baoding Technology Co.,Ltd.

Address before: Room 91818, 9/F, Building 683, Zone 2, No. 5 Zhongguancun South Street, Haidian District, Beijing 100083

Applicant before: HUARUI XINZHI TECHNOLOGY (BEIJING) Co.,Ltd.

GR01 Patent grant