CN112381797A - Infrared data-based line object information confirmation method - Google Patents


Info

Publication number
CN112381797A
CN112381797A (Application CN202011280729.0A)
Authority
CN
China
Prior art keywords
image
images
infrared
information confirmation
confirmation method
Prior art date
Legal status
Pending
Application number
CN202011280729.0A
Other languages
Chinese (zh)
Inventor
原瀚杰
陈泽佳
郑耀华
邓浩光
李焕能
李俊宏
陆勇生
Current Assignee
Zhaoqing Power Supply Bureau of Guangdong Power Grid Co Ltd
Original Assignee
Zhaoqing Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhaoqing Power Supply Bureau of Guangdong Power Grid Co Ltd filed Critical Zhaoqing Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority to CN202011280729.0A
Publication of CN112381797A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 7/0006 Industrial image inspection using a design-rule based approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for confirming line object information based on infrared data, comprising the following steps: acquiring original infrared video images, preprocessing them and ordering them to obtain infrared sequence images, and stitching every two consecutive adjacent frames of the sequence; calculating the overlapping and non-overlapping regions of the two frames to obtain fused images, and superimposing the fused images to generate an integral image; extracting features of the integral image with a color image registration algorithm based on an improved SURF operator; the regions whose characteristic parameters match across adjacent images are the regions where the line objects are located. The method extracts the object in its entirety, achieves a low error rate in the acquired information, and provides high-quality image information for subsequent defect identification and diagnosis of the line objects.

Description

Infrared data-based line object information confirmation method
Technical Field
The invention belongs to the technical field of wire maintenance, and particularly relates to a method for confirming information of a linear object based on infrared data.
Background
Infrared images captured from an intelligent unmanned aerial vehicle (UAV) platform reflect the thermal radiation of both the target to be confirmed and the noisy background. Compared with visible-light images, infrared images exhibit low contrast between target and background and blurred edge contours, so conventional edge extraction struggles to extract the contour information of the target effectively.
Specifically, because the flight attitude of the UAV platform is unstable and the inspection angles and terrain of power corridors are complex, the infrared video acquired by the platform contains a large amount of noise and complex background information. It is therefore difficult to automatically identify line information from the captured infrared video, and harder still to automatically diagnose abnormal heating of the lines. In addition, electronic thermal noise and random channel-transmission errors introduce salt-and-pepper noise, impulse noise and other noise types into the infrared images, so the resulting infrared images of high-voltage line paths suffer from blurred edges, poor contrast and heavy background-noise pollution in the spatial domain. For example, Chinese patent publication No. CN108230237A, published 2018.6.29, describes a multispectral image reconstruction method for on-line detection of electrical equipment, in which visible-light, infrared and ultraviolet images are jointly reconstructed; however, the infrared images it obtains still have blurred edges, poor contrast and heavy background noise, and are prone to errors, resulting in low accuracy.
Optical flow and image differencing have also been used to detect targets in infrared video. However, the optical flow method incurs a large confirmation-time overhead and has poor real-time performance and practicality. The image difference method is comparatively simple and easy to run in real time, but target detection based on inter-frame differences has two drawbacks: it is not applicable when the target and background are static while the detector is moving, i.e. the complete region of the object cannot be extracted; and the detection result depends on the chosen inter-frame time interval.
Disclosure of Invention
The invention provides a method for confirming line object information based on infrared data, aiming to solve the technical problems that existing acquisition methods cannot extract the object in its entirety and that the error rate of the acquired information is high.
To this end, the technical scheme adopted by the application is as follows. A method for confirming line information based on infrared data is provided, comprising:
acquiring an original infrared video image of a specified segment domain of the linear object, and preprocessing the original infrared video image;
sequencing the preprocessed original infrared video images according to a time sequence to obtain infrared sequence images, and splicing any two continuous adjacent frame infrared sequence images through a weighted smoothing algorithm;
calculating an overlapping region and a non-overlapping region of any two continuous adjacent frames of infrared sequence images to obtain a fused image, and superposing the fused images to generate an integral image according to the calculated fused image;
extracting the features of the integral image based on a color image registration algorithm of an improved SURF operator;
and judging whether the characteristic parameters of the adjacent images are the same or not, wherein the areas corresponding to the same characteristic parameters are the areas where the linear objects are located.
Preferably, the process of extracting features of the integral image with the color image registration algorithm based on the improved SURF operator comprises:
detecting feature points in a scale space constructed from a Gaussian pyramid, and extracting the feature points via the Hessian matrix;
generating feature descriptors and, to ensure the scale and rotation invariance of the SURF features, determining a main direction for each extracted feature point;
after the feature points are extracted, performing feature matching according to a similarity measure between them, the Euclidean distance between feature points being selected as the similarity measure.
Preferably, calculating the overlapping and non-overlapping regions of any two consecutive adjacent frames of infrared sequence images to obtain a fused image comprises:
for the overlapping region, weighting its pixels according to the weights of the region in the two adjacent frames and summing them to form the new pixel values of the overlapping region; for the non-overlapping regions, leaving the pixel values unchanged;
mapping the pixels of the overlapping and non-overlapping regions into a new image and assigning each pixel its value, completing the fusion of the images.
Preferably, extracting the feature points via the Hessian matrix comprises:
determining the positions of the feature points in the integral image using the maxima of the determinant of the Hessian matrix.
Preferably, generating the feature descriptors so as to ensure the scale and rotation invariance of the SURF features comprises:
extracting the scale- and rotation-invariant features of the preprocessed adjacent consecutive images using speeded-up robust feature descriptors.
Preferably, determining the main direction for an extracted feature point comprises:
performing a Haar wavelet response operation within a circular region of set radius at the feature scale, the direction in which the sum of the Haar wavelet responses is largest being taken as the main direction of the feature point.
Preferably, the Haar wavelet response operation is performed as follows:
Haar wavelet responses in the x and y directions are calculated taking the feature point as the centre and 4s as the radius, the Haar wavelet template being of size 2s × 2s, where s is the scale at which the feature point lies.
Preferably, the original infrared video images are subjected to image enhancement preprocessing and image denoising preprocessing.
Preferably, the image enhancement and denoising preprocessing of the original infrared video images proceeds as follows:
compressing the original infrared video images from a high dynamic range to a low dynamic range;
processing the low-dynamic-range infrared video images with an optimized CLAHE algorithm and outputting enhanced low-dynamic-range infrared images.
Preferably, a foreground point is determined and a parallelogram window parallel to the direction of the line objects is placed at the foreground point, the size of the parallelogram being determined by the length of the line objects in the image and the spacing between adjacent line objects;
the parallelogram window is moved to both sides along the main direction, and the foreground-information filling degree of the pixels falling within the overlap of the image and the window is calculated; the information is retained if the filling degree exceeds a predetermined threshold.
The invention has the beneficial effects that:
The invention provides a method for confirming line object information based on infrared data. Infrared sequence images ordered by time are stitched, fused and integrated, and the features of the resulting integral image are extracted by a color image registration algorithm based on an improved SURF operator. The similarity of the characteristic parameters of adjacent images is judged using the Euclidean distance between selected feature points as the similarity measure, yielding the position information of the line object in the image. A series of preprocessing and further processing steps attenuate the random noise mixed into the images, while the comparison of characteristic parameters yields the target data, providing high-quality image information for subsequent defect identification and diagnosis of the line objects.
Drawings
Fig. 1 is a flow chart illustrating a method for confirming thread information according to the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects to be solved by the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that when an element is referred to as being "secured to" or "disposed on" another element, it can be directly on the other element or be indirectly on the other element. When an element is referred to as being "connected to" another element, it can be directly connected to the other element or be indirectly connected to the other element.
It will be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like, as used herein, refer to an orientation or positional relationship indicated in the drawings that is solely for the purpose of facilitating the description and simplifying the description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be considered as limiting the present application.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Example 1
A method for confirming the information of the thread based on the infrared data will now be described.
Referring to fig. 1, the method for confirming information of a linear object based on infrared data includes:
S1: acquiring original infrared video images of a specified segment of the line object, and preprocessing the original infrared video images;
S2: ordering the preprocessed original infrared video images by time to obtain infrared sequence images, and stitching every two consecutive adjacent frames of the sequence with a weighted smoothing algorithm;
S3: calculating the overlapping and non-overlapping regions of any two consecutive adjacent frames to obtain fused images, and superimposing the fused images to generate an integral image;
S4: extracting features of the integral image with a color image registration algorithm based on an improved SURF operator;
S5: judging whether the characteristic parameters of adjacent images are the same; the regions corresponding to identical characteristic parameters are the regions where the line objects are located.
The extraction of the features of the integral image in step S4 comprises:
detecting feature points in a scale space constructed from a Gaussian pyramid, and extracting the feature points via the Hessian matrix;
generating feature descriptors and, to ensure the scale and rotation invariance of the SURF features, determining a main direction for each extracted feature point;
after the feature points are extracted, performing feature matching according to a similarity measure between them, the Euclidean distance between feature points being selected as the similarity measure.
In this embodiment, calculating the overlapping and non-overlapping regions of any two consecutive adjacent frames of infrared sequence images to obtain a fused image comprises:
for the overlapping region, weighting its pixels according to the weights of the region in the two adjacent frames and summing them to form the new pixel values of the overlapping region; for the non-overlapping regions, leaving the pixel values unchanged;
mapping the pixels of the overlapping and non-overlapping regions into a new image and assigning each pixel its value, completing the fusion of the images.
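As an illustration of this fusion rule, the sketch below blends the overlapping columns of two co-registered frames with linearly varying weights (pure NumPy; the function name and the linear weight ramp are assumptions of this sketch, since the patent does not fix the exact weighting scheme):

```python
import numpy as np

def fuse_overlap(img1, img2, x0, x1):
    """Blend two co-registered frames whose columns [x0, x1) overlap.

    Within the overlap, the weight of img1 falls linearly from 1 at the
    left edge to 0 at the right edge, so the seam fades smoothly from
    one frame into the other; pixels outside the overlap are copied
    unchanged from the frame they belong to.
    """
    fused = img1.astype(np.float64).copy()
    width = x1 - x0
    w1 = (x1 - np.arange(x0, x1)) / width   # weight of img1 per overlap column
    fused[:, x0:x1] = img1[:, x0:x1] * w1 + img2[:, x0:x1] * (1.0 - w1)
    fused[:, x1:] = img2[:, x1:]            # right of the overlap: img2 only
    return fused
```

With constant frames of value 10 and 20 and overlap columns 2..3, the seam column weights are 1.0 and 0.5, so the fused values ramp from 10 through 15 to 20.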
In this embodiment, extracting the feature points via the Hessian matrix comprises: determining the positions of the feature points in the integral image using the maxima of the determinant of the Hessian matrix.
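The determinant-of-Hessian response underlying this step can be sketched as follows (a single-scale approximation using finite differences in place of SURF's box filters on an integral image; the 0.9 correction factor follows the standard SURF formulation rather than anything specified in the patent):

```python
import numpy as np

def hessian_response(img):
    """Single-scale determinant-of-Hessian response map.

    SURF proper approximates the second derivatives with box filters
    evaluated on an integral image; plain finite differences stand in
    for them here. The 0.9 factor on the mixed term mirrors SURF's
    compensation for the box-filter approximation.
    """
    img = img.astype(np.float64)
    dy, dx = np.gradient(img)       # first derivatives along rows / columns
    dyy, _ = np.gradient(dy)
    dxy, dxx = np.gradient(dx)
    return dxx * dyy - (0.9 * dxy) ** 2

def keypoints(img, threshold):
    # Candidate feature points: responses above a threshold (full SURF
    # additionally requires a local maximum across space and scale).
    resp = hessian_response(img)
    ys, xs = np.where(resp > threshold)
    return list(zip(ys.tolist(), xs.tolist()))
```

A bright isolated spot produces a strong positive determinant at its centre, which is why blob-like structures are detected.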
In this embodiment, to ensure the scale and rotation invariance of the SURF features, generating the feature descriptors comprises: extracting the scale- and rotation-invariant features of the preprocessed adjacent consecutive images using speeded-up robust feature descriptors.
In this embodiment, determining the main direction for an extracted feature point comprises: performing a Haar wavelet response operation within a circular region of set radius at the feature scale, the direction in which the sum of the Haar wavelet responses is largest being taken as the main direction of the feature point.
Specifically, the Haar wavelet response operation is performed as follows:
Haar wavelet responses in the x and y directions are calculated taking the feature point as the centre and 4s as the radius, the Haar wavelet template being of size 2s × 2s, where s is the scale at which the feature point lies.
Specifically, this embodiment adopts a color image registration algorithm based on an improved SURF operator. SURF features are scale- and rotation-invariant, and the algorithm follows essentially the same steps as SIFT while running 3 to 5 times faster.
The SURF algorithm uses integral images and box filters and comprises three main parts: feature point extraction, feature descriptor generation and feature matching.
Feature points are detected in a scale space constructed from a Gaussian pyramid and extracted via the Hessian matrix.
To ensure rotation invariance of the SURF features, a main direction is determined for each extracted feature point when generating the feature descriptors. Specifically, Haar wavelet responses in the x and y directions are calculated within a circle centred on the feature point with radius 4s (s being the scale at which the feature point lies), using a Haar wavelet template of size 2s × 2s. The Haar responses are Gaussian-weighted so that responses closer to the feature point contribute more. A sector of π/3 is then swept around the feature point, and the Haar wavelet responses falling within each sector position are accumulated into a new vector; the main direction is the direction in which the accumulated vector is longest. After the main direction is determined, a 12s × 12s square window is constructed around the feature point and divided into 2 × 2 sub-regions with Gaussian weighting coefficients. For each sub-region, the sums of the horizontal Haar responses dx, the vertical Haar responses dy and their absolute values are computed over the sampling points, giving a four-dimensional vector (Σdx, Σdy, Σ|dx|, Σ|dy|) per sub-region; the descriptor of the feature point is therefore the (2 × 2) × 4 = 16-dimensional vector Vs = (v1, v2, …, v16).
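The π/3 sector sweep that selects the main direction can be sketched as follows (pure NumPy; sampling of the 4s circular neighbourhood and the Gaussian weighting are assumed to have been done already, so the function only receives the per-sample responses and their polar angles; the 72-step sweep resolution is an assumption of this sketch):

```python
import numpy as np

def main_direction(responses, angles, window=np.pi / 3):
    """Pick the dominant orientation from Haar wavelet responses.

    `responses` is an (n, 2) array of (dx, dy) wavelet responses for
    the samples in the circular neighbourhood of a feature point, and
    `angles` holds the polar angle of each sample. A pi/3 sector is
    swept around the circle; the sector whose summed response vector
    is longest gives the main direction.
    """
    best_len, best_dir = -1.0, 0.0
    for start in np.linspace(0.0, 2 * np.pi, 72, endpoint=False):
        # samples whose angle falls inside [start, start + window)
        in_sector = ((angles - start) % (2 * np.pi)) < window
        sx, sy = responses[in_sector].sum(axis=0)
        length = np.hypot(sx, sy)
        if length > best_len:
            best_len, best_dir = length, float(np.arctan2(sy, sx))
    return best_dir
```

If all sampled responses point along one axis, the returned direction is that axis, as expected.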
After the feature points of the two images are extracted, feature matching can be performed according to a similarity measure between them; the Euclidean distance between feature points is selected as the similarity measure.
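A minimal sketch of this Euclidean-distance matching (with a Lowe-style ratio test added to suppress ambiguous matches; the ratio test and the 0.8 threshold are additions of this sketch, not something the patent specifies):

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    """Match feature descriptors by Euclidean distance.

    For each descriptor in desc1 the nearest neighbour in desc2 is
    accepted only if it is clearly closer than the second-nearest
    candidate, which discards ambiguous correspondences.
    """
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # Euclidean distances
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

Two descriptor sets with obvious nearest neighbours match index-to-index, while a descriptor equally close to two candidates is dropped.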
In this embodiment, the original infrared video images are subjected to image enhancement preprocessing and image denoising preprocessing, as follows:
the original high-dynamic-range infrared video images are compressed to a low dynamic range;
the low-dynamic-range infrared video images are processed with the optimized CLAHE algorithm, and enhanced low-dynamic-range infrared images are output.
The enhanced infrared video images are then denoised by bilateral filtering; the bilateral filter has the advantage of edge preservation, which protects the high-frequency detail of the line objects well.
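The clip-and-redistribute idea at the heart of CLAHE can be sketched on a single 8-bit tile as follows (the optimized CLAHE of the cited CN111709898A, the per-tile interpolation of full CLAHE, and the subsequent bilateral filtering are not reproduced; the function name and clip limit are illustrative):

```python
import numpy as np

def clipped_equalize(tile, clip_limit=0.01):
    """Contrast-limited histogram equalization of one 8-bit image tile.

    Histogram bins are clipped at a limit and the excess mass is
    redistributed uniformly before the cumulative mapping is applied,
    which boosts local contrast without over-amplifying noise in
    near-uniform regions. Full CLAHE additionally splits the image
    into tiles and bilinearly interpolates between their mappings.
    """
    hist, _ = np.histogram(tile, bins=256, range=(0, 255))
    limit = max(1, int(clip_limit * tile.size))
    excess = np.maximum(hist - limit, 0).sum()
    hist = np.minimum(hist, limit) + excess // 256   # clip, then spread excess
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf[0]) / max(cdf[-1] - cdf[0], 1) * 255.0
    return cdf[np.clip(tile.astype(int), 0, 255)]
```

A narrow 0..15 intensity ramp is stretched across the full 0..255 range, while a flat tile stays flat.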
In this embodiment, most lines are parallel and continuous. Using the computed main direction, i.e. the direction of the line, together with each selected feature point, the overall information of the line can be obtained from the infrared image conveniently and quickly as follows: a foreground point is determined, and a parallelogram window parallel to the line direction is placed at that point, the size of the parallelogram being determined by the length of the lines in the image and the spacing between adjacent lines. The parallelogram window is then moved to both sides along the main direction, and the foreground-information filling degree of the pixels falling within the overlap of the image and the window is calculated. If the filling degree exceeds a preset threshold the information is retained, and the overall state of the line in the image can be derived from the information in the windows.
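The window-sliding check can be sketched as follows (the parallelogram is represented by an explicit list of pixel offsets whose shear encodes the line direction; all names, the 0.5 threshold and the step size are illustrative assumptions of this sketch):

```python
import numpy as np

def fill_degree(mask, window_offsets, origin):
    """Fraction of a window's in-image pixels that are foreground.

    `mask` is a binary foreground map, `window_offsets` the (dy, dx)
    pixel offsets making up the parallelogram window, and `origin`
    the current anchor of the window.
    """
    oy, ox = origin
    hits = total = 0
    for dy, dx in window_offsets:
        y, x = oy + dy, ox + dx
        if 0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]:
            hits += int(mask[y, x])
            total += 1
    return hits / total if total else 0.0

def trace_line(mask, window_offsets, start, step, threshold=0.5, max_steps=50):
    # Slide the window along the line direction, keeping each position
    # whose filling degree stays above the threshold.
    kept, origin = [], start
    for _ in range(max_steps):
        if fill_degree(mask, window_offsets, origin) < threshold:
            break
        kept.append(origin)
        origin = (origin[0] + step[0], origin[1] + step[1])
    return kept
```

On a mask containing a single horizontal line, sliding a three-pixel window along the row keeps every position on the line and stops once the window leaves the image.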
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A method for confirming information of a thread based on infrared data is characterized by comprising the following processes:
acquiring an original infrared video image of a specified segment domain of the linear object, and preprocessing the original infrared video image;
sequencing the preprocessed original infrared video images according to a time sequence to obtain infrared sequence images, and splicing any two continuous adjacent frame infrared sequence images through a weighted smoothing algorithm;
calculating an overlapping region and a non-overlapping region of any two continuous adjacent frames of infrared sequence images to obtain a fused image, and superposing the fused images to generate an integral image according to the calculated fused image;
extracting the features of the integral image based on a color image registration algorithm of an improved SURF operator;
and judging whether the characteristic parameters of the adjacent images are the same or not, wherein the areas corresponding to the same characteristic parameters are the areas where the linear objects are located.
2. The thread information confirmation method according to claim 1, characterized in that: the process of extracting the features of the integral image based on the color image registration algorithm of the improved SURF operator comprises the following steps:
detecting characteristic points based on a scale space constructed by a Gaussian pyramid, and extracting the characteristic points through a Hessian matrix;
generating a feature descriptor, determining a main direction for the extracted feature points in order to ensure the size and rotation invariance of SURF features;
after the feature points are extracted, feature matching is carried out according to the similarity measurement among the feature points, and the Euclidean distance among the feature points is selected as the similarity measurement.
3. The thread information confirmation method according to claim 1, characterized in that: calculating the overlapping area and the non-overlapping area of any two continuous adjacent frames of infrared sequence images to obtain a fused image, wherein the method comprises the following steps:
for the overlapping area, according to the weights of the overlapping area in two adjacent frames of images, carrying out summation operation after weighting on the pixels of the overlapping area, and taking the pixels as new pixel values of the overlapping area; for the non-overlapping region, the pixel value is unchanged;
and mapping the pixels of the overlapped area and the non-overlapped area into a new image, and assigning values to the pixels of each point to complete the fusion of the images.
4. The thread information confirmation method according to claim 2, characterized in that: extracting characteristic points through a Hessian matrix, comprising the following steps:
and determining the position of the characteristic point in the integral image by using the determinant maximum value of the Hessian matrix.
5. The thread information confirmation method according to claim 2, characterized in that: to ensure the dimensional and rotational invariance of SURF features in generating feature descriptors, the method includes:
and extracting the scale and rotation invariant characteristics of the preprocessed adjacent continuous images by using the accelerated robust feature descriptors.
6. The thread information confirmation method according to claim 2, characterized in that: determining a principal direction for the extracted feature points, comprising:
and performing Haar wavelet response operation in the characteristic scale circular area with the set radius, wherein the direction of the maximum value of the Haar wavelet response sum is taken as the main direction of the characteristic point.
7. The thread information confirmation method according to claim 6, characterized in that: the process of performing Haar wavelet response operation comprises the following steps:
and calculating Haar wavelet responses in the x direction and the y direction by taking the characteristic point as the circle center and 4s as the radius, wherein the size of the Haar wavelet template is 2s multiplied by 2s, and s is the value of the scale of the characteristic point.
8. The thread information confirmation method according to claim 1, characterized in that: and carrying out image enhancement preprocessing and image denoising preprocessing on the original infrared video image.
9. The thread information confirmation method according to claim 8, characterized in that: the process of carrying out image enhancement preprocessing and image denoising preprocessing on an original infrared video image comprises the following steps:
compressing the original infrared video image in the high dynamic range to a low dynamic range;
and processing the original infrared video image in the low dynamic range by adopting the optimized CLAHE algorithm and outputting the enhanced infrared image in the low dynamic range.
10. The thread information confirmation method according to claim 1, characterized in that: further comprising:
determining a foreground point, adding a parallelogram window parallel to the trend of the linear objects at the foreground point, and determining the size of the parallelogram according to the length of the linear objects in the image and the distance between adjacent linear objects;
moving the parallelogram windows to two sides along the main direction respectively, and calculating the foreground information filling degree of pixels falling into the overlapping area of the image and the window; the information is retained if the filling level exceeds a predetermined threshold.
CN202011280729.0A 2020-11-16 2020-11-16 Infrared data-based line object information confirmation method Pending CN112381797A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011280729.0A CN112381797A (en) 2020-11-16 2020-11-16 Infrared data-based line object information confirmation method

Publications (1)

Publication Number Publication Date
CN112381797A 2021-02-19

Family

ID=74584790

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011280729.0A Pending CN112381797A (en) 2020-11-16 2020-11-16 Infrared data-based line object information confirmation method

Country Status (1)

Country Link
CN (1) CN112381797A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103985254A (en) * 2014-05-29 2014-08-13 四川川大智胜软件股份有限公司 Multi-view video fusion and traffic parameter collecting method for large-scale scene traffic monitoring
CN108184096A (en) * 2018-01-08 2018-06-19 北京艾恩斯网络科技有限公司 Run skating area full view monitoring device, system and method in a kind of airport
CN111179211A (en) * 2018-10-23 2020-05-19 中国石油化工股份有限公司 Pipeline heating diagnosis method of unmanned aerial vehicle infrared video for crude oil pipeline inspection
CN111709898A (en) * 2020-06-20 2020-09-25 昆明物理研究所 Infrared image enhancement method and system based on optimized CLAHE

Similar Documents

Publication Publication Date Title
CN111784576B (en) Image stitching method based on improved ORB feature algorithm
CN109615611B (en) Inspection image-based insulator self-explosion defect detection method
CN108230237B (en) Multispectral image reconstruction method for electrical equipment online detection
CN111428748B (en) HOG feature and SVM-based infrared image insulator identification detection method
CN110349207B (en) Visual positioning method in complex environment
CN107253485A (en) Foreign matter invades detection method and foreign matter intrusion detection means
CN106257535A Infrared and visible light image registration method for electrical equipment based on the SURF operator
CN111814686A (en) Vision-based power transmission line identification and foreign matter invasion online detection method
KR20130030220A (en) Fast obstacle detection
CN104933434A Image matching method combining local binary pattern (LBP) and SURF feature extraction
CN110660065B (en) Infrared fault detection and identification algorithm
CN108805050B (en) Electric wire detection method based on local binary pattern
CN106530281A (en) Edge feature-based unmanned aerial vehicle image blur judgment method and system
CN110189375A Image recognition method based on monocular vision measurement
CN110414308B (en) Target identification method for dynamic foreign matters on power transmission line
Kalaivani et al. Analysis of image fusion techniques based on quality assessment metrics
CN112288682A (en) Electric power equipment defect positioning method based on image registration
CN117197700B (en) Intelligent unmanned inspection contact net defect identification system
CN115018785A (en) Hoisting steel wire rope tension detection method based on visual vibration frequency identification
CN114581658A (en) Target detection method and device based on computer vision
CN117392565A (en) Automatic identification method for unmanned aerial vehicle power inspection defects
CN112381797A (en) Infrared data-based line object information confirmation method
CN116524269A (en) Visual recognition detection system
CN116030430A (en) Rail identification method, device, equipment and storage medium
CN114926446A (en) Belt tearing edge detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210219