CN115471845A - Converter station digital instrument identification method based on deep learning and OpenCV - Google Patents


Info

Publication number
CN115471845A
Authority
CN
China
Prior art keywords
dial
instrument
mask
image
opencv
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211113841.4A
Other languages
Chinese (zh)
Inventor
谭林林
程鑫
朱俊强
费章君
方鑫
王嘉琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Electric Power Research Institute of State Grid Jiangsu Electric Power Co Ltd
Original Assignee
Nanjing Zhengtu Information Technology Co ltd
Southeast University
Electric Power Research Institute of State Grid Jiangsu Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Zhengtu Information Technology Co ltd, Southeast University, Electric Power Research Institute of State Grid Jiangsu Electric Power Co Ltd filed Critical Nanjing Zhengtu Information Technology Co ltd
Priority to CN202211113841.4A priority Critical patent/CN115471845A/en
Publication of CN115471845A publication Critical patent/CN115471845A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • G06V30/15Cutting or merging image elements, e.g. region growing, watershed or clustering-based techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/146Aligning or centring of the image pick-up or image-field
    • G06V30/1475Inclination or skew detection or correction of characters or of image to be recognised
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/18Extraction of features or characteristics of the image
    • G06V30/1801Detecting partial patterns, e.g. edges or contours, or configurations, e.g. loops, corners, strokes or intersections
    • G06V30/18067Detecting partial patterns, e.g. edges or contours, or configurations, e.g. loops, corners, strokes or intersections by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/19Recognition using electronic means
    • G06V30/191Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
    • G06V30/19173Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/24Character recognition characterised by the processing or recognition method
    • G06V30/242Division of the character sequences into groups prior to recognition; Selection of dictionaries

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a converter station digital instrument identification method based on deep learning and OpenCV, belonging to the technical field of image processing. The method comprises: using an SVM classification network based on HOG feature extraction to find the pictures containing digital instruments among the inspection photos taken by the converter station robot; extracting the dial area with the YOLACT image segmentation algorithm; applying image preprocessing operations such as tilt correction and perspective transformation to the instrument panel; reading the dial with a deep learning recognition algorithm; and generating a device identification result description file. Applied to actually photographed instrument pictures, the method offers high accuracy, high speed and strong anti-interference capability, and has high application value in industrial instrument recognition.

Description

Converter station digital instrument identification method based on deep learning and OpenCV
Technical Field
The invention relates to the technical field of image processing, in particular to a converter station digital instrument identification method based on deep learning and OpenCV.
Background
A converter station converts alternating current to direct current, or direct current to alternating current, in a high-voltage direct-current transmission system, and is key to ensuring the stable and reliable operation of the power system. A large number of digital instruments are installed in a converter station to monitor the working state of the power transformation equipment. The traditional inspection mode is to read the instruments manually at regular intervals, but the instruments in a converter station are numerous and manual meter reading is inefficient. With the development and construction of intelligent converter stations and the popularization of inspection robots, the recording of instrument data is moving toward automated operation: the inspection robot collects images of various instruments (such as voltmeters, ammeters, digital display meters and disconnecting-link indicators) around the station area, and the robot vision module analyzes the images in real time. It is therefore very important to research a fast, accurate, stable and reliable digital instrument identification method suitable for the inspection robot.
Disclosure of Invention
The purpose of the invention is as follows: to provide a converter station digital instrument recognition method based on deep learning and OpenCV that achieves intelligent, rapid and accurate detection of converter station digital instruments, improves the intelligence level of power grid converter station operation, reduces the investment of field operation and maintenance personnel, lowers labor costs, and overcomes the defects of low manual efficiency and frequent errors.
The above purpose is realized by the following technical scheme:
the converter station digital instrument identification method based on deep learning and OpenCV comprises the following steps:
s1, identifying an instrument picture containing a digital number pipe in an instrument image collected by the inspection robot by using an SVM (support vector machine) two-classification network based on HOG (histogram oriented gradient) feature extraction;
firstly, inputting an instrument model picture, voting and counting the local gradient amplitude and direction of the image to form a histogram based on gradient characteristics, and then splicing local features to form a total HOG feature vector.
And secondly, sending the obtained HOG feature vector into an SVM classifier for training to obtain a corresponding instrument panel SVM classification model.
And thirdly, sending the instrument image shot by the inspection robot into an S2.2 trained SVM model, and performing two-classification prediction to find out the image containing the digital instrument.
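The HOG feature step above can be sketched as follows; a minimal numpy-only illustration of the descriptor (per-cell orientation histograms weighted by gradient magnitude, concatenated into one vector). The cell size, bin count and the omission of block normalization are assumptions, since the patent does not fix these details; in practice the resulting vectors would be fed to an SVM classifier (for example scikit-learn's LinearSVC).

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Simplified HOG: per-cell histograms of gradient orientation,
    weighted by gradient magnitude, concatenated into one vector.
    (Assumed parameters; no block normalization, for brevity.)"""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]        # central differences
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            # vote each pixel's orientation into a bin, weighted by magnitude
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)  # the overall HOG feature vector
```

A 16 × 16 image with a single vertical edge, for example, yields a 4-cell, 36-dimensional vector whose energy concentrates in the 0° bin.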
S2, target segmentation and region extraction: a dial is extracted with the one-stage instance segmentation algorithm YOLACT from the pictures containing digital nixie tubes obtained in step S1, yielding individual nixie tube display areas. The specific steps are as follows:
firstly, feature extraction is performed on the dial region using ResNet-101 combined with an FPN network as the YOLACT backbone, generating the five feature maps P3-P7, where the input image size is 550 × 550;
secondly, the P3 feature map obtained in the first step is sent into the ProtoNet branch, which generates prototype masks with a fully convolutional network, predicting 32 prototype masks for each dial image;
thirdly, the P3-P7 feature maps obtained in the first step are sent into the Prediction Head branch to generate three types of outputs: 1) predicted class confidences, with dimension W × H × 3 × 2; 2) predicted mask coefficients, with dimension W × H × 3 × 32; 3) predicted coordinate offsets of the bounding boxes, with dimension W × H × 3 × 4. Duplicate detection boxes are removed by fast non-maximum suppression (Fast NMS);
fourthly, the masks obtained by the ProtoNet branch in the second step are linearly combined with the mask coefficients obtained by the Prediction Head branch in the third step;
fifthly, the loss functions for training the model are set: 1) classification loss; 2) box regression loss: the bounding-box parameters are trained with Smooth-L1 loss; 3) mask loss: the pixel-wise binary cross entropy between the predicted mask and the ground-truth mask.
Sixthly, the result of the linear combination is passed through a Sigmoid function to generate the final mask, which is cropped with the predicted bounding box;
the model is trained with the YOLACT instance segmentation algorithm of the above six steps and applied to the instrument pictures identified in step S1 for dial segmentation;
seventhly, since a bounding box detected by the YOLACT instance segmentation of the sixth step may contain several dial masks, the following steps can be taken to remove the influence of interference masks and to extract the mask of each dial one by one, so that information can be identified for each dial:
1) First, obtain a detected box and the mask inside it;
2) Convert the box region to grayscale and binarize it;
3) Apply an erosion operation to the binarized region to remove the influence of interference masks and noise; then apply a dilation operation to restore information that excessive erosion of the target mask would otherwise lose;
4) Perform edge detection on the eroded-and-dilated target region to obtain the edge coordinates of the mask, i.e. the top-left, top-right, bottom-left and bottom-right coordinates (x1, y1), (x2, y2), (x3, y3), (x4, y4) of the digital dial area;
5) Use the edge coordinates to extract the dial and place it on a noise-free background of the same size as the box;
6) Repeat steps 1)-5) until every detected box has been traversed.
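The per-dial extraction steps 1)-6) can be sketched as follows; a numpy-only illustration of the erosion-then-dilation cleanup and the corner extraction. The 3 × 3 structuring element and the axis-aligned corner approximation are assumptions; in practice OpenCV's cv2.erode, cv2.dilate and contour functions would be used.

```python
import numpy as np

def erode(b, k=3):
    """Binary erosion with a k×k square structuring element (assumed size)."""
    p = k // 2
    padded = np.pad(b, p, constant_values=False)
    out = np.ones_like(b)
    for di in range(k):
        for dj in range(k):
            # a pixel survives only if its whole k×k neighborhood is set
            out &= padded[di:di + b.shape[0], dj:dj + b.shape[1]]
    return out

def dilate(b, k=3):
    """Binary dilation with a k×k square structuring element."""
    p = k // 2
    padded = np.pad(b, p, constant_values=False)
    out = np.zeros_like(b)
    for di in range(k):
        for dj in range(k):
            out |= padded[di:di + b.shape[0], dj:dj + b.shape[1]]
    return out

def dial_corners(mask):
    """Approximate the four extreme points of a roughly rectangular dial
    mask: top-left, top-right, bottom-left, bottom-right."""
    ys, xs = np.nonzero(mask)
    return ((xs.min(), ys.min()), (xs.max(), ys.min()),
            (xs.min(), ys.max()), (xs.max(), ys.max()))
```

Opening (erosion followed by dilation) removes isolated noise pixels while restoring the dial mask to its original extent, after which the corner coordinates can be read off for the perspective transformation of step S3.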
And S3, performing image processing operations such as inclination correction and perspective transformation on the instrument image.
S3.1, obtaining four position coordinates of the upper left, the upper right, the lower left and the lower right of each dial through the seventh step of S2;
S3.2, calculating the width and height of the dial picture after perspective transformation: the width is the maximum of the distance between the top-left and top-right coordinates and the distance between the bottom-left and bottom-right coordinates, and the height is the maximum of the distance between the top-left and bottom-left coordinates and the distance between the top-right and bottom-right coordinates. The specific calculation formulas are:
WidthA = √((x2 - x1)² + (y2 - y1)²), WidthB = √((x4 - x3)² + (y4 - y3)²)
In the formula, WidthA denotes the distance between the top-left and top-right points and WidthB the distance between the bottom-left and bottom-right points; the width of the dial picture is the maximum of the two: Width = max(WidthA, WidthB)
HeightA = √((x3 - x1)² + (y3 - y1)²), HeightB = √((x4 - x2)² + (y4 - y2)²)
In the formula, HeightA denotes the distance between the top-left and bottom-left points and HeightB the distance between the top-right and bottom-right points; the height of the dial picture is the maximum of the two: Height = max(HeightA, HeightB).
The 4 coordinate points of the transformed dial picture are then constructed, with the top-left, top-right, bottom-left and bottom-right coordinates being (0, 0), (Width-1, 0), (0, Height-1), (Width-1, Height-1);
S3.3, generating a perspective transformation matrix with the getPerspectiveTransform function in OpenCV, passing in two parameters: the list of dial-area coordinates in the original image and the list of coordinates after the perspective transformation;
and S3.4, performing the perspective transformation with the warpPerspective function in OpenCV to generate the transformed dial picture; the parameters passed in are the original picture, the transformation matrix and the output image width and height, and the return value is the perspective-transformed image.
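Steps S3.1-S3.4 can be sketched as follows. The patent uses OpenCV's getPerspectiveTransform and warpPerspective; as a self-contained illustration, this sketch solves the same 3 × 3 perspective (homography) matrix directly from the four point pairs and computes the target width and height as in S3.2. Function names are illustrative, not from the patent.

```python
import numpy as np

def target_size(tl, tr, bl, br):
    """Width/height of the rectified dial, as defined in S3.2."""
    d = lambda p, q: float(np.hypot(p[0] - q[0], p[1] - q[1]))
    width = max(d(tl, tr), d(bl, br))
    height = max(d(tl, bl), d(tr, br))
    return int(round(width)), int(round(height))

def perspective_matrix(src, dst):
    """Solve for the 3×3 matrix mapping src[i] -> dst[i] for four point
    pairs, analogous to cv2.getPerspectiveTransform (h33 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    """Apply the perspective matrix to one point (homogeneous divide)."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```

cv2.warpPerspective applies the same mapping to every pixel of the source image; here warp_point shows the underlying point transform only.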
And S4, identifying the dial readings in the tilt-corrected nixie tube display areas of step S3 using PaddleOCR optical character detection. Firstly, a dataset of the digital instrument panels to be detected is made with the RoLabelImg labeling tool; secondly, a custom dictionary file containing the digits 0-9 and the decimal point is used and placed in the PaddleOCR/ppocr/utils/fact/ path, so that every digit character can be mapped to a dictionary index during model training; thirdly, the training parameters in the PaddleOCR configuration file are modified and tuned; fourthly, the digital instrument panel dataset is trained with the PaddleOCR network structure; and fifthly, the tilt-corrected digital instrument panel images obtained in step S3 are recognized with the trained PaddleOCR model.
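The role of the custom 0-9-plus-decimal-point dictionary can be illustrated by the greedy CTC decoding that a PaddleOCR-style recognition head performs: repeated indices are collapsed, blanks are dropped, and the surviving indices are mapped through the dictionary. The dictionary ordering and the blank index below are assumptions for illustration, not PaddleOCR's actual internals.

```python
# Assumed dictionary: one entry per line of the custom dict file.
DICT = list("0123456789") + ["."]

def decode(indices, blank=len(DICT)):
    """Greedy CTC decoding: collapse repeated indices, drop blanks,
    then map the remaining indices through the dictionary."""
    out, prev = [], None
    for i in indices:
        if i != blank and i != prev:
            out.append(DICT[i])
        prev = i
    return "".join(out)
```

For example, decode([1, 1, 11, 2, 10, 5, 5]) with blank index 11 yields the reading "12.5", which can then be compared against the instrument's threshold in step S5.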
And S5, generating a converter station digital instrument reading result description file according to the identification result, wherein the description file comprises the digital information, the threshold value information and the warning character description of the instrument.
The invention has the beneficial effects that:
according to the invention, through an image recognition technology based on deep learning and OpenCV morphology, more intelligent, rapid and accurate converter station digital instrument detection is realized, the intelligent level of the operation of the power grid converter station is improved, the investment of field operation and maintenance personnel is reduced, the labor cost is reduced, and the defects of low labor efficiency and more errors are eliminated.
Drawings
The following further illustrates embodiments of the present invention with reference to the drawings.
Fig. 1 is a schematic diagram of a converter station instrument identification method based on deep learning and OpenCV in the present invention.
Detailed Description
The present invention is described in further detail below with reference to the attached drawing figures.
In this disclosure, aspects of the present invention are described with reference to the accompanying drawings, in which a number of illustrative embodiments are shown. It should be appreciated that the various concepts and embodiments described above, as well as those described in greater detail below, may be implemented in any of numerous ways, as the disclosed concepts and embodiments are not limited to any one implementation. Additionally, some aspects of the present disclosure may be used alone, or in any suitable combination with other aspects of the present disclosure.
The converter station instrument identification method based on deep learning and OpenCV provided by the present application is described in detail below with reference to the accompanying drawings.
Example 1:
s1, identifying a picture containing a digital instrument in an instrument image acquired by the inspection robot by using an SVM (support vector machine) two-classification network based on HOG (histogram oriented feature) feature extraction. Firstly, inputting an instrument model picture, voting and counting the local gradient amplitude and direction of the image to form a histogram based on gradient characteristics, and then splicing local characteristics to be used as a total HOG characteristic vector. And then, sending the obtained HOG feature vector into an SVM classifier for training to obtain a corresponding instrument panel SVM classification model. And finally, sending the instrument image shot by the inspection robot into a trained SVM model, and performing two-classification prediction to find out the picture containing the digital instrument.
S2, performing target segmentation and dial-area extraction on the classified instrument pictures with the one-stage instance segmentation algorithm YOLACT to obtain individual nixie tube display areas. The backbone network is ResNet-101 combined with an FPN for feature extraction of the dial region. The obtained features are sent into two parallel branches, the Prediction Head and ProtoNet, to generate a set of prototype masks and to predict the mask coefficients of each instance. An instance mask is then generated by linearly combining the prototypes with the mask coefficients, and the final mask is cropped with the predicted bounding box. Meanwhile, since a box may contain several dial masks, the following steps can be taken so that the information of each dial is conveniently identified:
1) First, obtain a detected box and the mask inside it.
2) Convert the box region to grayscale and binarize it.
3) Apply an erosion operation to the binarized region to remove the influence of interference masks and noise; then apply a dilation operation to restore information that excessive erosion of the target mask would otherwise lose.
4) Perform edge detection on the eroded-and-dilated target region to obtain the edge coordinates of the mask.
5) Use the edge coordinates to extract the dial and place it on a noise-free background of the same size as the box.
6) Repeat steps 1)-5) until every detected box has been traversed.
S3, obtaining the four corner coordinates of each dial from the YOLACT instance segmentation, setting the coordinates of the four points of the dial after perspective transformation, generating a perspective transformation matrix with the getPerspectiveTransform function in OpenCV, and then performing the perspective transformation with the warpPerspective function. The top-left corner is mapped to the origin; the width is the maximum of the distance between the original top-left and top-right coordinates and the distance between the original bottom-left and bottom-right coordinates, and the height is the maximum of the distance between the original top-left and bottom-left coordinates and the distance between the original top-right and bottom-right coordinates.
And S4, identifying the digital instrument panel indicating number after the inclination correction by using a PaddleOCR optical character detection technology.
And S5, generating a converter station digital instrument reading result description file according to the identification result, the description file including the digital information, threshold information and warning information of the instrument. The XML description file is structured as follows:
(XML description file structure shown in the original patent figure.)
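Since the XML structure itself appears only in the patent figure, the following is a hypothetical sketch of such a description file, assembled with Python's standard xml.etree; the element names (meter, reading, threshold, warning) are assumptions, not the patent's actual schema.

```python
import xml.etree.ElementTree as ET

def build_result_xml(meter_id, reading, threshold):
    """Assemble a hypothetical reading-result description file containing
    the recognized number, the threshold, and a warning text."""
    root = ET.Element("meter", id=meter_id)
    ET.SubElement(root, "reading").text = str(reading)
    ET.SubElement(root, "threshold").text = str(threshold)
    warn = "over threshold" if reading > threshold else "normal"
    ET.SubElement(root, "warning").text = warn
    return ET.tostring(root, encoding="unicode")
```

A file like this would let downstream monitoring compare each recognized reading against its threshold without re-running recognition.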
The digital instrument recognition method can accurately and quickly recognize the readings of many kinds of instrument devices in multiple scenes, further raising the intelligence and automation levels of the power system and substation, reducing labor costs, and is of great significance for the intelligent upgrading of the national power grid.
The foregoing shows and describes the general principles, principal features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are given by way of illustration of the principles of the present invention, but that various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications are within the scope of the invention as claimed.

Claims (5)

1. The converter station digital instrument identification method based on deep learning and OpenCV is characterized by comprising the following steps of:
s1, identifying the instrument pictures containing digital nixie tubes in the inspection photos of the robot, using an SVM classification network based on HOG feature extraction;
s2, target segmentation and region extraction: extracting a dial from the pictures containing digital nixie tubes obtained in step S1 with the one-stage instance segmentation algorithm YOLACT to obtain individual nixie tube display areas;
s3, performing image processing operation including tilt correction and perspective transformation on the nixie tube display area image obtained in the step S2;
s4, identifying dial indications in the inclined corrected nixie tube display area in the step S3 by using a PaddleOCR optical character detection technology;
and S5, generating a meter reading result description file according to the dial reading identification result in the step S4.
2. The converter station digital instrument recognition method based on deep learning and OpenCV, as recited in claim 1, wherein the specific method of the SVM classification network based on HOG feature extraction in S1 is:
s1.1, inputting an instrument sample image, accumulating the local gradient magnitudes and orientations of the image by voting into histograms of gradient features, and then concatenating the local features into an overall HOG feature vector;
s1.2, sending the HOG feature vectors obtained in S1.1 into an SVM classifier for training to obtain the corresponding instrument-panel SVM classification model;
and S1.3, sending the instrument images shot by the inspection robot into the SVM classification model trained in S1.2, and performing binary classification prediction to find the pictures containing digital instruments.
3. The converter station digital instrument recognition method based on deep learning and OpenCV according to claim 1, wherein the step S2 of extracting a dial from an instrument picture containing digital nixie tubes with the one-stage instance segmentation algorithm YOLACT includes:
s2.1, performing feature extraction on the dial region using ResNet-101 combined with an FPN network as the YOLACT backbone to generate the five feature maps P3-P7, the input image size being 550 × 550;
s2.2, sending the P3 feature map obtained in S2.1 into the ProtoNet branch, which generates prototype masks with a fully convolutional network, predicting 32 prototype masks for each dial image;
s2.3, sending the P3-P7 feature maps obtained in S2.1 into the Prediction Head branch to generate three types of outputs: 1) predicted class confidences, with dimension W × H × 3 × 2; 2) predicted mask coefficients, with dimension W × H × 3 × 32; 3) predicted coordinate offsets of the bounding boxes, with dimension W × H × 3 × 4, duplicate detection boxes being removed by fast non-maximum suppression (Fast NMS);
s2.4, linearly combining the masks obtained by the ProtoNet branch in S2.2 and the mask coefficients obtained by the Prediction Head branch in S2.3;
s2.5, setting the loss functions for training the model: 1) classification loss; 2) box regression loss: training the bounding-box parameters with Smooth-L1 loss; 3) mask loss: the pixel-wise binary cross entropy between the predicted mask and the ground-truth mask;
s2.6, generating a final mask through a Sigmoid function according to the result of the linear combination of the S2.5, and cutting the final mask by using a predicted box bounding box;
s2.7, performing model training based on the YOLACT instance segmentation algorithm of steps S2.1-S2.6 and applying the trained model to the instrument pictures identified in S1 for dial segmentation;
s2.8, since a bounding box detected by the YOLACT instance segmentation in S2.6 may contain several dial masks, taking the following steps to remove the influence of interference masks and to extract the mask of each dial one by one, so that information can be identified for each dial:
1) First, obtain a detected box and the mask inside it;
2) Convert the box region to grayscale and binarize it;
3) Apply an erosion operation to the binarized region to remove the influence of interference masks and noise; then apply a dilation operation to restore information that excessive erosion of the target mask would otherwise lose;
4) Perform edge detection on the eroded-and-dilated target region to obtain the edge coordinates of the mask, i.e. the top-left, top-right, bottom-left and bottom-right coordinates (x1, y1), (x2, y2), (x3, y3), (x4, y4) of the digital dial area;
5) Use the edge coordinates to extract the dial and place it on a noise-free background of the same size as the box;
6) Repeat steps 1)-5) until every detected box has been traversed.
4. The converter station instrument recognition method based on deep learning and OpenCV, as recited in claim 1, wherein the step S3 of performing image processing includes:
s3.1, obtaining four position coordinates of the upper left, the upper right, the lower left and the lower right of each dial through the step S2.8;
s3.2, calculating the width and height of the dial picture after perspective transformation: the width is the maximum of the distance between the top-left and top-right coordinates and the distance between the bottom-left and bottom-right coordinates, and the height is the maximum of the distance between the top-left and bottom-left coordinates and the distance between the top-right and bottom-right coordinates, the specific calculation formulas being:
WidthA = √((x2 - x1)² + (y2 - y1)²), WidthB = √((x4 - x3)² + (y4 - y3)²)
in the formula, WidthA denotes the distance between the top-left and top-right points and WidthB the distance between the bottom-left and bottom-right points; the width of the dial picture is the maximum of the two: Width = max(WidthA, WidthB)
HeightA = √((x3 - x1)² + (y3 - y1)²), HeightB = √((x4 - x2)² + (y4 - y2)²)
in the formula, HeightA denotes the distance between the top-left and bottom-left points and HeightB the distance between the top-right and bottom-right points; the height of the dial picture is the maximum of the two: Height = max(HeightA, HeightB);
the 4 coordinate points of the transformed dial picture are then constructed, with the top-left, top-right, bottom-left and bottom-right coordinates being (0, 0), (Width-1, 0), (0, Height-1), (Width-1, Height-1);
s3.3, generating a perspective transformation matrix with the getPerspectiveTransform function in OpenCV, passing in two parameters: the list of dial-area coordinates in the original image and the list of coordinates after the perspective transformation;
and S3.4, performing the perspective transformation with the warpPerspective function in OpenCV to generate the transformed dial picture; the parameters passed in are the original picture, the transformation matrix and the output image width and height, and the return value is the perspective-transformed image.
5. The deep learning and OpenCV-based converter station digital instrument recognition method as claimed in claim 1, wherein the step S4 of recognizing the dial indication in the tilt-corrected nixie tube display area in step S3 by using PaddleOCR optical character detection technology is as follows:
S4.1, making and labeling a data set of the digital instrument panels to be recognized;
S4.2, creating a custom dictionary file containing the digits 0-9 and the decimal point, and placing it under the PaddleOCR/ppocr/utils/ dictionary directory;
S4.3, modifying and tuning the training parameters in the PaddleOCR configuration file;
S4.4, training on the digital instrument panel data set with the PaddleOCR network structure;
S4.5, recognizing the tilt-corrected digital instrument panel image obtained in step S3 with the trained PaddleOCR model.
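As a minimal sketch of step S4.2, the custom recognition dictionary can be generated as below. The file name digit_dict.txt and the config field shown in the comment are assumptions; the one-character-per-line layout follows the format PaddleOCR recognition dictionaries use:

```python
from pathlib import Path

def write_digit_dict(path):
    """Write a PaddleOCR-style recognition dictionary: one character
    per line, here the digits 0-9 plus the decimal point (S4.2)."""
    chars = [str(d) for d in range(10)] + ["."]
    Path(path).write_text("\n".join(chars) + "\n", encoding="utf-8")
    return chars

chars = write_digit_dict("digit_dict.txt")

# In the recognition config (S4.3) the dictionary would then be
# referenced via the character_dict_path field, e.g.:
#   Global:
#     character_dict_path: ppocr/utils/digit_dict.txt
```

Restricting the dictionary to these 11 characters keeps the recognition head from ever emitting letters or punctuation that cannot appear on a numeric instrument panel.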
CN202211113841.4A 2022-09-14 2022-09-14 Converter station digital instrument identification method based on deep learning and OpenCV Pending CN115471845A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211113841.4A CN115471845A (en) 2022-09-14 2022-09-14 Converter station digital instrument identification method based on deep learning and OpenCV

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211113841.4A CN115471845A (en) 2022-09-14 2022-09-14 Converter station digital instrument identification method based on deep learning and OpenCV

Publications (1)

Publication Number Publication Date
CN115471845A true CN115471845A (en) 2022-12-13

Family

ID=84332819

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211113841.4A Pending CN115471845A (en) 2022-09-14 2022-09-14 Converter station digital instrument identification method based on deep learning and OpenCV

Country Status (1)

Country Link
CN (1) CN115471845A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116645682A (en) * 2023-07-24 2023-08-25 济南瑞泉电子有限公司 Water meter dial number identification method and system
CN116645682B (en) * 2023-07-24 2023-10-20 济南瑞泉电子有限公司 Water meter dial number identification method and system

Similar Documents

Publication Publication Date Title
CN111401361B (en) End-to-end lightweight depth license plate recognition method
CN110276285B (en) Intelligent ship water gauge identification method in uncontrolled scene video
CN111539330B (en) Transformer substation digital display instrument identification method based on double-SVM multi-classifier
CN110619623B (en) Automatic identification method for heating of joint of power transformation equipment
CN113643228B (en) Nuclear power station equipment surface defect detection method based on improved CenterNet network
CN110674808A (en) Transformer substation pressure plate state intelligent identification method and device
CN114241364A (en) Method for quickly calibrating foreign object target of overhead transmission line
CN114241469A (en) Information identification method and device for electricity meter rotation process
CN115471845A (en) Converter station digital instrument identification method based on deep learning and OpenCV
CN113888462A (en) Crack identification method, system, readable medium and storage medium
CN111461121A (en) Electric meter number identification method based on YO L OV3 network
CN114694130A (en) Method and device for detecting telegraph poles and pole numbers along railway based on deep learning
CN117372956A (en) Method and device for detecting state of substation screen cabinet equipment
CN110807416A (en) Digital instrument intelligent recognition device and method suitable for mobile detection device
CN114061476B (en) Method for detecting deflection of insulator of power transmission line
CN115937492A (en) Transformer equipment infrared image identification method based on feature identification
CN114140793A (en) Matching method and device for terminal block and terminal block wiring
CN115359505A (en) Electric power drawing detection and extraction method and system
CN114359948A (en) Power grid wiring diagram primitive identification method based on overlapping sliding window mechanism and YOLOV4
CN117173385B (en) Detection method, device, medium and equipment of transformer substation
CN112330643B (en) Secondary equipment state identification method based on sparse representation image restoration
CN113159047B (en) Substation equipment infrared image temperature value identification method based on CGAN image amplification
CN113052865B (en) Power transmission line small sample temperature image amplification method based on image similarity
CN117173723A (en) Paper form identification method, system, equipment and storable medium
CN118071785A (en) Automatic extraction method and device for standard units of chip layout level

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240507

Address after: 210096, No. four archway, Xuanwu District, Jiangsu, Nanjing 2

Applicant after: SOUTHEAST University

Country or region after: China

Applicant after: STATE GRID JIANGSU ELECTRIC POWER COMPANY Research Institute

Address before: 210096, No. four archway, Xuanwu District, Jiangsu, Nanjing 2

Applicant before: SOUTHEAST University

Country or region before: China

Applicant before: NANJING ZHENGTU INFORMATION TECHNOLOGY Co.,Ltd.

Applicant before: STATE GRID JIANGSU ELECTRIC POWER COMPANY Research Institute