CN118015555A - Knife switch state identification method based on visual detection and mask pattern direction vector - Google Patents

Knife switch state identification method based on visual detection and mask pattern direction vector


Publication number
CN118015555A
CN202410427906.5A (application number) · CN118015555A (publication number)
Authority
CN
China
Prior art keywords
knife switch
disconnecting link
mask pattern
pattern direction
knife
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410427906.5A
Other languages
Chinese (zh)
Inventor
张文睿
李佑文
丁桃胜
蔡一磊
俞铭
杨航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Sac Rail Traffic Engineering Co ltd
Original Assignee
Nanjing Sac Rail Traffic Engineering Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Sac Rail Traffic Engineering Co ltd filed Critical Nanjing Sac Rail Traffic Engineering Co ltd
Priority to CN202410427906.5A priority Critical patent/CN118015555A/en
Publication of CN118015555A publication Critical patent/CN118015555A/en
Pending legal-status Critical Current


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a knife switch state identification method based on visual detection and mask pattern direction vectors, belonging to the technical field of knife switch state identification. The method comprises the following steps: acquiring knife switch video stream data under different scenes and illumination conditions, labeling the images, and building a knife switch data set through data enhancement; improving the YOLO-v8 network with an attention feature enhancement technique, detecting knife switch instance objects in a two-dimensional image, and generating a mask image covering each knife switch; constructing the minimum circumscribed rectangle of the mask image and calculating the mask direction vector to obtain the knife switch angle, from which the open/closed state of the knife switch is judged. The method improves the accuracy and effectiveness of judging the open/closed state of a knife switch from a single-view image in complex scenes, and meets the need for intelligent recognition of knife switch states in a contact network system.

Description

Knife switch state identification method based on visual detection and mask pattern direction vector
Technical Field
The invention belongs to the technical field of knife switch state identification, and relates to a knife switch state identification method based on visual detection and mask pattern direction vectors.
Background
The contact network (catenary) system is an important component in keeping rail transit running normally, and it requires a large amount of maintenance work, so the safety of overhaul and maintenance operations is critical. Many knife switches serve as on/off controllers for equipment in the system, and whether the on/off state each knife switch displays is consistent with the actual situation bears on the stability of the whole network. Traditional confirmation of a switch position change relies on workers entering the high-voltage area of a substation to check the switch position, which carries safety risks. To meet the needs of intelligent operation and maintenance of a contact network system, a method that quickly and accurately identifies the knife switch on/off state from a single-view image is necessary; it can reduce the workload and risk of manual inspection and ensure the safe and effective operation of the contact network.
The technical difficulties of knife switch state identification are: engineering sites contain many devices similar to the knife switch in color and texture, which are hard to distinguish; the relative position of knife switch and camera differs across scenes, so a fixed target region cannot be framed in the image; and outdoor equipment such as cables may partially occlude the knife switch in the image, causing identification to fail. Traditional knife switch state identification methods extract interest points from the image, describe them with local descriptors, and judge the knife switch state by matching against a data set, e.g., Zhang Jinfeng et al., knife switch state identification based on improved intensity order descriptors. However, such methods rely on hand-crafted features, are sensitive to illumination, background, and clutter, and are not suitable for engineering sites.
Thanks to the excellent performance of many deep learning frameworks on target detection and segmentation tasks, deep-learning-based knife switch state recognition methods have developed continuously, e.g., Kang Yiwu et al., automatic recognition of substation knife switch states based on improved YOLOv, Office Automation, 2023. But such a method may suffer a drop in recognition rate when the target is partially occluded. In addition, since the input to the neural network is the whole RGB image, the local feature information occupies only a small area of the image, and the rest appears as background. After multiple convolutions, this background information, as part of the network input, accumulates over iterations and produces a large amount of noise that interferes with the labeled target features, making network training inefficient.
Disclosure of Invention
The aim of the invention is to overcome the shortcomings of existing knife switch state identification methods. It combines YOLO-v8 visual detection with an attention feature enhancement technique and judges the knife switch on/off state through the mask pattern direction vector, providing a knife switch state identification method based on improved YOLO-v8 and mask pattern direction vectors with higher state recognition accuracy.
To achieve the above purpose, the technical scheme adopted by the invention is as follows: a knife switch state identification method based on visual detection and mask pattern direction vectors, aimed at judging the open/closed state of a knife switch from a single RGB image, comprising the following steps:
Step 1: collect a knife switch video stream from a video monitoring system, decompose it frame by frame into RGB images, and label them;
Step 2: expand the data set using data enhancement methods;
Step 3: divide the knife switch data set;
Step 4: fuse a channel attention mechanism to improve the YOLO-v8 network;
Step 5: train the improved network to detect knife switch instance objects and generate mask images covering the knife switches;
Step 6: construct the minimum circumscribed rectangle from the mask image;
Step 7: calculate the mask pattern direction vector to obtain the knife switch angle;
Step 8: judge the open/closed state of the knife switch.
Further, the knife switch video stream in step 1 comes from network monitoring cameras in a video monitoring system. Camera video stream data from different time periods, positions and angles are collected and decomposed frame by frame into RGB images, yielding knife switch image data sets covering different illumination, backgrounds and on/off states; the knife switches in the images are then labeled with irregular polygon frames along their outer contours.
Further, the data enhancement method in step 2 is implemented as follows: images in the original data set are processed by adding random noise, adjusting the ambient brightness and randomly rotating, and the corresponding labeling information is generated automatically, yielding an expanded data set about three times the size of the original.
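As a concrete illustration, the three augmentations named above can be sketched with NumPy. This is a sketch under assumed parameters (noise sigma, brightness factor, and a 90° rotation standing in for arbitrary-angle rotation, which in practice would use OpenCV or PIL with re-projection of the polygon labels):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_random_noise(img: np.ndarray, sigma: float = 10.0) -> np.ndarray:
    """Add zero-mean Gaussian noise to a uint8 RGB image."""
    noisy = img.astype(np.float32) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def adjust_brightness(img: np.ndarray, factor: float = 1.3) -> np.ndarray:
    """Scale pixel intensities to simulate a change in ambient brightness."""
    return np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8)

def rotate(img: np.ndarray, quarter_turns: int = 1) -> np.ndarray:
    """Rotate by a multiple of 90 degrees; arbitrary angles would need
    interpolation (e.g. cv2.warpAffine) plus rotation of the polygon labels."""
    return np.rot90(img, k=quarter_turns, axes=(0, 1))

# One source image yields three variants, roughly tripling the data set.
img = rng.integers(0, 256, size=(640, 640, 3), dtype=np.uint8)
augmented = [add_random_noise(img), adjust_brightness(img), rotate(img)]
```

Applying all three transforms to every original image gives the roughly threefold expansion the text describes.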
Further, dividing the knife switch data set in step 3 means dividing the expanded knife switch data set into a training set, a test set and a verification set in the ratio 20:1:2.
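The 20:1:2 split can be sketched as follows (a minimal sketch; file handling and the YOLO-format export are omitted, and the shuffle seed is an assumption):

```python
import random

def split_dataset(items, ratios=(20, 1, 2), seed=42):
    """Shuffle and split items into train/test/validation sets by integer ratios."""
    items = list(items)
    random.Random(seed).shuffle(items)
    total = sum(ratios)
    n_train = len(items) * ratios[0] // total
    n_test = len(items) * ratios[1] // total
    train = items[:n_train]
    test = items[n_train:n_train + n_test]
    val = items[n_train + n_test:]          # remainder goes to validation
    return train, test, val

# With the 1800-image expanded data set from the description:
train, test, val = split_dataset(range(1800))
```

With 1800 images, the 20:1:2 ratio gives roughly 1565 training, 78 test, and 157 validation images.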
Further, fusing a channel attention mechanism to improve the YOLO-v8 network in step 4 is implemented as follows: the YOLO-v8 network architecture is preserved, and an attention mechanism module is added during the downsampling of the backbone network Darknet-53. The network takes a 640×640-pixel RGB image as input and extracts image features through four serially stacked Conv convolution layers and Residual Blocks. The tensor F of dimension W×H×D output by each residual block is taken as the input of the attention module; after an average pooling operation, the channel attention module generates channel weights of dimension 1×1×D by a one-dimensional convolution with kernel size n=5. The weights are passed through an activation function and restored in dimension to obtain the result F', which serves as the input of the next convolution layer.
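The module described (global average pooling, a one-dimensional convolution of kernel size 5 across the channel descriptor, then an activation yielding per-channel weights) matches the ECA-Net design. A NumPy sketch of the forward pass, with an untrained uniform kernel standing in for the learned convolution weights:

```python
import numpy as np

def channel_attention(F: np.ndarray, k: int = 5) -> np.ndarray:
    """ECA-style channel attention on a feature tensor F of shape (W, H, D).

    Steps: global average pool -> 1-D conv over the channel descriptor
    (kernel size k = 5) -> sigmoid -> channel-wise re-weighting of F.
    """
    W, H, D = F.shape
    desc = F.mean(axis=(0, 1))                        # (D,) channel descriptor
    padded = np.pad(desc, k // 2, mode="edge")        # keep output length D
    kernel = np.full(k, 1.0 / k)                      # stand-in for learned weights
    conv = np.convolve(padded, kernel, mode="valid")  # (D,)
    weights = 1.0 / (1.0 + np.exp(-conv))             # sigmoid -> values in (0, 1)
    return F * weights.reshape(1, 1, D)               # F', broadcast to (W, H, D)

F = np.random.default_rng(0).random((80, 80, 256)).astype(np.float32)
F_prime = channel_attention(F)
```

In the real network the kernel is trained end to end and the module sits after each residual block; the sketch only shows the data flow that produces F' from F.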
Further, fusing the channel attention mechanism to improve the YOLO-v8 network in step 4 also includes: the dense features obtained after downsampling are fed into a PAFPN module, which passes high-level strong semantic features top-down and adds a bottom-up pyramid that passes low-level strong localization features upward, enhancing the whole feature pyramid. Finally, a Decoupled-Head structure separates the classification task from the regression task, and the classification loss and regression loss are obtained after convolution. The classification loss uses the modified cross-entropy loss VFL Loss:
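The formula itself is not present in this text (it was presumably an image in the original). For reference, the standard Varifocal Loss, which matches the description of q (IoU between predicted and ground-truth boxes) and p (predicted score), can be written as:

```latex
\mathrm{VFL}(p, q) =
\begin{cases}
  -q\bigl(q\log p + (1-q)\log(1-p)\bigr), & q > 0 \\[4pt]
  -\alpha\, p^{\gamma}\log(1-p), & q = 0
\end{cases}
```

Here $\alpha$ and $\gamma$ are the focal weighting hyperparameters applied to negative samples; whether the patent uses exactly this form is an assumption based on the surrounding description.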
where q is the intersection-over-union of the predicted box and the ground-truth box, and p is the score; if the predicted box intersects the ground-truth box, i.e. q > 0, the sample is judged positive; if the two boxes do not intersect, q = 0 and the sample is negative. The regression loss uses Distribution Focal Loss + CIoU Loss, and the three losses are weighted proportionally to give the overall loss.
Further, in step 5, the improved network is trained to detect knife switch instance objects and generate mask images covering the knife switches, implemented as follows: the improved YOLO-v8 network of step 4 is trained with the knife switch data set divided in step 3 to obtain a trained weight model. The model takes the whole RGB image as input, detects all knife switch instance objects in the image, and simultaneously generates a mask image covering each knife switch.
Further, in step 6, the minimum circumscribed rectangle is constructed from the mask image as follows: the edge point set of the knife switch mask image from step 5, i.e. the knife switch contour points, is extracted with the Canny edge detection method. The minimum circumscribed rectangle is then constructed around the knife switch contour points, giving the rectangle's center point, length and width, and vertex coordinates.
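The contour-point extraction can be sketched without OpenCV: on a clean binary mask, a foreground pixel whose 4-neighbourhood touches the background is a contour point. In practice the Canny detector (e.g. cv2.Canny) would be applied to the mask image; the neighbour test below is a simplified stand-in that yields the same point set on a binary mask:

```python
import numpy as np

def mask_edge_points(mask: np.ndarray) -> np.ndarray:
    """Return (row, col) coordinates of the contour pixels of a binary mask.

    A foreground pixel is a contour pixel if at least one of its four
    neighbours is background -- a simplified stand-in for Canny on a mask.
    """
    m = mask.astype(bool)
    padded = np.pad(m, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &   # up and down neighbours
                padded[1:-1, :-2] & padded[1:-1, 2:])    # left and right neighbours
    return np.argwhere(m & ~interior)

# A 3x3 foreground block has 8 contour pixels (all but its centre).
mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 1
points = mask_edge_points(mask)
```

The resulting point set is what the minimum circumscribed rectangle (in practice cv2.minAreaRect) is fitted around.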
Further, in step 7, the knife switch angle is obtained by calculating the direction vector of the mask pattern, as follows: a plane rectangular coordinate system is constructed with the upper-left corner of the image as origin; with the top-left vertex of the minimum circumscribed rectangle from step 6 as the rotation point, rotation starts clockwise from the positive x-axis, and the first side coincided with during rotation is marked as the width; the normal vector of the width edge is calculated, which is the knife switch mask pattern direction vector; translating the vector's origin to the coordinate origin then gives the angle of the knife switch in the image plane.
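One plausible reading of this construction in plain Python (the vertex ordering and the image-coordinate convention, with y pointing down, are assumptions made for illustration, not taken from the patent):

```python
import math

def mask_direction_angle(rect_points):
    """Angle (degrees, in [0, 180)) of the mask direction vector.

    rect_points: four vertices of the minimum circumscribed rectangle,
    assumed ordered so rect_points[0] is the top-left vertex and
    rect_points[1] ends the 'width' edge (the first edge met when
    rotating clockwise from the +x axis, per the construction above).
    """
    (x0, y0), (x1, y1) = rect_points[0], rect_points[1]
    wx, wy = x1 - x0, y1 - y0        # width-edge vector
    nx, ny = -wy, wx                 # its normal: the mask direction vector
    # Translate to the origin and measure the angle; fold to [0, 180) since
    # the direction vector's sign does not matter for an angle comparison.
    return math.degrees(math.atan2(ny, nx)) % 180.0

# Axis-aligned rectangle: width edge along +x, direction vector along +y.
angle = mask_direction_angle([(0, 0), (4, 0), (4, 2), (0, 2)])
```

With cv2.minAreaRect the rectangle comes back as (center, (w, h), angle), and cv2.boxPoints recovers the four vertices this function expects.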
Further, in step 8, the open/closed state of the knife switch is judged as follows: the preset angle thresholds for the knife switch open/closed states are compared with the knife switch angle obtained in step 7 to judge the actual state, which is classified into three states: open, closed, and in action.
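The threshold comparison might look like the following. The threshold values, and which end of the angle range means "open", depend on the installation; the numbers here are illustrative assumptions, not taken from the patent:

```python
def classify_switch_state(angle_deg: float,
                          closed_max: float = 10.0,
                          open_min: float = 70.0) -> str:
    """Map a measured knife switch angle to one of three states.

    closed_max / open_min are preset, installation-specific thresholds
    (illustrative values, not specified in the source).
    """
    if angle_deg <= closed_max:
        return "closed"
    if angle_deg >= open_min:
        return "open"
    return "in action"   # mid-travel between the two positions
```

The "in action" band between the two thresholds captures a knife switch photographed while moving between positions.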
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The invention improves the YOLO-v8 network, chiefly by adding a channel attention module to the downsampling process of the convolutional neural network (CNN). A CNN model has certain limitations in target detection owing to convolution kernel size, feature pooling, and similar factors. By introducing the channel attention module, the invention extracts detail features in the image better, makes the network focus more on the knife switch target features, and improves the efficiency and accuracy of target detection.
2. The invention first detects the knife switch and then judges its open/closed state, which improves detection speed and removes the dependence of prior methods on a camera fixed at a given angle. The method is not disturbed by factors such as illumination and shadow, copes better with scenes where the knife switch is partially occluded, and effectively improves the stability and accuracy of knife switch state recognition in a contact network system.
Drawings
Fig. 1 is a flow chart of a knife switch state recognition method based on visual detection and mask pattern direction vectors.
FIG. 2 is a diagram of the network structure of the improved YOLO-v8 of the present invention.
FIG. 3 is a diagram of a channel attention module according to the present invention.
FIG. 4 is a graph showing the detection result of knife switch object according to the present invention.
FIG. 5 is a graph showing the result of the Canny edge detection method according to the present invention.
FIG. 6 is a diagram showing the result of identifying the status of the knife switch according to the present invention.
FIG. 7 is a graph comparing the training loss of the network before and after the model adds the attention module in the experiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
As shown in FIG. 1, a knife switch state identification method based on improved YOLO-v8 and mask pattern direction vectors comprises the following steps:
Step 1: network monitoring cameras in the video monitoring system collect camera video stream data at different time periods, positions and angles; the video streams are decomposed frame by frame into RGB images, duplicate and damaged images are screened out to form a knife switch data set of about 600 images, and the knife switches in the images are labeled with irregular polygon frames along their outer contours.
Step 2: data enhancement is applied to the images in the original knife switch data set by adding random noise, adjusting the ambient brightness and randomly rotating, and the corresponding labeling information is generated automatically, yielding a knife switch data set of 1800 RGB images of 640×640 pixels with corresponding labels.
Step 3: the knife switch data set is divided into a training set, a test set and a verification set in the ratio 20:1:2 and exported as data in YOLO-v8 format.
Step 4: as shown in fig. 2, the YOLO-v8 network body architecture is preserved, and an attention mechanism module is added during the downsampling of the backbone network Darknet-53. The network takes a 640×640-pixel RGB image as input and extracts image features through four serially stacked Conv convolution layers and Residual Blocks. The tensor F of dimension W×H×D output by each residual block is taken as the input of the attention module; after an average pooling operation, the channel attention module generates channel weights of dimension 1×1×D by a one-dimensional convolution with kernel size n=5. The weights are passed through an activation function and restored in dimension to obtain the result F', which serves as the input of the next convolution layer, as shown in fig. 3.
Furthermore, the dense features obtained after downsampling are fed into the PAFPN module, which passes high-level strong semantic features top-down and adds a bottom-up pyramid that passes low-level strong localization features upward, enhancing the whole feature pyramid. Finally, a Decoupled-Head structure separates the classification task from the regression task, and the classification loss and regression loss are obtained after convolution. The classification loss uses the modified cross-entropy loss VFL Loss:
where q is the intersection-over-union of the predicted box and the ground-truth box, and p is the score; if the predicted box intersects the ground-truth box, i.e. q > 0, the sample is judged positive; if the two boxes do not intersect, q = 0 and the sample is negative. The regression loss uses Distribution Focal Loss + CIoU Loss, and the three losses are weighted proportionally to give the overall loss.
Step 5: the improved YOLO-v8 network of step 4 is trained with the knife switch data set divided in step 3 to obtain a trained weight model. The model takes the whole RGB image as input, detects all knife switch instance objects in the image, and simultaneously generates a mask image covering each knife switch, as shown in figure 4.
Step 6: as shown in fig. 5, the edge point set of the knife switch mask image from step 5, i.e. the knife switch contour points, is extracted with the Canny edge detection method. The minimum circumscribed rectangle is then constructed around the knife switch contour points, giving the rectangle's center point, length and width, and vertex coordinates.
Step 7: a plane rectangular coordinate system is constructed with the upper-left corner of the image as origin; with the top-left vertex of the minimum circumscribed rectangle from step 6 as the rotation point, rotation starts clockwise from the positive x-axis, and the first side coincided with during rotation is marked as the width; the normal vector of the width edge is calculated, which is the knife switch mask pattern direction vector; translating the vector's origin to the coordinate origin then gives the angle of the knife switch in the image plane.
Step 8: the preset angle thresholds for the knife switch open/closed states are compared with the knife switch angle obtained in step 7 to judge the actual state, which is classified into three states: open, closed, and in action, as shown in fig. 6.
Table 1 gives the comparison of knife switch state detection success rates before and after the network improvement, and FIG. 7 compares the loss decline rate, accuracy and average precision (mAP 50-95) before and after the improvement. From Table 1 and FIG. 7 it can be seen that adding the attention module to the backbone network optimizes the knife switch state recognition result: the comprehensive detection success rate exceeds 99%, the loss converges faster and to a smaller value during training, and the accuracy is higher.
Table 1: the network improves the success rate of detecting the states of the front and rear disconnecting links.
In summary, the invention provides a knife switch state identification method based on improved YOLO-v8 and mask pattern direction vectors that can accurately and reliably judge the on/off state of a knife switch from an image, provides technical support for the intelligentization and informatization of the contact network system, and has broad market prospects.

Claims (10)

1. A knife switch state identification method based on visual detection and mask pattern direction vectors is characterized by comprising the following steps:
step 1: collecting a knife switch video stream from a video monitoring system, decomposing it frame by frame into RGB images, and labeling them;
step 2: processing the images acquired in step 1 with data enhancement methods to expand the data set;
step 3: dividing the knife switch data set expanded in step 2;
step 4: fusing a channel attention mechanism to improve the YOLO-v8 network, preserving the YOLO-v8 network body architecture and adding an attention mechanism module during the downsampling of the backbone network Darknet-53;
step 5: training the improved YOLO-v8 network of step 4 with the knife switch data set divided in step 3 to obtain a trained weight model, detecting knife switch instance objects and generating mask images covering the knife switches;
step 6: extracting the edge point set of the knife switch mask image from step 5, i.e. the knife switch contour points, with the Canny edge detection method, and constructing the minimum circumscribed rectangle around the knife switch contour points;
step 7: calculating the mask pattern direction vector to obtain the knife switch angle;
step 8: judging the actual open/closed state of the knife switch.
2. The knife switch state identification method based on visual detection and mask pattern direction vectors according to claim 1, characterized in that: in step 1, network monitoring cameras in the video monitoring system collect camera video stream data at different time periods, positions and angles; the video streams are decomposed frame by frame into RGB images, duplicate and damaged images are screened out to form the knife switch data set, and the knife switches in the images are labeled with irregular polygon frames along their outer contours.
3. The knife switch state identification method based on visual detection and mask pattern direction vectors according to claim 1, characterized in that: the data enhancement method in step 2 means that data enhancement is applied to the images in the original knife switch data set by adding random noise, adjusting the ambient brightness and randomly rotating, with corresponding labeling information generated automatically, yielding a knife switch data set of 1800 RGB images of 640×640 pixels with corresponding labels.
4. The knife switch state identification method based on visual detection and mask pattern direction vectors according to claim 1, characterized in that: dividing the knife switch data set in step 3 means dividing it into a training set, a test set and a verification set in the ratio 20:1:2 and exporting the data in YOLO-v8 format.
5. The knife switch state identification method based on visual detection and mask pattern direction vectors according to claim 1, characterized in that: in step 4, the network takes a 640×640-pixel RGB image as input and extracts image features through four serially stacked Conv convolution layers and Residual Blocks; the tensor F of dimension W×H×D output by each residual block is taken as the input of the attention module, after which an average pooling operation is performed, and the channel attention module generates channel weights of dimension 1×1×D by a one-dimensional convolution with kernel size n=5;
the weights are passed through an activation function and restored in dimension to obtain the result F', which serves as the input of the next convolution layer.
6. The knife switch state identification method based on visual detection and mask pattern direction vectors according to claim 1, characterized in that: in step 4, a channel attention mechanism is fused to improve the YOLO-v8 network; the dense features obtained after downsampling are fed into a PAFPN module, which passes high-level strong semantic features top-down and adds a bottom-up pyramid that passes low-level strong localization features upward, enhancing the whole feature pyramid; finally, a Decoupled-Head structure separates the classification task from the regression task, and the classification loss and regression loss are obtained after convolution.
7. The knife switch state identification method based on visual detection and mask pattern direction vectors according to claim 6, characterized in that: the classification loss uses the modified cross-entropy loss VFL Loss:
where q is the intersection-over-union of the predicted box and the ground-truth box, and p is the score; if the predicted box intersects the ground-truth box, i.e. q > 0, the sample is judged positive; if the two boxes do not intersect, q = 0 and the sample is negative;
the regression loss uses Distribution Focal Loss + CIoU Loss, and the three losses are weighted proportionally to give the overall loss.
8. The knife switch state identification method based on visual detection and mask pattern direction vectors according to claim 1, characterized in that: in step 6, the minimum circumscribed rectangle is constructed around the knife switch contour points, giving the rectangle's center point, length and width, and vertex coordinates.
9. The knife switch state identification method based on visual detection and mask pattern direction vectors according to claim 8, characterized in that: in step 7, the mask pattern direction vector is calculated to obtain the knife switch angle as follows: a plane rectangular coordinate system is constructed with the upper-left corner of the image as origin; with the top-left vertex of the minimum circumscribed rectangle from step 6 as the rotation point, rotation starts clockwise from the positive x-axis, and the first side coincided with during rotation is marked as the width; the normal vector of the width edge, i.e. the knife switch mask pattern direction vector, is calculated; translating the vector's origin to the coordinate origin then gives the angle of the knife switch in the image plane.
10. The knife switch state identification method based on visual detection and mask pattern direction vectors according to claim 1, characterized in that: in step 8, the open/closed state of the knife switch is judged as follows: the preset angle thresholds for the knife switch open/closed states are compared with the knife switch angle obtained in step 7 to judge the actual state, which is classified into three states: open, closed, and in action.
CN202410427906.5A 2024-04-10 2024-04-10 Knife switch state identification method based on visual detection and mask pattern direction vector Pending CN118015555A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410427906.5A CN118015555A (en) 2024-04-10 2024-04-10 Knife switch state identification method based on visual detection and mask pattern direction vector


Publications (1)

Publication Number Publication Date
CN118015555A true CN118015555A (en) 2024-05-10

Family

ID=90944967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410427906.5A Pending CN118015555A (en) 2024-04-10 2024-04-10 Knife switch state identification method based on visual detection and mask pattern direction vector

Country Status (1)

Country Link
CN (1) CN118015555A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10699168B1 (en) * 2018-12-29 2020-06-30 Alibaba Group Holding Limited Computer-executed method and apparatus for assessing vehicle damage
CN111523610A (en) * 2020-05-06 2020-08-11 青岛联合创智科技有限公司 Article identification method for efficient sample marking
CN113920190A (en) * 2021-06-30 2022-01-11 南京林业大学 Ginkgo flower spike orientation method and system
CN114463270A (en) * 2021-12-30 2022-05-10 浙江大华技术股份有限公司 Disconnecting link state identification method and device, electronic device and storage medium
CN114821042A (en) * 2022-04-27 2022-07-29 南京国电南自轨道交通工程有限公司 R-FCN disconnecting link detection method combining local features and global features
CN115719416A (en) * 2022-11-18 2023-02-28 中国南方电网有限责任公司超高压输电公司南宁监控中心 Disconnecting link state identification method and device, computer equipment and storage medium
CN116363574A (en) * 2023-02-09 2023-06-30 福建睿思特科技股份有限公司 Double-column disconnecting link state judging method and device based on Yolov7 key point detection
CN116645563A (en) * 2023-06-12 2023-08-25 重庆邮电大学 Typical traffic event detection system based on deep learning
CN116844228A (en) * 2023-06-26 2023-10-03 西南石油大学 Method for identifying actions of containment pandas based on space-time channel attention mechanism
CN116934722A (en) * 2023-07-27 2023-10-24 浙江工业大学 Small intestine micro-target detection method based on self-correction coordinate attention
CN117173148A (en) * 2023-09-19 2023-12-05 华大天元(北京)科技股份有限公司 Power station equipment defect identification method and related equipment
CN117456167A (en) * 2023-11-01 2024-01-26 南通大学 Target detection algorithm based on improved YOLOv8s
CN117475416A (en) * 2023-11-20 2024-01-30 西安热工研究院有限公司 Thermal power station pointer type instrument reading identification method, system, equipment and medium
CN117541586A (en) * 2024-01-10 2024-02-09 长春理工大学 Thyroid nodule detection method based on deformable YOLO

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Leng Ruixuan: "Application of a YOLOv8-based foreign-object recognition algorithm for power transmission lines", China Master's Theses Full-text Database, Engineering Science and Technology II, no. 3, 15 March 2024 (2024-03-15), pages 3 *
Zhang Shujin et al.: "Cow body segmentation method based on YOLO_v8n-seg-FCA-BiFPN", Transactions of the Chinese Society for Agricultural Machinery, vol. 55, no. 3, 25 March 2024 (2024-03-25), pages 2 *

Similar Documents

Publication Publication Date Title
CN110070033B (en) Method for detecting wearing state of safety helmet in dangerous working area in power field
CN111223088B (en) Casting surface defect identification method based on deep convolutional neural network
CN111428748B (en) HOG feature and SVM-based infrared image insulator identification detection method
CN106874894B (en) Human target detection method based on region-based fully convolutional network
CN112967243A (en) Deep learning chip packaging crack defect detection method based on YOLO
CN114022432B (en) Insulator defect detection method based on improved yolov5
CN106407928B (en) Transformer composite insulator casing monitoring method and system based on raindrop identification
CN111047554A (en) Composite insulator overheating defect detection method based on instance segmentation
CN107679495B (en) Detection method for movable engineering vehicles around power transmission line
CN110929593A (en) Real-time saliency-based pedestrian detection method based on detail discrimination
CN110399840A (en) Fast lawn semantic segmentation and boundary detection method
CN111402224A (en) Target identification method for power equipment
CN114648714A (en) YOLO-based workshop normative behavior monitoring method
CN112329771B (en) Deep learning-based building material sample identification method
CN115272204A (en) Bearing surface scratch detection method based on machine vision
CN112131924A (en) Transformer substation equipment image identification method based on density cluster analysis
Zhu et al. Towards automatic wild animal detection in low quality camera-trap images using two-channeled perceiving residual pyramid networks
CN113569981A (en) Power inspection bird nest detection method based on single-stage target detection network
CN110047041A (en) Space-frequency domain combined rain removal method for traffic surveillance video
CN110321890A (en) Digital instrument recognition method for power inspection robot
CN111597939B (en) High-speed rail line nest defect detection method based on deep learning
CN113052139A (en) Deep learning double-flow network-based climbing behavior detection method and system
CN113537079A (en) Target image angle calculation method based on deep learning
CN111767919B (en) Multilayer bidirectional feature extraction and fusion target detection method
CN111160372A (en) Large target identification method based on high-speed convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination