CN111738117B - Deep learning-based detection method for electric bucket tooth video key frame - Google Patents

Deep learning-based detection method for electric bucket tooth video key frame

Info

Publication number
CN111738117B
CN111738117B (application CN202010533835.9A)
Authority
CN
China
Prior art keywords
bucket tooth
electric
key frame
video
electric bucket
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010533835.9A
Other languages
Chinese (zh)
Other versions
CN111738117A (en)
Inventor
解治宇
徐连生
孙健
邓卓夫
柳小波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Angang Group Mining Co Ltd
Original Assignee
Angang Group Mining Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Angang Group Mining Co Ltd filed Critical Angang Group Mining Co Ltd
Priority to CN202010533835.9A
Publication of CN111738117A
Application granted
Publication of CN111738117B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/40 - Scenes; Scene-specific elements in video content
    • G06V 20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/42 - Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/56 - Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep-learning-based method for detecting electric bucket tooth video key frames. By using deep learning, the method mainly overcomes the shortcomings of traditional schemes, in which wrongly selected key frames lead to unrepresentative key frame images, poor real-time performance and a large computational load.

Description

Deep learning-based detection method for electric bucket tooth video key frame
Technical Field
The invention relates to a deep-learning-based method for detecting electric bucket tooth video key frames.
Background
Much of the video captured by monitoring equipment is redundant, and obtaining key frame images from this video data is a difficult problem. Previous approaches to extracting key frame images from video streams include extracting key frames with a K-means clustering method under an initial threshold, sampling-based key frame extraction, and extraction based on shot boundaries together with color and texture features. The clustering approach is sensitive to unreasonable threshold settings, so the extracted key frame images are often unsatisfactory. The sampling approach randomly draws frames from the video as bucket tooth key frames, but such key frames are not representative, which is clearly unreasonable. The shot-boundary and color-feature method determines bucket tooth key frame images from shot changes and color-feature changes in the video; because the key frame is determined from changes in the overall image information, this method easily selects wrong, unsatisfactory and unrepresentative key frames and increases the computational load.
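The clustering-style prior approach mentioned above can be illustrated with a short sketch. This is not part of the patent; the use of OpenCV and scikit-learn, the color-histogram features and the cluster count are assumptions chosen purely for illustration: frames are summarized by color histograms, grouped with K-means, and the frame closest to each cluster center is kept as a "key frame".

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def keyframes_by_clustering(video_path: str, n_clusters: int = 5):
    """Illustrative sketch of clustering-based key frame selection (prior approach)."""
    cap = cv2.VideoCapture(video_path)
    frames, feats = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # summarize each frame by a normalized 8x8x8 color histogram
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256]).flatten()
        feats.append(hist / (hist.sum() + 1e-8))
        frames.append(frame)
    cap.release()

    feats = np.asarray(feats)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    centres = np.stack([feats[labels == c].mean(axis=0) for c in range(n_clusters)])

    keyframes = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        # keep the frame whose histogram lies nearest the cluster center
        best = idx[np.argmin(np.linalg.norm(feats[idx] - centres[c], axis=1))]
        keyframes.append(frames[best])
    return keyframes
```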
Disclosure of Invention
To address the shortcomings of the prior art, the invention uses existing deep learning technology to provide a deep-learning-based method for detecting electric bucket tooth video key frames, applied mainly to finding and storing the required bucket tooth key frame images from the monitoring video of an electric shovel's bucket teeth in a mine.
The method proposed by the invention is realized as follows.
The invention discloses a deep-learning-based method for detecting electric bucket tooth video key frames, characterized in that the system comprises an acquisition part for the electric bucket tooth monitoring video, a bucket tooth video data processing part, a deep-learning-based part that obtains key frames from the electric bucket tooth video, and other application parts that use the obtained key frames. The method comprises the following main steps:
step 1: collect electric bucket tooth monitoring video data from a monitoring camera mounted above the electric shovel, and use it as the input for the subsequent extraction of electric bucket tooth video key frame images;
step 2: preprocess the electric bucket tooth monitoring video data;
step 3: build the electric bucket tooth video key frame image extractor;
step 3.1: construct an initial convolutional neural network for extracting electric bucket tooth video key frame images, and modify it on this basis;
step 3.2: based on multiple experiments, add a depth residual substructure to the original SegNet network and add a 1×1 convolution kernel before each pooling layer to realize cross-channel information interaction and integration, so that the subsequent deconvolution back to the original resolution gives a better result (a minimal sketch of this structure is given after this list of steps);
step 4: use the video preprocessed in step 2 as the input to the convolutional neural network for extracting electric bucket tooth video key frame images;
step 4.1: perform salient feature detection on the electric bucket tooth monitoring video with the key frame image extractor, and then obtain the key frame images in the monitoring video;
step 4.2: by performing saliency detection on the electric bucket tooth monitoring video, obtain the saliency feature image of the bucket teeth, i.e. the image of the bucket teeth in the monitoring video;
step 5: perform image processing on the obtained bucket tooth saliency feature image;
step 6: after the key frame images in the electric bucket tooth monitoring video have been acquired, use them for the subsequent detection of missing bucket teeth.
As a further refinement of the invention, step 2, the video stream data preprocessing, comprises: 1) applying video preprocessing, such as grayscale conversion or binary conversion, to the obtained electric bucket tooth monitoring video images, which further removes various influences caused by noise in the monitoring video.
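A minimal sketch of this preprocessing step, assuming OpenCV is used for the grayscale and binary conversions; Otsu's method for the binarization threshold is an assumption, as the patent only states that a binary conversion may be applied:

```python
import cv2

def preprocess_frame(frame_bgr, binarize: bool = False):
    """Convert a BGR frame to grayscale, optionally binarize it (Otsu threshold assumed)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)   # grayscale conversion
    if not binarize:
        return gray
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary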
As a further refinement of the invention, step 4 continues with image processing of the obtained bucket tooth key frame image: 1) the obtained bucket tooth key frame image is further processed with dilation and erosion; 2) the processed bucket tooth key frame image is then screened a second time: OpenCV is used to determine the area of the largest connected region in the image, and if this area is larger than a set threshold, the image is confirmed as a valid bucket tooth key frame image.
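The dilation-erosion and secondary screening described above can be sketched with OpenCV as follows. The kernel size, the 0.5 binarization cutoff for the saliency map and the area threshold are assumptions; the patent only states that the largest connected region is compared with a set threshold:

```python
import cv2
import numpy as np

def is_keyframe(saliency_map: np.ndarray, area_threshold: int = 500) -> bool:
    """Secondary screening: dilate/erode the saliency mask, keep the frame if the
    largest connected region exceeds the set area threshold."""
    mask = (saliency_map > 0.5).astype(np.uint8) * 255     # binarize the saliency map
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.dilate(mask, kernel, iterations=1)          # dilation ...
    mask = cv2.erode(mask, kernel, iterations=1)           # ... then erosion
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if num <= 1:                                           # only background found
        return False
    largest_area = stats[1:, cv2.CC_STAT_AREA].max()       # skip label 0 (background)
    return largest_area > area_threshold
```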
The invention has the following effects: the deep-learning-based method for detecting electric bucket tooth video key frames can effectively extract bucket tooth key frame images while the mine electric shovel is working, for use in the subsequent detection of missing bucket teeth, and the extracted video key frame images are representative.
Drawings
Fig. 1 is the flow chart of the deep-learning-based method for detecting bucket tooth video key frames.
Fig. 2 is the key frame extraction network of the deep-learning-based method for detecting electric bucket tooth video key frames.
Detailed Description
The following is a more detailed description of embodiments of the present invention. The embodiments and examples of the present invention are for illustrative purposes and are not intended to be limiting.
The deep-learning-based method for detecting electric bucket tooth video key frames adopted in this embodiment is shown in figs. 1 and 2.
The specific implementation steps are as follows:
step 1: collect video data from a monitoring camera mounted above the electric shovel and use it as the input for the subsequent extraction of bucket tooth key frame images;
step 2: preprocess the video data with grayscale conversion or binary conversion;
step 3: use the video preprocessed in step 2 as the input to the convolutional neural network for extracting bucket tooth key frame images, and perform saliency detection on the bucket tooth video frames to obtain the bucket tooth key frame images; the specific method is as follows: based on multiple experiments, a depth residual substructure is added to the fully convolutional network and a 1×1 convolution kernel is added before each pooling layer to realize cross-channel information interaction and integration, after which deconvolution restores the original resolution, so that the extracted bucket tooth video key frame images are highly representative; the structure of the convolutional neural network for extracting bucket tooth video key frame images is shown in fig. 2;
step 4: apply further dilation and erosion to the obtained bucket tooth saliency feature image;
step 5: screen the processed bucket tooth saliency feature image a second time: compute the area of the largest connected region, and if it is larger than a set threshold, judge the frame to be a bucket tooth key frame image of the video; after the bucket tooth key frame images are acquired, these key frame images can be used in the subsequent application for detecting missing electric bucket teeth (an end-to-end sketch of this pipeline follows this list of steps).
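For orientation only, the steps above can be strung together into one hedged end-to-end loop. It reuses the `KeyFrameExtractor` and `is_keyframe` sketches given earlier; the resize dimensions, the single-frame batching and the output file naming are assumptions, not part of the patent:

```python
import cv2
import torch

def extract_keyframes(video_path: str, model: torch.nn.Module, out_dir: str = "keyframes") -> int:
    """Run the key-frame extractor over a monitoring video and save surviving frames."""
    model.eval()
    cap = cv2.VideoCapture(video_path)
    saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # resize so the spatial size is divisible by the pooling stages (assumption)
        resized = cv2.resize(frame, (256, 256))
        tensor = torch.from_numpy(resized).float().permute(2, 0, 1).unsqueeze(0) / 255.0
        with torch.no_grad():
            saliency = model(tensor)[0, 0].numpy()      # per-pixel tooth saliency map
        if is_keyframe(saliency):                       # secondary screening (sketch above)
            cv2.imwrite(f"{out_dir}/keyframe_{saved:05d}.png", frame)
            saved += 1
    cap.release()
    return saved
```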
By using deep learning, the invention overcomes the shortcomings of traditional schemes, in which wrongly selected key frames lead to unrepresentative results and a large computational load; it can effectively extract electric bucket tooth key frame images while the mine electric shovel is working, for use in the subsequent detection of missing electric bucket teeth.

Claims (3)

1. A method for detecting bucket tooth key frames in electric shovel monitoring video based on deep learning, characterized by comprising: an acquisition part for the electric bucket tooth monitoring video, a video data preprocessing part, a deep-learning-based part that obtains electric bucket tooth key frames from the electric bucket tooth monitoring video, and an application part that uses the key frames obtained by the method; the method comprises the following main steps:
step 1: collecting electric bucket tooth monitoring video data from a monitoring camera mounted above the electric shovel, and using it as the input for the subsequent extraction of electric bucket tooth video key frame images;
step 2: preprocessing the electric bucket tooth monitoring video data;
step 3: building the electric bucket tooth video key frame image extractor;
step 3.1: constructing an initial convolutional neural network for extracting electric bucket tooth video key frame images, and modifying it on this basis;
step 3.2: based on multiple experiments, adding a depth residual substructure to the original SegNet network and adding a 1×1 convolution kernel before each pooling layer to realize cross-channel information interaction and integration, so that the subsequent deconvolution back to the original resolution gives a better result;
step 4: using the video preprocessed in step 2 as the input to the convolutional neural network for extracting electric bucket tooth video key frame images;
step 4.1: performing salient feature detection on the electric bucket tooth monitoring video with the key frame image extractor, and then obtaining the key frame images in the monitoring video;
step 4.2: obtaining, by performing saliency detection on the electric bucket tooth monitoring video, the saliency feature image of the bucket teeth, i.e. the image of the bucket teeth in the monitoring video;
step 5: performing image processing on the obtained bucket tooth saliency feature image;
wherein in step 5 the obtained bucket tooth saliency feature image is further processed by: 1) applying further dilation and erosion to the obtained bucket tooth saliency feature image; 2) then screening the processed bucket tooth saliency feature image a second time: using OpenCV to determine the area of the largest connected region in the image, and if this area is larger than a set threshold, confirming the image as a valid bucket tooth key frame image;
step 6: after the key frame images in the electric bucket tooth monitoring video have been acquired, using them for the subsequent detection of missing electric bucket teeth.
2. The method for detecting bucket tooth key frames in electric shovel monitoring video based on deep learning according to claim 1, characterized in that: in step 2, the monitoring video stream data preprocessing comprises: performing grayscale conversion or binary conversion on the obtained electric bucket tooth monitoring video images to reduce the influence of color.
3. The method for detecting bucket tooth key frames in electric shovel monitoring video based on deep learning according to claim 1, characterized in that: in step 3, for extracting the bucket tooth video key frame images of the electric shovel, a depth residual substructure is added to the fully convolutional network based on multiple experiments, a 1×1 convolution kernel is added before each pooling layer to realize cross-channel information interaction and integration, and the bucket tooth key frame images in the video are then obtained by deconvolution back to the original resolution.
CN202010533835.9A, filed 2020-06-12: Deep learning-based detection method for electric bucket tooth video key frame. Granted as CN111738117B (Active).

Priority Applications (1)

Application CN202010533835.9A, priority date and filing date 2020-06-12: Deep learning-based detection method for electric bucket tooth video key frame

Applications Claiming Priority (1)

Application CN202010533835.9A, priority date and filing date 2020-06-12: Deep learning-based detection method for electric bucket tooth video key frame

Publications (2)

Publication number and publication date:
CN111738117A (en), published 2020-10-02
CN111738117B (en), published 2023-12-19

Family

ID=72648959

Family Applications (1)

Application CN202010533835.9A (Active, granted as CN111738117B), priority date and filing date 2020-06-12: Deep learning-based detection method for electric bucket tooth video key frame

Country Status (1)

Country Link
CN (1) CN111738117B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112561912B * (priority 2021-02-20, published 2021-06-01, Sichuan University): Medical image lymph node detection method based on priori knowledge

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921130A * (priority 2018-07-26, published 2018-11-30, Liaocheng University): Video key frame extraction method based on salient regions
CN110008804A * (priority 2018-12-12, published 2019-07-12, Zhejiang Xinzailing Technology Co., Ltd.): Elevator monitoring key frame acquisition and detection method based on deep learning
CN110826491A * (priority 2019-11-07, published 2020-02-21, Beijing University of Technology): Video key frame detection method based on cascaded manual features and depth features

Also Published As

CN111738117A (en), published 2020-10-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant