CN112101260A - Method, device, equipment and storage medium for identifying safety belt of operator


Info

Publication number
CN112101260A
CN112101260A (application CN202011002640.8A)
Authority
CN
China
Prior art keywords
image
area
preset
target operator
preprocessed
Prior art date
Legal status
Granted
Application number
CN202011002640.8A
Other languages
Chinese (zh)
Other versions
CN112101260B (en)
Inventor
方燕琼
涂小涛
郑培文
胡春潮
伍晓泉
李晓枫
Current Assignee
Guangdong Electric Power Science Research Institute Energy Technology Co Ltd
Original Assignee
Guangdong Electric Power Science Research Institute Energy Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Electric Power Science Research Institute Energy Technology Co Ltd
Priority to CN202011002640.8A
Publication of CN112101260A
Application granted
Publication of CN112101260B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/50: Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/56: Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method, a device, equipment and a storage medium for identifying the safety belt of an operator, wherein the method comprises the following steps: acquiring a working area image; preprocessing the working area image to generate a preprocessed image; detecting whether a target operator exists in the preprocessed image; when a target operator exists in the preprocessed image, determining an image of the area to be identified of the target operator from the preprocessed image; generating a comprehensive feature from the histogram of oriented gradients (HOG) feature and the color histogram (HOC) feature extracted from the image of the area to be identified; and inputting the comprehensive feature into a preset SVM classification model and outputting a safety belt identification result. The accuracy of safety belt identification is thereby improved, operators can be reminded in time, and safety accidents are avoided.

Description

Method, device, equipment and storage medium for identifying safety belt of operator
Technical Field
The invention relates to the technical field of image recognition, in particular to a method, a device, equipment and a storage medium for recognizing a safety belt of an operator.
Background
At present, some tasks on electric power construction sites must be completed by operators working at height. Once a certain height is reached, an operator must wear a safety belt during the operation to ensure his own safety. However, some operators fail to wear a safety belt for their own reasons, creating personal-safety risks, and manual supervision faces various objective limitations; if supervision is not timely, personal safety accidents easily occur.
For safety supervision of construction sites and of the safety-belt wearing of climbing operators, a safety-belt wearing detection method based on image recognition has been adopted. First, the foreground is extracted from the acquired video image by background subtraction and binarized to segment moving targets. Then, methods such as scale filtering are applied to the characteristics of each target to distinguish whether a moving target represents a person, and human moving targets are tracked and marked. Finally, two detection lines are set in the middle of the road surface; when a human moving target reaches the area between the two detection lines, whether the person wears a safety belt is judged by examining the distribution of chromatic values of the pixel points in the 2/3 portion of the moving target.
However, this identification method can only perform safety-belt wearing detection on all people crossing the detection lines; in actual operation there may be management personnel in the image, so the method is prone to false identification. Moreover, because the operating scene is relatively complex and safety belts occupy a small proportion of the picture, a safety-belt identification method based only on the color histogram (HOC) is easily affected by factors such as weather, illumination, shadow or viewing angle, and its identification accuracy is low.
Disclosure of Invention
The invention provides a method, a device, equipment and a storage medium for identifying the safety belt of an operator, solving the technical problem that safety-belt identification methods in the prior art cannot automatically identify operators and are easily influenced by environmental factors, resulting in low identification accuracy.
The invention provides a worker safety belt identification method, which comprises the following steps:
acquiring an image of a working area;
preprocessing the image of the operation area to generate a preprocessed image;
detecting whether a target operator exists in the preprocessed image;
when a target operator exists in the preprocessed image, determining an image of a region to be identified of the target operator from the preprocessed image;
generating a comprehensive feature from the histogram of oriented gradients (HOG) feature and the color histogram (HOC) feature extracted from the image of the area to be identified;
and inputting the comprehensive characteristics into a preset SVM classification model, and outputting a safety belt identification result.
Optionally, the step of preprocessing the image of the work area to generate a preprocessed image includes:
detecting a plurality of color components of the work area image; the plurality of color components includes a red color component, a green color component, and a blue color component;
calculating a gray value of the working area image by using the red component, the green component and the blue component;
constructing a gray image according to the gray value of the operation area image;
and denoising the gray level image to generate a preprocessed image.
Optionally, the step of detecting whether a target operator exists in the preprocessed image includes:
inputting the preprocessed image into a preset personnel detection model and a preset ladder detection model, and generating a first image frame and a second image frame on the preprocessed image; the first image frame is used for identifying the position of a person, and the second image frame is used for identifying the position of a ladder;
when there is an overlapping portion between the first image frame and the second image frame, calculating a first ratio between an area of the overlapping portion and an area of the second image frame;
calculating a first distance between a bottom edge of the overlapping portion and a bottom edge of the second image frame;
calculating a second distance between a top edge of the second image frame and a bottom edge of the second image frame;
determining a second ratio between the first distance and the second distance;
and when the first ratio is greater than or equal to a first preset threshold value and the second ratio is greater than or equal to a second preset threshold value, determining that a target operator exists in the preprocessed image.
Optionally, when there is a target operator in the preprocessed image, the step of determining an image of an area to be identified of the target operator from the preprocessed image includes:
when a target operator exists in the preprocessed image, capturing an image in the first image frame from the preprocessed image as a person region image;
inputting the personnel area image into a preset two-dimensional attitude estimation network model, and determining two-dimensional joint coordinates of personnel corresponding to the personnel area image;
if the matching number of the two-dimensional joint coordinates and preset two-dimensional climbing posture coordinates is larger than a first preset number threshold, determining that the posture of the person corresponding to the person region image is a climbing posture;
and intercepting a preset proportion image of the personnel area image as an image of an area to be identified of the target operator.
Optionally, the method further comprises:
inputting the two-dimensional joint coordinates into a preset three-dimensional attitude estimation network model, and determining three-dimensional joint coordinates of personnel corresponding to the personnel area image;
if the matching number of the three-dimensional joint coordinates and the preset three-dimensional climbing posture coordinates is larger than a second preset number threshold, determining that the posture of the person corresponding to the person region image is a climbing posture;
and intercepting a preset proportion image of the personnel area image as an image of an area to be identified of the target operator.
Optionally, the step of generating a comprehensive feature from the histogram of oriented gradients (HOG) feature and the color histogram (HOC) feature extracted from the image of the region to be identified includes:
removing the background of the image of the area to be identified, filtering and generating an image to be processed;
dividing the image to be processed into a plurality of pixel small blocks;
calculating a gradient direction histogram of the plurality of pixel small blocks to obtain an HOG characteristic;
loading the image to be processed in a preset HSV color space, and determining the hue and saturation value of each pixel point in the image to be processed;
constructing a color histogram based on the hue and saturation values of the pixels to obtain an HOC characteristic;
and carrying out normalization processing on the HOG characteristic and the HOC characteristic and splicing to generate comprehensive characteristics.
Optionally, the safety belt recognition result includes that the target operator has a safety belt and the target operator does not have a safety belt, and the step of inputting the comprehensive characteristics into a preset SVM classification model and outputting the safety belt recognition result includes:
judging whether a safety belt exists in the image of the area to be recognized corresponding to the comprehensive characteristics or not according to the comprehensive characteristics through the preset SVM classification model;
if yes, outputting a prompt that the target operator has a safety belt;
and if not, outputting a prompt that the target operator does not have the safety belt.
The invention provides an operator safety belt recognition device, comprising:
the image acquisition module of the area to be operated is used for acquiring an image of the operation area;
the preprocessing image generation module is used for preprocessing the image of the operation area to generate a preprocessing image;
the target operator detection module is used for detecting whether a target operator exists in the preprocessed image;
the image recognition device comprises a to-be-recognized area image determining module, a to-be-recognized area image determining module and a recognition module, wherein the to-be-recognized area image determining module is used for determining an image of a to-be-recognized area of a target operator from the preprocessed image when the target operator exists in the preprocessed image;
the comprehensive feature generation module is used for generating a comprehensive feature from the histogram of oriented gradients (HOG) feature and the color histogram (HOC) feature extracted from the image of the region to be identified;
and the safety belt recognition result output module is used for inputting the comprehensive characteristics into a preset SVM classification model and outputting a safety belt recognition result.
Optionally, the preprocessed image generating module includes:
a color component detection sub-module for detecting a plurality of color components of the work area image; the plurality of color components includes a red color component, a green color component, and a blue color component;
the gray value calculation submodule is used for calculating the gray value of the working area image by adopting the red component, the green component and the blue component;
the gray level image construction submodule is used for constructing a gray level image according to the gray level value of the operation area image;
and the pre-processing image generation submodule is used for carrying out denoising processing on the gray level image to generate a pre-processing image.
Optionally, the target operator detection module includes:
the image frame generation sub-module is used for inputting the preprocessed image into a preset personnel detection model and a preset ladder detection model, and generating a first image frame and a second image frame on the preprocessed image; the first image frame is used for identifying the position of a person, and the second image frame is used for identifying the position of a ladder;
a first ratio calculation sub-module for calculating a first ratio between an area of an overlapping portion and an area of the second image frame when the overlapping portion exists between the first image frame and the second image frame;
a first distance calculating submodule for calculating a first distance between a bottom side of the overlapping portion and a bottom side of the second image frame;
a second distance calculating submodule for calculating a second distance between a top edge of the second image frame and a bottom edge of the second image frame;
a second ratio determination submodule for determining a second ratio between the first distance and the second distance;
and the target operator detection submodule is used for determining that a target operator exists in the preprocessed image when the first proportion is greater than or equal to a first preset threshold value and the second proportion is greater than or equal to a second preset threshold value.
Optionally, the module for determining an image of the area to be identified includes:
a personnel area image intercepting submodule, configured to intercept, when a target operator exists in the preprocessed image, an image in the first image frame from the preprocessed image as a personnel area image;
the two-dimensional joint coordinate determination submodule is used for inputting the personnel area image into a preset two-dimensional attitude estimation network model and determining two-dimensional joint coordinates of personnel corresponding to the personnel area image;
the first climbing posture determining submodule is used for determining the posture of a person corresponding to the person region image as a climbing posture if the matching number of the two-dimensional joint coordinates and preset two-dimensional climbing posture coordinates is greater than a first preset number threshold;
and the first to-be-identified area image intercepting submodule is used for intercepting a preset proportion image of the personnel area image as the to-be-identified area image of the target operator.
Optionally, the module for determining an image of the area to be identified further includes:
the three-dimensional joint coordinate determination submodule is used for inputting the two-dimensional joint coordinates into a preset three-dimensional posture estimation network model and determining the three-dimensional joint coordinates of the personnel corresponding to the personnel area image;
the second climbing posture determining submodule is used for determining the posture of the person corresponding to the person region image as a climbing posture if the matching number of the three-dimensional joint coordinates and the preset three-dimensional climbing posture coordinates is greater than a second preset number threshold;
and the second to-be-identified area image intercepting submodule is used for intercepting a preset proportion image of the personnel area image as the to-be-identified area image of the target operator.
Optionally, the comprehensive feature generation module includes:
the image to be processed generation submodule is used for removing the background of the image in the area to be identified, filtering the image and generating an image to be processed;
the pixel small block dividing submodule is used for dividing the image to be processed into a plurality of pixel small blocks;
the HOG characteristic determination submodule is used for calculating the gradient direction histograms of the pixel small blocks to obtain the HOG characteristic;
the pixel value determining submodule is used for loading the image to be processed in a preset HSV color space and determining the value of hue and saturation of each pixel point in the image to be processed;
the HOC characteristic determination submodule is used for constructing a color histogram based on the hue and saturation values of the pixel points to obtain the HOC characteristic;
and the comprehensive characteristic generation submodule is used for carrying out normalization processing on the HOG characteristic and the HOC characteristic and splicing to generate comprehensive characteristics.
Optionally, the safety belt recognition result includes that the target operator has a safety belt and that the target operator does not have a safety belt, and the safety belt recognition result output module includes:
the safety belt judgment sub-module is used for judging whether a safety belt exists in the image of the area to be identified corresponding to the comprehensive characteristics or not according to the comprehensive characteristics through the preset SVM classification model;
the first prompting submodule is used for outputting a prompt that the target operator has a safety belt if the target operator has the safety belt;
and the second prompting submodule is used for outputting the prompt that the target operator does not have the safety belt if the target operator does not have the safety belt.
The invention further provides an electronic device, which comprises a memory and a processor, wherein the memory stores a computer program, and when the computer program is executed by the processor, the processor executes the steps of the worker safety belt identification method according to any one of the embodiments.
The present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for identifying the safety belt of an operator as described in any of the above embodiments.
According to the technical scheme, the invention has the following advantages:
in the embodiment of the invention, the acquired working area image is preprocessed to generate a preprocessed image; when a target operator exists in the preprocessed image, the image of the area to be identified of the target operator is determined; the corresponding HOG feature and HOC feature are extracted from the image of the area to be identified to obtain a comprehensive feature; and finally the comprehensive feature is input into a preset SVM classification model, which outputs the safety belt identification result indicating whether the target operator is wearing a safety belt.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a flowchart illustrating steps of a method for identifying a safety belt of an operator according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps of a method for identifying operator safety belts in accordance with an alternative embodiment of the present invention;
FIG. 3 is a schematic diagram of an overlapping portion of a first image frame and a second image frame according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating the steps of determining an image of an area to be identified of the target operator according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of two-dimensional attitude estimation results and three-dimensional attitude estimation results according to an embodiment of the invention;
FIG. 6 is a schematic diagram of a three-dimensional pose estimation network model according to an embodiment of the present invention;
FIG. 7 is a block diagram illustrating a process of determining whether a worker is in a climbing posture according to an embodiment of the present invention;
fig. 8 is a block diagram of an operator seatbelt identification device according to an embodiment of the present invention.
Detailed Description
In the prior art, positive and negative samples of pedestrian targets in the scene to be monitored are collected, a classifier for pedestrian detection is trained based on histogram of oriented gradients (HOG) features and a support vector machine (SVM), pedestrian targets entering the scene are detected, and the detected pedestrians are then analyzed for their dress. Such pedestrian detection is easily influenced by environmental factors and cannot identify whether a pedestrian is an operator, so its identification accuracy is low. The embodiments of the invention provide a method, a device, equipment and a storage medium for identifying the safety belt of an operator, to solve the technical problem that safety-belt identification methods in the prior art cannot automatically identify operators and are easily influenced by environmental factors, resulting in low identification accuracy.
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating steps of a method for identifying a seat belt of an operator according to an embodiment of the present invention.
The invention provides a worker safety belt identification method, which comprises the following steps:
step 101, acquiring a working area image;
in the embodiment of the invention, when it is necessary to judge whether an operator is wearing a safety belt, an image of the working area where the operator is located can be acquired by a camera or another image acquisition device.
Step 102, preprocessing the image of the operation area to generate a preprocessed image;
in a specific implementation, the acquired working area image may contain noise or colors that are difficult to distinguish. The working area image is therefore preprocessed to eliminate the noise and unify its gray values, generating a preprocessed image.
Step 103, detecting whether a target operator exists in the preprocessed image;
after the preprocessed image is acquired, in order to reduce processor load and improve the efficiency of safety belt identification, it is first detected whether a target operator exists in the preprocessed image; if one exists, whether the target operator is wearing a safety belt is then judged.
If no target operator exists, a preprocessed image generated from another working area image can be acquired and detection continues.
Step 104, when a target operator exists in the preprocessed image, determining an image of a region to be identified of the target operator from the preprocessed image;
in an example of the present invention, when a target operator exists in the preprocessed image, it must be detected whether the target operator is wearing a safety belt. To avoid false detection (for example, when the target operator is already on the ground, or when no operator is actually on the ladder), the image of the area to be identified of the target operator is determined from the preprocessed image, so as to verify that the target operator is in a state of climbing operation.
Step 105, generating a comprehensive feature from the histogram of oriented gradients (HOG) feature and the color histogram (HOC) feature extracted from the image of the region to be identified;
once the image of the area to be identified is obtained, the target operator is known to be in a state of climbing operation. The corresponding HOG feature and HOC feature are then extracted from the image of the area to be identified so that the presence and position of a safety belt can be recognized, and, to improve the efficiency of the subsequent safety belt identification, the HOG feature and the HOC feature are combined into a comprehensive feature.
It is worth mentioning that the HOG (histogram of oriented gradients) feature is a descriptor that quickly describes the local gradient features of an object. A window is first divided into several blocks, each block is divided into several cells, and the gradient direction histogram within each cell is counted as that cell's feature vector; the feature vectors of the cells in a block are then concatenated into the block's feature vector, and the feature vectors of all blocks are finally concatenated into the HOG descriptor of the window.
The HOC (histogram of colors) feature refers to a descriptor that can describe the proportion of different colors in the whole image.
Step 106, inputting the comprehensive feature into a preset SVM classification model and outputting the safety belt identification result.
A support vector machine (SVM) is a binary classification model whose basic form is a maximum-margin linear classifier in feature space; its learning strategy is margin maximization, which can ultimately be formulated as solving a convex quadratic programming problem.
In the embodiment of the invention, the acquired working area image is preprocessed to generate a preprocessed image; when a target operator exists in the preprocessed image, the image of the area to be identified of the target operator is determined; the corresponding HOG feature and HOC feature are extracted from the image of the area to be identified to obtain a comprehensive feature; and finally the comprehensive feature is input into a preset SVM classification model, which outputs the safety belt identification result indicating whether the target operator is wearing a safety belt.
Referring to fig. 2, fig. 2 is a flowchart illustrating steps of a method for identifying a seat belt of an operator according to an alternative embodiment of the present invention.
The invention provides a worker safety belt identification method, which comprises the following steps:
step 201, acquiring a working area image;
in the embodiment of the present invention, the specific implementation process of step 201 is similar to that of step 101, and is not described herein again.
Step 202, preprocessing the image of the operation area to generate a preprocessed image;
optionally, the step 202 may include the steps of:
detecting a plurality of color components of the work area image; the plurality of color components includes a red color component, a green color component, and a blue color component;
calculating a gray value of the working area image by using the red component, the green component and the blue component;
constructing a gray image according to the gray value of the operation area image;
and denoising the gray level image to generate a preprocessed image.
In the embodiment of the present invention, the acquired working area images are generally color images. Since human vision has different sensitivities to red, green and blue, the color image is converted to gray scale by a weighted-average method to construct a gray image.
Taking a YUV color space as an example, a Y component in the YUV space is used to construct a gray image, and the corresponding linear relationship between the brightness Y and three color components of R (red component), G (green component), and B (blue component) is shown in the following formula:
Y=0.299×R+0.587×G+0.114×B
where Y represents both the luminance value and the grayscale value of the image, and R, G, B represents the values on R, G, B three components in the RGB color space of the image.
YUV is a color encoding space; related terms such as Y′UV, YUV, YCbCr and YPbPr overlap in usage and may all be called YUV. "Y" denotes the luma (brightness) or gray value, while "U" and "V" denote the chroma, describing the hue and saturation used to specify the color of a pixel.
After the gray scale image is constructed, noise may exist in the image due to environmental influences during acquisition, such as camera shake; the constructed gray scale image therefore needs to be denoised to generate the preprocessed image.
For example, a neighborhood mean filtering method may be used for denoising. The algorithm proceeds as follows: let f(x, y) be an image given as an N-order square matrix and let g(x, y) denote the smoothed image; each pixel is replaced by the mean of the pixels in its designated neighborhood, so as to eliminate abrupt pixel values and filter out certain noise. Mathematically, the neighborhood mean filter is:

g(x, y) = (1/A) × Σ_{(i, j) ∈ S} f(i, j)

where S is the set of coordinates of the pixels in the neighborhood centered on pixel (x, y), and A is the number of pixels in S.
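As a minimal illustrative sketch of this preprocessing chain in Python (assuming an RGB input array; the function names and the 3 × 3 window size are choices of this sketch, not values fixed by the patent):

    import numpy as np

    def preprocess(work_area_image: np.ndarray, k: int = 3) -> np.ndarray:
        """Weighted-average grayscale conversion followed by k x k neighborhood mean filtering."""
        rgb = work_area_image.astype(np.float64)
        # Y = 0.299 R + 0.587 G + 0.114 B, used as the gray value.
        gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
        # Neighborhood mean filter: each pixel becomes the mean of the k x k
        # window S centered on it (A = k * k pixels); in practice cv2.blur
        # performs the same operation.
        pad = k // 2
        padded = np.pad(gray, pad, mode="edge")
        denoised = np.empty_like(gray)
        h, w = gray.shape
        for y in range(h):
            for x in range(w):
                denoised[y, x] = padded[y:y + k, x:x + k].mean()
        return denoised.astype(np.uint8)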
Step 203, detecting whether a target operator exists in the preprocessed image;
further, the step 203 may include the steps of:
inputting the preprocessed image into a preset personnel detection model and a preset ladder detection model, and generating a first image frame and a second image frame on the preprocessed image; the first image frame is used for identifying the position of a person, and the second image frame is used for identifying the position of a ladder;
when there is an overlapping portion between the first image frame and the second image frame, calculating a first ratio between an area of the overlapping portion and an area of the second image frame;
calculating a first distance between a bottom edge of the overlapping portion and a bottom edge of the second image frame;
calculating a second distance between a top edge of the second image frame and a bottom edge of the second image frame;
determining a second ratio between the first distance and the second distance;
and when the first ratio is greater than or equal to a first preset threshold value and the second ratio is greater than or equal to a second preset threshold value, determining that a target operator exists in the preprocessed image.
In a specific implementation, the preset personnel detection model and the preset ladder detection model can be obtained by training a target detection model such as M2Det: a personnel detector (M2Det1) and a ladder detector (M2Det2) are trained with a large number of images of personnel and ladders annotated with ground-truth bounding boxes; images are input into the personnel detector (M2Det1) and the ladder detector (M2Det2), feature vectors are computed through the network, and the models are corrected by backpropagation to obtain the personnel detection model and the ladder detection model.
In the embodiment of the present invention, after the first image frame and the second image frame are acquired, whether a target operator exists in the preprocessed image may be determined according to a position relationship between the first image frame and the second image frame, where the target operator refers to an operator on a ladder.
Referring to fig. 3, fig. 3 is a schematic diagram illustrating the overlapping portion of a first image frame and a second image frame according to an embodiment of the present invention, including a first image frame B_p and a second image frame B_l. The area of the first image frame is S_bp, the area of the second image frame is S_bl, and the area of the overlapping portion is S_i; the distance between the top edge and the bottom edge of the second image frame is H_l, and the distance between the bottom edge of the overlapping portion and the bottom edge of the second image frame is H_d.
In a specific implementation, when an overlapping portion exists between the first image frame and the second image frame, the first ratio T_1 = S_i / S_bl between S_i and S_bl and the second ratio T_2 = H_d / H_l between H_d and H_l are calculated. When the first ratio reaches the first preset threshold and the second ratio reaches the second preset threshold, it is determined that a target operator exists in the preprocessed image.
Optionally, when there is no overlapping portion between the first image frame and the second image frame, it is determined that there is no target operator in the preprocessed image.
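A sketch of this target-operator test, assuming boxes given as (x1, y1, x2, y2) pixel coordinates with y increasing downward; the two threshold values are placeholders, since the preset thresholds are not fixed here:

    def is_target_operator(person_box, ladder_box, t1_min=0.3, t2_min=0.2):
        px1, py1, px2, py2 = person_box   # first image frame B_p (person)
        lx1, ly1, lx2, ly2 = ladder_box   # second image frame B_l (ladder)
        # Overlapping portion of the two frames.
        ox1, oy1 = max(px1, lx1), max(py1, ly1)
        ox2, oy2 = min(px2, lx2), min(py2, ly2)
        if ox1 >= ox2 or oy1 >= oy2:
            return False                  # no overlap: no target operator
        s_i = (ox2 - ox1) * (oy2 - oy1)   # overlap area S_i
        s_bl = (lx2 - lx1) * (ly2 - ly1)  # ladder-frame area S_bl
        t1 = s_i / s_bl                   # first ratio T_1
        h_d = ly2 - oy2                   # overlap bottom to ladder-frame bottom (H_d)
        h_l = ly2 - ly1                   # ladder-frame height (H_l)
        t2 = h_d / h_l                    # second ratio T_2
        return t1 >= t1_min and t2 >= t2_min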
Step 204, when a target operator exists in the preprocessed image, determining an image of a region to be identified of the target operator from the preprocessed image;
referring to FIG. 4, in one example of the present invention, the step 204 may include the following steps S1-S4:
S1, when a target operator exists in the preprocessed image, capturing an image in the first image frame from the preprocessed image to serve as a person region image;
S2, inputting the person region image into a preset two-dimensional posture estimation network model, and determining the two-dimensional joint coordinates of the person corresponding to the person region image;
in a specific implementation, when a target operator exists in the preprocessed image, an image in the first image frame may be cut from the preprocessed image to serve as a person region image and input into the two-dimensional pose estimation network model, so as to determine two-dimensional joint coordinates of a person corresponding to the person region image.
It is worth mentioning that the process of determining the two-dimensional joint coordinates by the two-dimensional pose estimation network model may be as follows:
extracting features from the input person region image using the same structure as the first 10 layers of the VGG-19 network;
feeding the extracted person region image features into a two-branch multi-stage convolutional neural network, where the first branch predicts a set of two-dimensional confidence maps of body part locations (e.g., elbows, knees, etc.) and the second branch predicts a set of two-dimensional vector fields of part affinities, which encode the degree of correlation between body part locations;
and parsing the confidence maps and the two-dimensional vector fields by greedy inference to generate the two-dimensional joint points of all persons in the image.
Further description of the two-dimensional pose estimation method:
First, the first 10 layers of a VGG-19 network extract a feature map F from the picture, and F is then processed by a continuous multi-stage network. Each stage t of the network contains two branches, whose outputs are S^t (the part confidence maps) and L^t (the part affinity fields). Finally, the part affinity fields (PAFs) preserve the support region of the body together with its position and direction information.
The key parts of the human body are detected by repeatedly iterated CNN stages, each of which has two branches, CNN_S and CNN_L. The network of the first stage differs in form from those of the subsequent stages. The two branches of each stage compute the position confidence maps (joint points) and the part affinity fields (limbs), respectively. The first stage of the network receives the feature map F as input and processes it to obtain S^1 and L^1. From the second stage on, the input of the stage-t network consists of three parts: S^{t-1}, L^{t-1} and F.
The inputs of each stage network are therefore:

S^t = ρ^t(F, S^{t-1}, L^{t-1}), t ≥ 2

L^t = φ^t(F, S^{t-1}, L^{t-1}), t ≥ 2

where ρ^t and φ^t are the CNNs of the two branches at stage t. Iteration proceeds until the network converges. Next, the PAFs determine whether two parts d_j1 and d_j2 are connected by computing the line integral of the affinity field along the segment joining d_j1 and d_j2:

E = ∫_0^1 L_c(p(u)) · (d_j2 - d_j1) / ||d_j2 - d_j1||_2 du, where p(u) = (1 - u)·d_j1 + u·d_j2

If the direction of L_c(p(u)) coincides with the direction of the connecting vector (d_j2 - d_j1) / ||d_j2 - d_j1||_2 and the value of the line integral E is large, the probability that these two parts form a trunk segment is large.
Finally, all pairings are traversed and their integrals computed to find all the trunks of the body; adjacent trunks share joint points, and combining all trunks through these joint points yields the body skeletons of all persons, from which the two-dimensional joint point coordinates of all persons are obtained.
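As a worked numeric example, the line integral E can be approximated by sampling points along the candidate limb; the field layout (an H × W × 2 array of direction vectors) and the sample count are assumptions of this sketch:

    import numpy as np

    def line_integral(paf: np.ndarray, d_j1, d_j2, num_samples: int = 10) -> float:
        """Approximate E for the affinity field L_c stored in `paf` (H x W x 2)."""
        d_j1, d_j2 = np.asarray(d_j1, float), np.asarray(d_j2, float)
        v = d_j2 - d_j1
        norm = np.linalg.norm(v)
        if norm == 0.0:
            return 0.0
        unit = v / norm                       # (d_j2 - d_j1) / ||d_j2 - d_j1||_2
        score = 0.0
        for u in np.linspace(0.0, 1.0, num_samples):
            x, y = np.rint(d_j1 + u * v).astype(int)   # p(u) on the segment
            score += float(paf[y, x] @ unit)  # alignment of L_c(p(u)) with the limb
        return score / num_samples            # large E: likely a real limb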
S3, if the matching number of the two-dimensional joint coordinates and preset two-dimensional climbing posture coordinates is larger than a first preset number threshold, determining that the posture of a person corresponding to the person region image is a climbing posture;
in the embodiment of the invention, a two-dimensional climbing posture library is constructed in advance, and the two-dimensional joint point coordinates are matched against the joint point coordinates of the postures in the library. If the number of matches between the two-dimensional joint point coordinates and the joint point coordinates of some climbing posture in the library exceeds the first preset number threshold, the person is in a climbing posture; otherwise, the person is not in a climbing posture.
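A sketch of this library match, assuming the detected joints and the library poses are expressed in a common coordinate frame (for example, normalized to the person frame); the tolerance and threshold values stand in for the preset values:

    import numpy as np

    def is_climbing(joints: np.ndarray, pose_library, tol: float = 15.0,
                    min_matches: int = 10) -> bool:
        """joints: J x 2 (or J x 3) array; pose_library: list of same-shape arrays."""
        for library_pose in pose_library:
            dists = np.linalg.norm(joints - library_pose, axis=1)
            # A joint matches when it lies within `tol` of the library joint; the
            # pose is a climbing posture when the matching number exceeds the
            # preset number threshold.
            if int(np.sum(dists < tol)) > min_matches:
                return True
        return False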
And S4, intercepting the image with the preset proportion of the personnel area image as the image of the area to be identified of the target operator.
The preset-proportion image may be set according to the type of safety belt. For a five-point safety belt, which is mainly worn on the shoulders/chest, waist and thighs, i.e. within 2/5 to 4/5 of the height of the human body, the 2/5 to 4/5 portion of the person region image is selected as the image of the area to be identified of the target operator.
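For a five-point safety belt this cropping can be sketched as follows (rows indexed from the top of the person region image):

    def crop_belt_region(person_img):
        # Keep the 2/5 to 4/5 vertical band of the person region, where a
        # five-point belt is worn, as the area to be identified.
        h = person_img.shape[0]
        return person_img[int(2 * h / 5):int(4 * h / 5), :]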
Further, the step 204 may further include the steps of:
inputting the two-dimensional joint coordinates into a preset three-dimensional attitude estimation network model, and determining three-dimensional joint coordinates of personnel corresponding to the personnel area image;
if the matching number of the three-dimensional joint coordinates and the preset three-dimensional climbing posture coordinates is larger than a second preset number threshold, determining that the posture of the person corresponding to the person region image is a climbing posture;
and intercepting a preset proportion image of the personnel area image as an image of an area to be identified of the target operator.
Referring to fig. 5, fig. 5 is a schematic diagram illustrating a two-dimensional attitude estimation result and a three-dimensional attitude estimation result in the embodiment of the present invention.
A two-dimensional human joint diagram is input into the three-dimensional posture estimation network model: a SemGConv (semantic graph convolution) layer first maps the input two-dimensional human joint points into a latent space to obtain human posture features, and an additional SemGConv layer finally maps the encoded features to the output space to obtain the three-dimensional joint point coordinates.
In the embodiment of the invention, a three-dimensional climbing posture library is constructed in advance, and the three-dimensional joint point coordinates are matched against the joint point coordinates of the postures in the library. If the number of matches between the three-dimensional joint point coordinates and the joint point coordinates of some climbing posture in the library exceeds the second preset number threshold, the person is in a climbing posture; otherwise, the person is not in a climbing posture.
In an example of the present invention, the two-dimensional joint coordinates may be input into the three-dimensional posture estimation network model to obtain the three-dimensional joint coordinates. The building block of the model is the residual block; as shown in the schematic diagram of fig. 6, the illustrated block contains two SemGConv layers with 128 channels and a non-local layer, and the remaining modules are similar to those in the block and are not described again here. Except for the last layer, every SemGConv layer is followed by batch normalization and ReLU activation.
Step 205, generating comprehensive features according to the HOG features and the HOC features of the histogram of oriented gradients extracted from the image of the region to be identified;
in another example of the present invention, the step 205 may include the steps of:
removing the background of the image of the area to be identified, filtering and generating an image to be processed;
dividing the image to be processed into a plurality of pixel small blocks;
calculating a gradient direction histogram of the plurality of pixel small blocks to obtain an HOG characteristic;
loading the image to be processed in a preset HSV color space, and determining the hue and saturation value of each pixel point in the image to be processed;
constructing a color histogram based on the hue and saturation values of the pixels to obtain an HOC characteristic;
and carrying out normalization processing on the HOG characteristic and the HOC characteristic and splicing to generate comprehensive characteristics.
In the embodiment of the invention, the HOG characteristic calculation steps are as follows:
an image randomly extracted from the real-time video sequence and background-subtracted is input;
gradient calculation: the input image is filtered with the kernels [-1, 0, 1] and [-1, 0, 1]^T to compute its gradients in the horizontal and vertical directions respectively, from which the gradient magnitude m(p) and the gradient direction Θ(p) of each pixel p are calculated as shown in the following formulas:

m(p) = sqrt(v_x^2 + v_y^2)

Θ(p) = arctan(v_y / v_x)

where v_x and v_y respectively represent the horizontal and vertical components of the gradient obtained after filtering, and Θ(p) is an unsigned real number with a value range of 0° to 180°;
the input image is divided into small blocks of the same size, and several small blocks are combined into a middle block;
acquiring direction channels: the 0° to 180° value range of Θ(p) is divided evenly into n channels;
histogram statistics: the gradient direction histogram of the pixels in each small block is counted, with the n selected direction channels as the abscissa and, as the ordinate, the accumulated sum of the gradient magnitudes of the pixels belonging to each direction channel, finally yielding a group of vectors;
normalization: each vector is normalized taking as the unit the middle block in which its corresponding pixels are located;
forming the HOG feature: all the vectors processed above are concatenated into one vector, which is the HOG feature.
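The cell/block pipeline above is what scikit-image's hog function implements, so the HOG feature can be sketched as below; the cell size, block size and n = 9 direction channels are assumed parameter choices rather than values fixed by the patent:

    from skimage.feature import hog

    def hog_feature(gray_roi):
        """gray_roi: 2-D grayscale array of the background-subtracted region."""
        return hog(
            gray_roi,
            orientations=9,           # n direction channels over 0-180 degrees
            pixels_per_cell=(8, 8),   # the "small blocks"
            cells_per_block=(2, 2),   # the "middle blocks" used for normalization
            block_norm="L2-Hys",
            feature_vector=True,      # concatenate into a single HOG descriptor
        )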
The HOC feature calculation steps are as follows:
transforming the input image into an HSV color space;
the hue and saturation values of each pixel point in the image are each divided evenly into m channels, and the channels are combined pairwise to obtain m × m = m^2 channel combinations;
the normalized HOC features are generated according to the method for generating the normalized HOG histogram, which is not described herein again.
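A sketch of this HOC computation using OpenCV's two-dimensional histogram over the hue and saturation channels; m = 8 is an assumed channel count:

    import cv2

    def hoc_feature(bgr_roi, m: int = 8):
        hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
        # Joint histogram over hue (0-180 in OpenCV) and saturation (0-256),
        # quantized into m channels each: m * m channel combinations.
        hist = cv2.calcHist([hsv], [0, 1], None, [m, m], [0, 180, 0, 256])
        return hist.flatten()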
And finally, serially splicing the normalized HOG characteristic and the HOC characteristic to generate a comprehensive characteristic.
The process of normalizing the HOG features and the HOC features may be as follows:
the HOG feature and the HOC feature are normalized using the linear transformation method and the range method, and the normalized comprehensive feature vector is output. The normalization reference formula of the linear transformation method is:

y_i = x_i / max(x)

The standard formula of the range method is:

y_i = (x_i - min(x)) / (max(x) - min(x))

where x_i denotes the magnitude of a feature value in the feature, y_i the magnitude of that feature value after normalization, and min(x) and max(x) the minimum and maximum of the feature values in the feature.
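A sketch of the two normalizations and the final serial splicing; the epsilon guard against constant features is an implementation assumption of this sketch:

    import numpy as np

    EPS = 1e-12

    def linear_normalize(x: np.ndarray) -> np.ndarray:
        return x / (np.max(x) + EPS)                     # y_i = x_i / max(x)

    def range_normalize(x: np.ndarray) -> np.ndarray:
        return (x - np.min(x)) / (np.max(x) - np.min(x) + EPS)

    def comprehensive_feature(hog_vec, hoc_vec) -> np.ndarray:
        # Serial splicing of the two normalized descriptors.
        return np.concatenate([range_normalize(hog_vec), range_normalize(hoc_vec)])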
In a specific implementation, the safety belt identification result indicates either that the target operator has a safety belt or that the target operator does not, and step 106 may be replaced by the following steps 206 to 208:
step 206, judging whether a safety belt exists in the image of the area to be recognized corresponding to the comprehensive characteristics according to the comprehensive characteristics through the preset SVM classification model;
in the embodiment of the invention, the comprehensive characteristics are input into a preset SVM classification model to judge whether the safety belt exists in the image of the area to be identified.
It is worth mentioning that the preset SVM classification model can be obtained by training in advance. The training process may be as follows: comprehensive features determined from images of areas to be identified both with and without safety belts are fed to an SVM classifier for training, yielding the maximum-margin hyperplane, i.e. the hyperplane farthest from the boundary observation points of the two classes (with safety belt and without safety belt); training is then complete and the preset SVM classification model is obtained.
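Training such a classifier could be sketched with scikit-learn as follows; the dataset variables are placeholders for the labeled comprehensive features:

    from sklearn.svm import SVC

    def train_belt_classifier(features, labels):
        """features: N x D comprehensive feature vectors; labels: 1 = belt, 0 = no belt."""
        clf = SVC(kernel="linear")   # maximum-margin hyperplane between the two classes
        clf.fit(features, labels)
        return clf

    # Prediction on one region: clf.predict([feature_vector])[0] == 1 means a belt is present.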
Step 207, if yes, outputting a prompt that the target operator has a safety belt;
and step 208, if not, outputting a prompt that the target operator does not have a safety belt.
In the specific implementation, after a preset SVM classification model is obtained, the comprehensive characteristics can be input into the preset SVM classification model for classification judgment, and if safety belts exist in the images of the area to be recognized corresponding to the comprehensive characteristics, a prompt that a target operator has the safety belts is output; and if the safety belt does not exist in the image of the area to be recognized corresponding to the comprehensive features, outputting a prompt that the target operator does not have the safety belt.
In the embodiment of the invention, the acquired working area image is preprocessed to generate a preprocessed image; when a target operator exists in the preprocessed image, the image of the area to be identified of the target operator is determined; the corresponding HOG feature and HOC feature are extracted from the image of the area to be identified to obtain a comprehensive feature; and finally the comprehensive feature is input into a preset SVM classification model, which outputs the safety belt identification result indicating whether the target operator is wearing a safety belt.
Referring to fig. 7, fig. 7 is a block diagram illustrating a flow chart of determining whether a worker is in a climbing posture according to an embodiment of the present invention, where the flow chart includes:
1. receiving an input image;
2. inputting the input images into a personnel detector and a ladder detector respectively, and judging whether an operator is on a ladder (namely whether the operator is a target operator);
3. inputting an input image into a two-dimensional attitude estimation module to carry out two-dimensional attitude estimation, determining whether the input image is a climbing attitude, generating two-dimensional joint coordinates and inputting the two-dimensional joint coordinates into a three-dimensional attitude estimation module;
4. the three-dimensional attitude estimation module carries out three-dimensional attitude estimation and determines whether the attitude is a climbing attitude;
5. receiving the judgment result of the climbing posture by the two-dimensional posture estimation module and the three-dimensional posture estimation module;
6. and if the judgment result is that the operator is in the climbing posture and the operator is on the ladder, outputting the target image.
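Reusing the helper sketches from the earlier steps, the fig. 7 flow plus the final belt classification might be tied together as follows; every model argument here is a placeholder for the detectors, posture estimators, posture libraries and SVM described in this embodiment:

    def identify_belt(frame, person_detector, ladder_detector, pose_2d, pose_3d,
                      pose_library_2d, pose_library_3d, svm):
        gray = preprocess(frame)
        person_box = person_detector(gray)
        ladder_box = ladder_detector(gray)
        if person_box is None or ladder_box is None:
            return None
        if not is_target_operator(person_box, ladder_box):
            return None                                    # not an operator on a ladder
        x1, y1, x2, y2 = person_box
        joints_2d = pose_2d(gray[y1:y2, x1:x2])
        joints_3d = pose_3d(joints_2d)
        if not (is_climbing(joints_2d, pose_library_2d)
                and is_climbing(joints_3d, pose_library_3d)):
            return None                                    # not in a climbing posture
        roi_gray = crop_belt_region(gray[y1:y2, x1:x2])
        roi_color = crop_belt_region(frame[y1:y2, x1:x2])  # color crop for the HOC
        feature = comprehensive_feature(hog_feature(roi_gray), hoc_feature(roi_color))
        return bool(svm.predict([feature])[0])             # True: safety belt present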
Referring to fig. 8, fig. 8 is a block diagram showing a construction of an operator seatbelt recognition apparatus according to an embodiment of the present invention.
The embodiment of the invention provides an operator safety belt recognition device, which comprises:
a to-be-operated area image obtaining module 801, configured to obtain an operated area image;
a preprocessed image generating module 802, configured to preprocess the job region image to generate a preprocessed image;
a target operator detection module 803, configured to detect whether a target operator exists in the preprocessed image;
a to-be-identified region image determining module 804, configured to determine, when a target operator exists in the preprocessed image, a to-be-identified region image of the target operator from the preprocessed image;
a comprehensive feature generation module 805, configured to generate a comprehensive feature according to the histogram of oriented gradients HOG feature and the histogram of colors HOC feature extracted from the image of the region to be identified;
and a safety belt recognition result output module 806, configured to input the comprehensive features into a preset SVM classification model, and output a safety belt recognition result.
Optionally, the preprocessed image generating module 802 includes:
a color component detection sub-module for detecting a plurality of color components of the work area image; the plurality of color components includes a red color component, a green color component, and a blue color component;
the gray value calculation submodule is used for calculating the gray value of the working area image by adopting the red component, the green component and the blue component;
the gray level image construction submodule is used for constructing a gray level image according to the gray level value of the operation area image;
and the pre-processing image generation submodule is used for carrying out denoising processing on the gray level image to generate a pre-processing image.
Optionally, the target operator detection module 803 includes:
the image frame generation sub-module is used for inputting the preprocessed image into a preset personnel detection model and a preset ladder detection model, and generating a first image frame and a second image frame on the preprocessed image; the first image frame is used for identifying the position of a person, and the second image frame is used for identifying the position of a ladder;
a first ratio calculation submodule, configured to calculate, when an overlapping portion exists between the first image frame and the second image frame, a first ratio between the area of the overlapping portion and the area of the second image frame;
a first distance calculation submodule, configured to calculate a first distance between the bottom edge of the overlapping portion and the bottom edge of the second image frame;
a second distance calculation submodule, configured to calculate a second distance between the top edge and the bottom edge of the second image frame;
a second ratio determination submodule, configured to determine a second ratio between the first distance and the second distance;
and a target operator detection submodule, configured to determine that a target operator exists in the preprocessed image when the first ratio is greater than or equal to a first preset threshold and the second ratio is greater than or equal to a second preset threshold.
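The overlap test can be sketched as follows; boxes are assumed to be (x1, y1, x2, y2) pixel rectangles with y increasing downward, and the two threshold values are illustrative stand-ins for the first and second preset thresholds.

```python
def person_on_ladder(person_box, ladder_box,
                     ratio_thresh=0.3, height_thresh=0.5):
    """Decide whether a detected person counts as a target operator.

    ratio_thresh and height_thresh are assumptions; the patent only
    calls them the first and second preset thresholds.
    """
    # Overlapping portion of the first (person) and second (ladder) frames.
    ox1 = max(person_box[0], ladder_box[0])
    oy1 = max(person_box[1], ladder_box[1])
    ox2 = min(person_box[2], ladder_box[2])
    oy2 = min(person_box[3], ladder_box[3])
    if ox1 >= ox2 or oy1 >= oy2:
        return False  # no overlap at all

    overlap_area = (ox2 - ox1) * (oy2 - oy1)
    ladder_area = (ladder_box[2] - ladder_box[0]) * (ladder_box[3] - ladder_box[1])
    first_ratio = overlap_area / ladder_area       # overlap vs. ladder area

    first_distance = ladder_box[3] - oy2           # overlap bottom to ladder bottom
    second_distance = ladder_box[3] - ladder_box[1]  # ladder height
    second_ratio = first_distance / second_distance  # how far up the ladder

    return first_ratio >= ratio_thresh and second_ratio >= height_thresh
```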
Optionally, the to-be-identified region image determining module 804 includes:
a person region image cropping submodule, configured to crop, when a target operator exists in the preprocessed image, the image within the first image frame from the preprocessed image as a person region image;
a two-dimensional joint coordinate determination submodule, configured to input the person region image into a preset two-dimensional posture estimation network model and determine the two-dimensional joint coordinates of the person corresponding to the person region image;
a first climbing posture determination submodule, configured to determine that the posture of the person corresponding to the person region image is a climbing posture if the number of matches between the two-dimensional joint coordinates and preset two-dimensional climbing posture coordinates is greater than a first preset number threshold;
and a first to-be-identified region image cropping submodule, configured to crop a preset proportion of the person region image as the to-be-identified region image of the target operator.
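A plausible reading of the coordinate-matching step counts joints that fall within a pixel tolerance of the preset climbing template; the tolerance and match threshold below are assumptions, since the patent does not quantify them.

```python
import numpy as np

def is_climbing_pose_2d(joints_2d, template_2d, tol=20.0, min_matches=8):
    """Count joints whose detected 2-D coordinates fall within tol
    pixels of the preset climbing-posture template; the posture is a
    climbing posture when the match count exceeds the threshold.
    tol and min_matches are illustrative, not values from the patent."""
    dists = np.linalg.norm(np.asarray(joints_2d, dtype=float)
                           - np.asarray(template_2d, dtype=float), axis=1)
    return int((dists < tol).sum()) > min_matches
```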
Optionally, the to-be-identified region image determining module 804 further includes:
a three-dimensional joint coordinate determination submodule, configured to input the two-dimensional joint coordinates into a preset three-dimensional posture estimation network model and determine the three-dimensional joint coordinates of the person corresponding to the person region image;
a second climbing posture determination submodule, configured to determine that the posture of the person corresponding to the person region image is a climbing posture if the number of matches between the three-dimensional joint coordinates and preset three-dimensional climbing posture coordinates is greater than a second preset number threshold;
and a second to-be-identified region image cropping submodule, configured to crop a preset proportion of the person region image as the to-be-identified region image of the target operator.
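The three-dimensional check mirrors the two-dimensional one after lifting the joints with the preset estimation network; in the sketch below, lift_model is a hypothetical callable standing in for that network, and the numeric values are again illustrative.

```python
import numpy as np

def is_climbing_pose_3d(joints_2d, lift_model, template_3d,
                        tol=0.15, min_matches=8):
    """Second-stage verification: lift 2-D joints to 3-D and match a
    preset 3-D climbing template.  lift_model is a hypothetical
    callable mapping an (N, 2) array to an (N, 3) array; tol and
    min_matches are stand-ins for the preset values."""
    joints_3d = lift_model(np.asarray(joints_2d, dtype=float))
    dists = np.linalg.norm(joints_3d - np.asarray(template_3d, dtype=float),
                           axis=1)
    return int((dists < tol).sum()) > min_matches
```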
Optionally, the comprehensive feature generation module 805 includes:
a to-be-processed image generation submodule, configured to remove the background of the to-be-identified region image and filter it to generate a to-be-processed image;
a pixel block division submodule, configured to divide the to-be-processed image into a plurality of pixel blocks;
an HOG feature determination submodule, configured to calculate histograms of oriented gradients over the plurality of pixel blocks to obtain the HOG feature;
a pixel value determination submodule, configured to convert the to-be-processed image into a preset HSV color space and determine the hue and saturation values of each pixel in the to-be-processed image;
an HOC feature determination submodule, configured to construct a color histogram based on the hue and saturation values of the pixels to obtain the HOC feature;
and a comprehensive feature generation submodule, configured to normalize the HOG feature and the HOC feature and concatenate them to generate the comprehensive feature.
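A sketch of this HOG/HOC fusion using scikit-image and OpenCV follows; the 64x128 resize, the cell and block sizes, and the 16x16 hue-saturation binning are conventional defaults rather than parameters stated in the patent.

```python
import cv2
import numpy as np
from skimage.feature import hog

def build_comprehensive_feature(roi_bgr: np.ndarray) -> np.ndarray:
    """HOG + HOC fusion for a background-removed, filtered ROI."""
    roi = cv2.resize(roi_bgr, (64, 128))

    # HOG over small pixel blocks of the grayscale ROI.
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    hog_feat = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), feature_vector=True)

    # HOC: 2-D histogram over hue and saturation in HSV space.
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    hoc_feat = cv2.calcHist([hsv], [0, 1], None, [16, 16],
                            [0, 180, 0, 256]).flatten()

    # Normalize each descriptor, then concatenate ("splice") them.
    hog_feat = hog_feat / (np.linalg.norm(hog_feat) + 1e-6)
    hoc_feat = hoc_feat / (np.linalg.norm(hoc_feat) + 1e-6)
    return np.concatenate([hog_feat, hoc_feat])
```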
Optionally, the safety belt recognition result includes that the target operator has a safety belt and that the target operator does not have a safety belt, and the safety belt recognition result output module 806 includes:
a safety belt judgment submodule, configured to judge, through the preset SVM classification model and according to the comprehensive feature, whether a safety belt exists in the to-be-identified region image corresponding to the comprehensive feature;
a first prompt submodule, configured to output a prompt that the target operator has a safety belt if a safety belt exists;
and a second prompt submodule, configured to output a prompt that the target operator does not have a safety belt if no safety belt exists.
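Module 806 can be sketched with scikit-learn; the RBF kernel, the C value, and the label encoding are illustrative choices, as the patent only requires a preset SVM classification model.

```python
import numpy as np
from sklearn.svm import SVC

def train_belt_classifier(X: np.ndarray, y: np.ndarray) -> SVC:
    """Fit the preset SVM on rows of comprehensive features.
    y uses 1 = safety belt present, 0 = absent; the RBF kernel and
    C=1.0 are illustrative defaults, not values from the patent."""
    model = SVC(kernel="rbf", C=1.0)
    model.fit(X, y)
    return model

def belt_prompt(model: SVC, feature: np.ndarray) -> str:
    """Map the classifier decision to the two prompts of module 806."""
    has_belt = model.predict(feature.reshape(1, -1))[0] == 1
    return ("the target operator has a safety belt" if has_belt
            else "the target operator does not have a safety belt")
```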
The embodiment of the present invention further provides an electronic device, which includes a memory and a processor, where the memory stores a computer program that, when executed by the processor, causes the processor to perform the steps of the operator safety belt identification method according to any one of the above embodiments.
The embodiment of the invention also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the operator safety belt identification method according to any one of the above embodiments is implemented.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An operator safety belt identification method, comprising:
acquiring an image of a working area;
preprocessing the image of the operation area to generate a preprocessed image;
detecting whether a target operator exists in the preprocessed image;
when a target operator exists in the preprocessed image, determining an image of a region to be identified of the target operator from the preprocessed image;
generating a comprehensive feature according to a histogram of oriented gradients (HOG) feature and a color histogram (HOC) feature extracted from the image of the area to be identified;
and inputting the comprehensive feature into a preset SVM classification model, and outputting a safety belt recognition result.
2. The method of claim 1, wherein the step of pre-processing the work area image to generate a pre-processed image comprises:
detecting a plurality of color components of the work area image; the plurality of color components includes a red color component, a green color component, and a blue color component;
calculating a gray value of the work area image by using the red component, the green component and the blue component;
constructing a grayscale image according to the gray values of the work area image;
and denoising the grayscale image to generate a preprocessed image.
3. The method of claim 1, wherein the step of detecting whether a target operator exists in the preprocessed image comprises:
inputting the preprocessed image into a preset person detection model and a preset ladder detection model, and generating a first image frame and a second image frame on the preprocessed image, wherein the first image frame identifies the position of a person and the second image frame identifies the position of a ladder;
when there is an overlapping portion between the first image frame and the second image frame, calculating a first ratio between an area of the overlapping portion and an area of the second image frame;
calculating a first distance between a bottom edge of the overlapping portion and a bottom edge of the second image frame;
calculating a second distance between a top edge of the second image frame and a bottom edge of the second image frame;
determining a second ratio between the first distance and the second distance;
and when the first ratio is greater than or equal to a first preset threshold value and the second ratio is greater than or equal to a second preset threshold value, determining that a target operator exists in the preprocessed image.
4. The method according to claim 3, wherein the step of determining the image of the area to be identified of the target operator from the preprocessed image when the target operator exists in the preprocessed image comprises:
when a target operator exists in the preprocessed image, cropping the image within the first image frame from the preprocessed image as a person region image;
inputting the person region image into a preset two-dimensional posture estimation network model, and determining two-dimensional joint coordinates of the person corresponding to the person region image;
if the number of matches between the two-dimensional joint coordinates and preset two-dimensional climbing posture coordinates is greater than a first preset number threshold, determining that the posture of the person corresponding to the person region image is a climbing posture;
and cropping a preset proportion of the person region image as the image of the area to be identified of the target operator.
5. The method of claim 4, further comprising:
inputting the two-dimensional joint coordinates into a preset three-dimensional posture estimation network model, and determining three-dimensional joint coordinates of the person corresponding to the person region image;
if the number of matches between the three-dimensional joint coordinates and preset three-dimensional climbing posture coordinates is greater than a second preset number threshold, determining that the posture of the person corresponding to the person region image is a climbing posture;
and cropping a preset proportion of the person region image as the image of the area to be identified of the target operator.
6. The method according to claim 1, wherein the step of generating a comprehensive feature according to the histogram of oriented gradients (HOG) feature and the color histogram (HOC) feature extracted from the image of the area to be identified comprises:
removing the background of the image of the area to be identified and filtering it to generate an image to be processed;
dividing the image to be processed into a plurality of pixel blocks;
calculating histograms of oriented gradients over the plurality of pixel blocks to obtain an HOG feature;
converting the image to be processed into a preset HSV color space, and determining the hue and saturation values of each pixel in the image to be processed;
constructing a color histogram based on the hue and saturation values of the pixels to obtain an HOC feature;
and normalizing the HOG feature and the HOC feature and concatenating them to generate the comprehensive feature.
7. The method of claim 1, wherein the safety belt recognition result includes that the target operator has a safety belt and that the target operator does not have a safety belt, and the step of inputting the comprehensive feature into a preset SVM classification model and outputting the safety belt recognition result comprises:
judging, through the preset SVM classification model and according to the comprehensive feature, whether a safety belt exists in the image of the area to be identified corresponding to the comprehensive feature;
if yes, outputting a prompt that the target operator has a safety belt;
and if not, outputting a prompt that the target operator does not have a safety belt.
8. An operator safety belt identification device, comprising:
a work area image acquisition module, configured to acquire a work area image;
a preprocessed image generation module, configured to preprocess the work area image to generate a preprocessed image;
a target operator detection module, configured to detect whether a target operator exists in the preprocessed image;
a to-be-identified region image determining module, configured to determine, when a target operator exists in the preprocessed image, a to-be-identified region image of the target operator from the preprocessed image;
a comprehensive feature generation module, configured to generate a comprehensive feature according to a histogram of oriented gradients (HOG) feature and a color histogram (HOC) feature extracted from the to-be-identified region image;
and a safety belt recognition result output module, configured to input the comprehensive feature into a preset SVM classification model and output a safety belt recognition result.
9. An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of the operator safety belt identification method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the operator safety belt identification method according to any one of claims 1 to 7.
CN202011002640.8A 2020-09-22 2020-09-22 Method, device, equipment and storage medium for identifying safety belt of operator Active CN112101260B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011002640.8A CN112101260B (en) 2020-09-22 2020-09-22 Method, device, equipment and storage medium for identifying safety belt of operator

Publications (2)

Publication Number Publication Date
CN112101260A true CN112101260A (en) 2020-12-18
CN112101260B CN112101260B (en) 2023-09-26

Family

ID=73755832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011002640.8A Active CN112101260B (en) 2020-09-22 2020-09-22 Method, device, equipment and storage medium for identifying safety belt of operator

Country Status (1)

Country Link
CN (1) CN112101260B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105957107A (en) * 2016-04-27 2016-09-21 北京博瑞空间科技发展有限公司 Pedestrian detecting and tracking method and device
CN108416289A (en) * 2018-03-06 2018-08-17 陕西中联电科电子有限公司 A kind of working at height personnel safety band wears detection device and detection method for early warning
CN109635758A (en) * 2018-12-18 2019-04-16 武汉市蓝领英才科技有限公司 Wisdom building site detection method is dressed based on the high altitude operation personnel safety band of video
CN110046557A (en) * 2019-03-27 2019-07-23 北京好运达智创科技有限公司 Safety cap, Safe belt detection method based on deep neural network differentiation
CN111144263A (en) * 2019-12-20 2020-05-12 山东大学 Construction worker high-fall accident early warning method and device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112613452A (en) * 2020-12-29 2021-04-06 广东电网有限责任公司清远供电局 Person line-crossing identification method, device, equipment and storage medium
CN112613452B (en) * 2020-12-29 2023-10-27 广东电网有限责任公司清远供电局 Personnel line-crossing identification method, device, equipment and storage medium
CN112991211A (en) * 2021-03-12 2021-06-18 中国大恒(集团)有限公司北京图像视觉技术分公司 Dark corner correction method for industrial camera
CN113569729A (en) * 2021-07-27 2021-10-29 广联达科技股份有限公司 High-altitude operation scene detection method and device and electronic equipment

Also Published As

Publication number Publication date
CN112101260B (en) 2023-09-26

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 501-503, annex building, Huaye building, No.1-3 Chuimao new street, Xihua Road, Yuexiu District, Guangzhou City, Guangdong Province 510000

Applicant after: China Southern Power Grid Power Technology Co.,Ltd.

Address before: Room 501-503, annex building, Huaye building, No.1-3 Chuimao new street, Xihua Road, Yuexiu District, Guangzhou City, Guangdong Province 510000

Applicant before: GUANGDONG DIANKEYUAN ENERGY TECHNOLOGY Co.,Ltd.

GR01 Patent grant