CN112132040B - Vision-based safety belt real-time monitoring method, terminal equipment and storage medium - Google Patents

Vision-based safety belt real-time monitoring method, terminal equipment and storage medium

Info

Publication number
CN112132040B
CN112132040B (application CN202011012590.1A)
Authority
CN
China
Prior art keywords
safety belt
face
detection area
real
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011012590.1A
Other languages
Chinese (zh)
Other versions
CN112132040A (en)
Inventor
江永付
陈从华
谢超
陈海沯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mingjian Xiamen Software Development Co ltd
Original Assignee
Mingjian Xiamen Software Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mingjian Xiamen Software Development Co ltd
Priority to CN202011012590.1A
Publication of CN112132040A
Application granted
Publication of CN112132040B
Legal status: Active (granted)

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a vision-based safety belt real-time monitoring method, terminal equipment and a storage medium. The method comprises the following steps: acquiring a real-time monitoring infrared image of the driver; locating the face and estimating the face pose to obtain face position and face pose information; intercepting a safety belt detection area according to the face position and face pose information; and inputting the safety belt detection area into a trained classifier to analyze the safety-belt wearing condition. The method can reliably intercept an effective safety belt detection area in the driver's complex driving scenes; and the convolutional neural network model is optimized so that, while accuracy is preserved, safety belt detection efficiency is further improved, meeting the real-time requirement for monitoring whether the driver wears the safety belt.

Description

Vision-based safety belt real-time monitoring method, terminal equipment and storage medium
Technical Field
The invention relates to the field of safety belt monitoring, in particular to a vision-based safety belt real-time monitoring method, terminal equipment and a storage medium.
Background
With the continued popularization of automobiles, motor-vehicle traffic rises year by year. Whether the driver wears the safety belt has a decisive influence on casualties in an accident, yet because some drivers have a lax safety awareness, safety hazards caused by improper use of the safety belt still occur.
Two kinds of methods have been used to analyze whether the driver is wearing the safety belt and issue a timely alarm to remind the driver. One method extracts the safety belt edges by traditional image processing and judges whether the belt is fastened by combining the inclination angle and length of the belt lines and the parallel relation between the two edge lines; however, edge extraction is easily affected by a complex background, illumination, the driver's clothing and the like, which in turn affects the analysis of whether the belt is fastened. Another method uses a convolutional neural network model, either locating or segmenting the safety belt as a target (the target may be the belt itself or a specific marking coated on the belt), or classifying the driver's body area below the face. Locating or segmenting the safety belt target is computationally time-consuming, and the coated-marking approach has poor general applicability; when judging from the driver's body area below the face, the detection area is too large to highlight the safety belt features, and the belt information above the shoulders is not fully utilized.
Disclosure of Invention
The invention aims to provide a vision-based safety belt real-time monitoring method, terminal equipment and a storage medium to solve the above problems. To this end, the invention adopts the following specific technical scheme:
according to an aspect of the present invention, there is provided a vision-based real-time safety belt monitoring method, including the steps of:
acquiring a real-time monitoring infrared image of a driver;
positioning a human face and estimating the human face posture to obtain the human face position and the human face posture information;
intercepting a safety belt detection area according to the face position and the face posture information;
the belt detection area is input into a trained classifier to analyze the wearing condition of the belt.
Further, the real-time monitoring infrared image is obtained through a fatigue monitoring infrared camera installed in the cab.
Further, the face position and face pose information includes the face height H, the face width W, the face upper-left corner point P1 and lower-right corner point P2, and the face pose estimation angles α, β and γ, where α, β and γ are the pitch angle, yaw angle and roll angle, respectively.
Further, the specific process of intercepting the safety belt detection area according to the face position and face pose information is as follows:
calculating the coordinates of the upper-left corner point P_tl of the safety belt detection area according to formula (1),
wherein (P_tl_x, P_tl_y) are the coordinates of point P_tl, and (P1_x, P1_y) and (P2_x, P2_y) are the coordinates of the face upper-left corner point P1 and lower-right corner point P2, respectively;
taking the face height H as the reference for the size of the safety belt detection area, the detection area size being L = coef × H;
correcting the P_tl coordinates back to the standard state in which the face directly faces the camera according to formulas (2) and (3), using the face pose estimation angles α, β and γ,
wherein (P_c_x, P_c_y) is the center coordinate of the face rectangle and (P_tl_x_roll, P_tl_y_roll) is the coordinate of point P_tl after correcting γ,
and wherein (P_tl_x_r, P_tl_y_r) is the coordinate of point P_tl after correcting α, β and γ.
Further, coef takes a value of 1.2.
Further, the classifier is implemented with an optimized convolutional neural network model, wherein the convolutional neural network model adopts a narrow and deep fully convolutional network structure comprising 9 convolutional layers and 1 output layer; 7 of the 9 convolutional layers are separable convolutional layers and the rest are ordinary convolutional layers, and the convolutional layers use 3×3 and 1×1 convolution kernels for feature extraction and weighting operations.
Further, optimizing the convolutional neural network model comprises the following steps: pruning the neuron structures whose convolution-kernel weight parameter values are smaller than a threshold in the trained network model, performing iterative training on the pruned model to fine-tune the model parameters, and finally merging the convolution layers and the BatchNorm layers in the model according to formula (4); the output result is a value ρ classifying the safety-belt wearing condition: when ρ > threshold, the safety belt is judged not fastened, otherwise it is judged fastened,
y_bn = λ·(y_conv − E[x])/√(Var[x] + ε) + μ = ŵ∗x + b̂, with ŵ = λ·w_conv/√(Var[x] + ε) and b̂ = λ·(b_conv − E[x])/√(Var[x] + ε) + μ (4)
wherein y_conv is the convolution-layer operation result, and w_conv and b_conv are the weight and bias of the convolution layer, respectively; y_bn is the BatchNorm operation result, λ and μ are the scaling factor and offset of the BatchNorm layer, respectively, ε is a small value preventing the denominator from being 0, and E[x] and Var[x] are the sliding mean and sliding variance of the BatchNorm layer, respectively; ŵ and b̂ are the merged weight and bias.
Further, the threshold is 0.65.
According to another aspect of the present invention, there is also provided a terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method described above when executing the computer program.
According to a further aspect of the present invention, there is also provided a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the method described above.
By adopting the above technical scheme, the invention has the following beneficial effects: the method can reliably intercept an effective safety belt detection area in the driver's complex driving scenes; and the convolutional neural network model is optimized so that, while accuracy is preserved, safety belt detection efficiency is further improved, meeting the real-time requirement for monitoring whether the driver wears the safety belt.
Drawings
For further illustration of the various embodiments, the invention is accompanied by drawings. The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate the embodiments and, together with the description, serve to explain their principles. With reference to them, one of ordinary skill in the art will understand other possible embodiments and advantages of the present invention. The components in the figures are not drawn to scale, and like reference numerals generally designate like components.
FIG. 1 is a flow chart of a vision-based seat belt real-time monitoring method of the present invention;
FIG. 2 is a schematic illustration of an original seat belt detection zone;
fig. 3 is a schematic view of the corrected seat belt detection region.
Detailed Description
The invention will now be further described with reference to the drawings and detailed description.
With reference to Fig. 1, a vision-based safety belt real-time monitoring method is described; it is implemented on top of a driver fatigue detection device in order to reduce additional equipment cost. The method comprises the following steps:
s1, acquiring a real-time monitoring infrared image of a driver, and particularly acquiring the infrared image through a fatigue monitoring infrared camera arranged in a cab.
S2, locating the face and estimating the face pose to obtain the face position and face pose information, including the face height H, the face width W, the face upper-left corner point P1 and lower-right corner point P2, and the face pose estimation angles α, β and γ, where α, β and γ are the pitch angle, yaw angle and roll angle, respectively.
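The patent does not name a specific face detection or pose estimation algorithm, so the following is only a possible realization sketched for illustration, not the patented method: the angles α, β and γ are recovered from 2D facial landmarks with OpenCV's solvePnP, where the landmark source, the 3D model points and the camera intrinsics are assumptions.

# Hypothetical sketch: recovering pitch/yaw/roll (alpha, beta, gamma) from
# 2D facial landmarks. The landmark detector, 3D model points and camera
# intrinsics are illustrative assumptions, not taken from the patent.
import cv2
import numpy as np

# Generic 3D reference points of a face model (nose tip, chin, eye corners,
# mouth corners) in an arbitrary model coordinate system.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye outer corner
    (225.0, 170.0, -135.0),    # right eye outer corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0, -150.0, -125.0),   # right mouth corner
], dtype=np.float64)

def estimate_pose(landmarks_2d, frame_w, frame_h):
    """landmarks_2d: 6x2 image points matching MODEL_POINTS, from any face
    landmark detector. Returns (alpha, beta, gamma) in degrees."""
    focal = frame_w  # crude intrinsics assumption: focal length ~ image width
    camera_matrix = np.array([[focal, 0, frame_w / 2],
                              [0, focal, frame_h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume negligible lens distortion
    ok, rvec, _ = cv2.solvePnP(MODEL_POINTS,
                               np.asarray(landmarks_2d, dtype=np.float64),
                               camera_matrix, dist_coeffs)
    if not ok:
        return None
    rot_mat, _ = cv2.Rodrigues(rvec)       # rotation vector -> rotation matrix
    angles, *_ = cv2.RQDecomp3x3(rot_mat)  # Euler angles in degrees
    # Mapping the decomposition axes to pitch/yaw/roll is a convention
    # assumption and should be verified against the camera setup.
    alpha, beta, gamma = angles
    return alpha, beta, gamma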
S3, intercepting the safety belt detection area according to the face position and face pose information. Specifically, based on the face position, the coordinates of the upper-left corner point P_tl of the safety belt detection area are calculated according to formula (1); the face height H is then taken as the reference for the detection-area size, with L = coef × H (coef takes a value of 1.2 in this embodiment), as shown in Fig. 2,
wherein (P_tl_x, P_tl_y) are the coordinates of point P_tl, and (P1_x, P1_y) and (P2_x, P2_y) are the coordinates of the face upper-left corner point P1 and lower-right corner point P2, respectively.
To reduce the influence of the camera mounting angle, the driver's way of wearing the safety belt, the driver's head movements and the like on interception of the detection area, the P_tl coordinates are corrected back to the standard state in which the face directly faces the camera according to formulas (2) and (3), using the face pose estimation angles α, β and γ, as shown in Fig. 3, so that the safety belt is clearly represented in the detection area under different scenes, improving the general applicability of the algorithm,
wherein (P_c_x, P_c_y) is the center coordinate of the face rectangle and (P_tl_x_roll, P_tl_y_roll) is the coordinate of point P_tl after correcting γ,
and wherein H and W are the face height and width, and (P_tl_x_r, P_tl_y_r) is the coordinate of point P_tl after correcting α, β and γ.
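The equation images of formulas (1)-(3) are not reproduced in this text, so the following Python sketch is only a hedged reconstruction of step S3: placing the detection square directly below the face box is an assumed reading of formula (1); formula (2) is modelled as a standard 2D rotation of P_tl about the face-box center P_c, which matches the surrounding definitions; and the pitch/yaw correction of formula (3) is omitted because its exact form is not recoverable here.

# Hedged sketch of step S3: crop the safety belt detection area from the
# face box and roll-correct its anchor point. Formula (1) placement and the
# rotation sign convention are assumptions; formula (3) is omitted.
import math

COEF = 1.2  # detection-area size factor from this embodiment

def seat_belt_region(p1, p2, gamma_deg):
    """p1 = (x, y) face upper-left corner, p2 = (x, y) face lower-right
    corner, gamma_deg = roll angle. Returns (x, y, L): a square region of
    side L anchored at the corrected upper-left corner P_tl."""
    h = p2[1] - p1[1]  # face height H
    pc = ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)  # face-box center P_c
    L = COEF * h       # detection area size, L = coef * H

    # Assumed formula (1): region centered under the face, starting at the chin.
    ptl_x = pc[0] - L / 2.0
    ptl_y = p2[1]

    # Formula (2) as a standard rotation of P_tl about P_c by the roll angle,
    # correcting back to the face-frontal standard state.
    g = math.radians(gamma_deg)
    dx, dy = ptl_x - pc[0], ptl_y - pc[1]
    ptl_x_roll = pc[0] + dx * math.cos(g) - dy * math.sin(g)
    ptl_y_roll = pc[1] + dx * math.sin(g) + dy * math.cos(g)
    return int(ptl_x_roll), int(ptl_y_roll), int(L)

# Usage: x, y, L = seat_belt_region(p1, p2, gamma); crop = frame[y:y+L, x:x+L]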
S4, inputting the detection area of the safety belt into a trained classifier to analyze the wearing condition of the safety belt.
A convolutional neural network learns well the diversity of safety belt images caused by the driver's varied clothing, the background environment and illumination; however, network inference involves a large amount of computation, so its speed falls somewhat short of traditional image processing methods. To meet the real-time requirement of safety belt monitoring, the network layers need to be simplified and optimized, improving computational efficiency as much as possible while satisfying the classification accuracy. In this embodiment, the convolutional neural network model adopts a narrow and deep fully convolutional network structure, as shown in Table 1, comprising 9 convolutional layers and 1 sigmoid output layer; 7 of the 9 convolutional layers are separable convolutional layers and the rest are ordinary convolutional layers. The convolutional layers use 3×3 and 1×1 convolution kernels for feature extraction and weighting, achieving the receptive field of a large convolution kernel by stacking small kernels with fewer parameters. To further improve inference performance, the neuron structures whose convolution-kernel weight parameter values are smaller than a threshold are first pruned from the trained network model; the pruned model is then iteratively trained to fine-tune the model parameters; finally, the convolution layers and the BatchNorm layers in the model are merged according to formula (4). This reduces the computation and the size of the model and further accelerates model inference. The input of the network model is 128×128×1, and the output is a value ρ used to classify the safety-belt wearing condition: when ρ > threshold (0.65 in this embodiment), the safety belt is judged not fastened; otherwise it is judged fastened.
y_bn = λ·(y_conv − E[x])/√(Var[x] + ε) + μ = ŵ∗x + b̂, with ŵ = λ·w_conv/√(Var[x] + ε) and b̂ = λ·(b_conv − E[x])/√(Var[x] + ε) + μ (4)
wherein y_conv is the convolution-layer operation result, and w_conv and b_conv are the weight and bias of the convolution layer, respectively; y_bn is the BatchNorm operation result, λ and μ are the scaling factor and offset of the BatchNorm layer, respectively, ε is a small value preventing the denominator from being 0, and E[x] and Var[x] are the sliding mean and sliding variance learned by the BatchNorm layer, respectively; ŵ and b̂ are the merged weight and bias.
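As an illustration of the formula (4) merge, the following is a minimal PyTorch sketch that folds a BatchNorm layer's parameters (λ, μ, E[x], Var[x], ε) into the weight and bias of the preceding convolution; this is the standard conv-BatchNorm fusion the formula describes, not code reproduced from the patent.

# Fold BatchNorm scaling factor (lambda), offset (mu), running mean E[x] and
# running variance Var[x] into the preceding convolution, per formula (4).
import torch
import torch.nn as nn

@torch.no_grad()
def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      conv.stride, conv.padding, groups=conv.groups, bias=True)
    std = torch.sqrt(bn.running_var + bn.eps)   # sqrt(Var[x] + eps)
    scale = bn.weight / std                     # lambda / sqrt(Var[x] + eps)
    fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))  # merged weight
    b_conv = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.copy_((b_conv - bn.running_mean) * scale + bn.bias)  # merged bias
    return fused

After fusion the BatchNorm layer is dropped and the fused convolution produces the identical output in a single operation, which is where the inference speed-up comes from.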
Table 1 network architecture
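Since the body of Table 1 is not reproduced here, the following sketch is only one plausible instantiation of the stated constraints, namely a narrow, deep, fully convolutional network with 9 convolutional layers (7 of them depthwise-separable), 3×3 and 1×1 kernels, a 128×128×1 input and a sigmoid output; the channel widths and strides are assumptions, not the configuration of Table 1.

# Hypothetical instantiation of the narrow-and-deep fully convolutional
# classifier: 2 ordinary conv layers + 7 depthwise-separable conv layers
# (each a 3x3 depthwise conv followed by a 1x1 pointwise conv). Channel
# widths and strides are assumed; Table 1 gives the actual configuration.
import torch
import torch.nn as nn

def sep_conv(cin, cout, stride=1):
    # Depthwise 3x3 + pointwise 1x1, each followed by BatchNorm and ReLU.
    return nn.Sequential(
        nn.Conv2d(cin, cin, 3, stride, 1, groups=cin, bias=False),
        nn.BatchNorm2d(cin), nn.ReLU(inplace=True),
        nn.Conv2d(cin, cout, 1, bias=False),
        nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )

class SeatBeltNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, 2, 1, bias=False),  # ordinary conv 1
            nn.BatchNorm2d(8), nn.ReLU(inplace=True),
            sep_conv(8, 16, 2),                    # separable convs 1-7
            sep_conv(16, 32, 2),
            sep_conv(32, 32),
            sep_conv(32, 64, 2),
            sep_conv(64, 64),
            sep_conv(64, 128, 2),
            sep_conv(128, 128),
            nn.Conv2d(128, 1, 1),                  # ordinary conv 2 (1x1 head)
        )

    def forward(self, x):                          # x: (N, 1, 128, 128)
        y = self.features(x).mean(dim=(2, 3))      # global average pooling
        return torch.sigmoid(y)                    # rho in (0, 1)

# rho > 0.65 -> the safety belt is judged not fastened (per this embodiment).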
In the vision-based safety belt real-time monitoring method described above, an effective safety belt detection area is first intercepted from the face position information and face pose, so that the driver's safety belt region is captured reliably even in complex driving scenes, shrinking the detection area while fully reflecting the safety belt information; the classification convolutional neural network model is then simplified and optimized, reducing the computation of network inference while preserving safety belt detection accuracy, thereby achieving real-time safety belt monitoring.
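Putting steps S1-S4 together, a hedged end-to-end monitoring loop might look as follows; the camera index, the face/landmark detector (detect_face_and_landmarks is a hypothetical helper) and the alarm hook are placeholders, and estimate_pose, seat_belt_region and SeatBeltNet come from the sketches above.

# End-to-end loop wiring together steps S1-S4 with the earlier sketches.
# Capture source, detector and alarm hook are placeholder assumptions.
import cv2
import torch

model = SeatBeltNet()      # classifier sketched above (trained weights assumed)
model.eval()
cap = cv2.VideoCapture(0)  # S1: in-cab infrared camera (device index assumed)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # S2: face box and landmarks from any detector (hypothetical helper).
    p1, p2, landmarks = detect_face_and_landmarks(gray)
    pose = estimate_pose(landmarks, gray.shape[1], gray.shape[0])
    if pose is None:
        continue
    alpha, beta, gamma = pose

    # S3: intercept the safety belt detection area (bounds checks omitted).
    x, y, L = seat_belt_region(p1, p2, gamma)
    crop = cv2.resize(gray[y:y + L, x:x + L], (128, 128))

    # S4: classify; rho > 0.65 means the belt is judged not fastened.
    inp = torch.from_numpy(crop).float().div(255).reshape(1, 1, 128, 128)
    with torch.no_grad():
        rho = model(inp).item()
    if rho > 0.65:
        pass  # trigger the unfastened-safety-belt alarm here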
The invention further provides a terminal device, which can comprise a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor executes the computer program to realize the steps in the embodiment of the method, such as steps S1-S4 shown in fig. 1.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory and executed by the processor to perform the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing the specified functions, which instruction segments are used for describing the execution of the computer program in the terminal device.
The terminal device may be a computing device such as a desktop computer, a notebook computer, a palm computer, a cloud server, etc. The terminal device may include, but is not limited to, a processor, a memory. For example, it may also include input output devices, network access devices, buses, and the like.
The processor may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor; it is the control center of the terminal device and connects the various parts of the entire terminal device using various interfaces and lines.
The memory may be used to store the computer program and/or modules, and the processor implements the various functions of the terminal device by running or executing the computer program and/or modules stored in the memory and invoking data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area may store data created according to use of the device (such as audio data, a phonebook, etc.). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage device.
Finally, the invention also provides a computer-readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the steps of the method as described above, such as the steps S1-S4 shown in fig. 1.
The individual modules/units of the computer program may be stored in a computer-readable storage medium if implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the present invention may implement all or part of the flow of the method of the above embodiment by instructing related hardware through a computer program, which may be stored in a computer-readable storage medium; when executed by a processor, the computer program implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer-readable medium may be appropriately added or deleted according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
While the invention has been particularly shown and described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (7)

1. A vision-based safety belt real-time monitoring method, characterized by comprising the following steps:
acquiring a real-time monitoring infrared image of a driver;
locating a face and performing face pose estimation to obtain face position and face pose information, wherein the face position and face pose information comprises the face height H, the face width W, the face upper-left corner point P1 and lower-right corner point P2, and the face pose estimation angles α, β and γ, where α, β and γ are the pitch angle, yaw angle and roll angle, respectively;
intercepting a safety belt detection area according to the face position and face pose information, specifically comprising: calculating the coordinates of the upper-left corner point P_tl of the safety belt detection area according to formula (1),
(1)
wherein (P_tl_x, P_tl_y) are the coordinates of point P_tl, and (P1_x, P1_y) and (P2_x, P2_y) are the coordinates of the face upper-left corner point P1 and lower-right corner point P2, respectively;
taking the face height H as the reference for the safety belt detection area size L, the size of the safety belt detection area being L = coef × H, with coef taking a value of 1.2;
correcting the P_tl coordinates back to the standard state in which the face directly faces the camera according to formulas (2) and (3), using the face pose estimation angles α, β and γ,
(2)
wherein (P_c_x, P_c_y) is the center coordinate of the face rectangle, and (P_tl_x_roll, P_tl_y_roll) is the coordinate of point P_tl after correcting γ,
(3)
wherein (P_tl_x_r, P_tl_y_r) is the coordinate of point P_tl after correcting α, β and γ; and
inputting the safety belt detection area into a trained classifier to analyze the safety-belt wearing condition.
2. The method of claim 1, wherein the real-time monitoring infrared image is obtained by a fatigue monitoring infrared camera mounted in the cab.
3. The method of claim 1, wherein the classifier is implemented using an optimized convolutional neural network model, wherein the convolutional neural network model uses a narrow and deep fully convolutional network structure comprising 9 convolutional layers and 1 output layer, 7 of the 9 convolutional layers being separable convolutional layers and the rest ordinary convolutional layers, and the convolutional layers using 3×3 and 1×1 convolution kernels for feature extraction and weighting operations.
4. The method of claim 3, wherein optimizing the convolutional neural network model comprises the following steps: pruning the neuron structures whose convolution-kernel weight parameter values are smaller than a threshold in the trained network model, performing iterative training on the pruned model to fine-tune the model parameters, and finally merging the convolution layers and the BatchNorm layers in the model according to formula (4); the output result is a value ρ classifying the safety-belt wearing condition: when ρ > threshold, the safety belt is judged not fastened, otherwise it is judged fastened,
y_bn = λ·(y_conv − E[x])/√(Var[x] + ε) + μ = ŵ∗x + b̂, with ŵ = λ·w_conv/√(Var[x] + ε) and b̂ = λ·(b_conv − E[x])/√(Var[x] + ε) + μ (4)
wherein y_conv is the convolution-layer operation result, and w_conv and b_conv are the weight and bias of the convolution layer, respectively; y_bn is the BatchNorm operation result, λ and μ are the scaling factor and offset of the BatchNorm layer, respectively, ε is a small value preventing the denominator from being 0, and E[x] and Var[x] are the sliding mean and sliding variance of the BatchNorm layer, respectively; ŵ and b̂ are the merged weight and bias.
5. The method of claim 4, wherein the threshold is 0.65.
6. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1-5 when executing the computer program.
7. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1-5.
CN202011012590.1A 2020-09-24 2020-09-24 Vision-based safety belt real-time monitoring method, terminal equipment and storage medium Active CN112132040B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011012590.1A CN112132040B (en) 2020-09-24 2020-09-24 Vision-based safety belt real-time monitoring method, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011012590.1A CN112132040B (en) 2020-09-24 2020-09-24 Vision-based safety belt real-time monitoring method, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112132040A CN112132040A (en) 2020-12-25
CN112132040B (en) 2024-03-15

Family

ID=73840970

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011012590.1A Active CN112132040B (en) 2020-09-24 2020-09-24 Vision-based safety belt real-time monitoring method, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112132040B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298000B (en) * 2021-06-02 2022-10-25 上海大学 Safety belt detection method and device based on infrared camera
CN113553938B (en) * 2021-07-19 2024-05-14 黑芝麻智能科技(上海)有限公司 Seat belt detection method, apparatus, computer device, and storage medium
CN113486835B (en) * 2021-07-19 2024-06-28 黑芝麻智能科技有限公司 Seat belt detection method, apparatus, computer device, and storage medium
CN113822197A (en) * 2021-09-23 2021-12-21 南方电网电力科技股份有限公司 Work dressing identification method and device, electronic equipment and storage medium
CN115123141A (en) * 2022-07-14 2022-09-30 东风汽车集团股份有限公司 Vision-based passenger safety belt reminding device and method

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010113506A (en) * 2008-11-06 2010-05-20 Aisin Aw Co Ltd Occupant position detection device, occupant position detection method, and occupant position detection program
CN102750544A (en) * 2012-06-01 2012-10-24 浙江捷尚视觉科技有限公司 Detection system and detection method of rule-breaking driving that safety belt is not fastened and based on plate number recognition
CN102999749A (en) * 2012-12-21 2013-03-27 广东万安科技股份有限公司 Intelligent safety belt regulation violation event detecting method based on face detection
CN103021179A (en) * 2012-12-28 2013-04-03 佛山市华电智能通信科技有限公司 Real-time monitoring video based safety belt detection method
CN103150556A (en) * 2013-02-20 2013-06-12 西安理工大学 Safety belt automatic detection method for monitoring road traffic
CN103268468A (en) * 2012-07-06 2013-08-28 华南理工大学 Automatic detection method for fastening of safety belts by front sitting persons on motor vehicle
CN104657752A (en) * 2015-03-17 2015-05-27 银江股份有限公司 Deep learning-based safety belt wearing identification method
CN106503673A (en) * 2016-11-03 2017-03-15 北京文安智能技术股份有限公司 A kind of recognition methodss of traffic driving behavior, device and a kind of video acquisition device
CN107944341A (en) * 2017-10-27 2018-04-20 荆门程远电子科技有限公司 Driver based on traffic monitoring image does not fasten the safety belt automatic checkout system
CN109460699A (en) * 2018-09-03 2019-03-12 厦门瑞为信息技术有限公司 A kind of pilot harness's wearing recognition methods based on deep learning
CN109886209A (en) * 2019-02-25 2019-06-14 成都旷视金智科技有限公司 Anomaly detection method and device, mobile unit
WO2019128646A1 (en) * 2017-12-28 2019-07-04 深圳励飞科技有限公司 Face detection method, method and device for training parameters of convolutional neural network, and medium
CN111582077A (en) * 2020-04-23 2020-08-25 广州亚美智造科技有限公司 Safety belt wearing detection method and device based on artificial intelligence software technology

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010113506A (en) * 2008-11-06 2010-05-20 Aisin Aw Co Ltd Occupant position detection device, occupant position detection method, and occupant position detection program
CN102750544A (en) * 2012-06-01 2012-10-24 浙江捷尚视觉科技有限公司 Detection system and detection method of rule-breaking driving that safety belt is not fastened and based on plate number recognition
CN103268468A (en) * 2012-07-06 2013-08-28 华南理工大学 Automatic detection method for fastening of safety belts by front sitting persons on motor vehicle
CN102999749A (en) * 2012-12-21 2013-03-27 广东万安科技股份有限公司 Intelligent safety belt regulation violation event detecting method based on face detection
CN103021179A (en) * 2012-12-28 2013-04-03 佛山市华电智能通信科技有限公司 Real-time monitoring video based safety belt detection method
CN103150556A (en) * 2013-02-20 2013-06-12 西安理工大学 Safety belt automatic detection method for monitoring road traffic
CN104657752A (en) * 2015-03-17 2015-05-27 银江股份有限公司 Deep learning-based safety belt wearing identification method
CN106503673A (en) * 2016-11-03 2017-03-15 北京文安智能技术股份有限公司 A kind of recognition methodss of traffic driving behavior, device and a kind of video acquisition device
CN107944341A (en) * 2017-10-27 2018-04-20 荆门程远电子科技有限公司 Driver based on traffic monitoring image does not fasten the safety belt automatic checkout system
WO2019128646A1 (en) * 2017-12-28 2019-07-04 深圳励飞科技有限公司 Face detection method, method and device for training parameters of convolutional neural network, and medium
CN109460699A (en) * 2018-09-03 2019-03-12 厦门瑞为信息技术有限公司 A kind of pilot harness's wearing recognition methods based on deep learning
CN109886209A (en) * 2019-02-25 2019-06-14 成都旷视金智科技有限公司 Anomaly detection method and device, mobile unit
CN111582077A (en) * 2020-04-23 2020-08-25 广州亚美智造科技有限公司 Safety belt wearing detection method and device based on artificial intelligence software technology

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
Learning efficient convolutional networks through network slimming; Liu Z et al.; Proceedings of the IEEE International Conference on Computer Vision; 2736-2744 *
NADS-Net: A Nimble Architecture for Driver and Seat Belt Detection via Convolutional Neural Networks; Chun S et al.; 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW); 2413-2421 *
Study of Detection Method on Real-time and High Precision Driver Seatbelt; Yang Dongsheng et al.; 2020 Chinese Control And Decision Conference (CCDC); 79-86 *
Lightweight convolutional neural network design based on MobileNet and YOLOv3; 邵伟平 et al.; Journal of Computer Applications, Vol. 40, No. S1; 8-13 *
Driver detection and safety belt recognition based on convolutional neural networks; 詹益俊 et al.; Journal of Guilin University of Electronic Technology, Vol. 39, No. 3; 211-217 *
Research on recognition methods for driver safety belt wearing based on computer vision; 张晋; China Master's Theses Full-text Database (Engineering Science and Technology II), No. 2; C035-93 *
Automatic safety belt recognition algorithm based on expressway traffic images; 石时需 et al.; Computer and Modernization, No. 5; 118-121, 126, 158 *
Application of deep learning in driver safety belt detection; 霍星 et al.; Computer Science, Vol. 46, No. S1; 182-187 *
Research on visual perception of driver violation behavior in traffic checkpoint images; 刘操; China Doctoral Dissertations Full-text Database (Engineering Science and Technology II), No. 12; C034-36 *

Also Published As

Publication number Publication date
CN112132040A (en) 2020-12-25

Similar Documents

Publication Publication Date Title
CN112132040B (en) Vision-based safety belt real-time monitoring method, terminal equipment and storage medium
Li et al. Deep learning approaches on pedestrian detection in hazy weather
Bauer et al. FPGA-GPU architecture for kernel SVM pedestrian detection
CN111144242B (en) Three-dimensional target detection method, device and terminal
WO2021016873A1 (en) Cascaded neural network-based attention detection method, computer device, and computer-readable storage medium
CN109492609B (en) Method for detecting lane line, vehicle and computing equipment
CN109977776A (en) A kind of method for detecting lane lines, device and mobile unit
CN110443245B (en) License plate region positioning method, device and equipment in non-limited scene
CN110929655A (en) Lane line identification method in driving process, terminal device and storage medium
US11687886B2 (en) Method and device for identifying number of bills and multiple bill areas in image
CN112862845A (en) Lane line reconstruction method and device based on confidence evaluation
CN111582032A (en) Pedestrian detection method and device, terminal equipment and storage medium
CN111667504A (en) Face tracking method, device and equipment
CN110834667A (en) Vehicle steering control method and device, vehicle, terminal device and storage medium
CN112364846A (en) Face living body identification method and device, terminal equipment and storage medium
CN116863124B (en) Vehicle attitude determination method, controller and storage medium
CN111709377B (en) Feature extraction method, target re-identification method and device and electronic equipment
CN116342607B (en) Power transmission line defect identification method and device, electronic equipment and storage medium
CN111095295A (en) Object detection method and device
CN112101139B (en) Human shape detection method, device, equipment and storage medium
CN113239738B (en) Image blurring detection method and blurring detection device
CN112966556B (en) Moving object detection method and system
CN113888740A (en) Method and device for determining binding relationship between target license plate frame and target vehicle frame
CN113033256B (en) Training method and device for fingertip detection model
Mutholib et al. Development of portable automatic number plate recognition system on android mobile phone

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant