CN113221838A - Deep learning-based civilized elevator taking detection system and method - Google Patents

Deep learning-based civilized elevator taking detection system and method

Info

Publication number
CN113221838A
CN113221838A (application CN202110613028.2A)
Authority
CN
China
Prior art keywords
video
civilized
behavior
deep learning
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110613028.2A
Other languages
Chinese (zh)
Inventor
张天骏
晋志华
陈可鑫
王晓杰
李浩方
陈云飞
李孟洲
马军强
陈慧颖
肖瑞洁
朱爽爽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou University
Original Assignee
Zhengzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou University
Priority to CN202110613028.2A
Publication of CN113221838A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Indicating And Signalling Devices For Elevators (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of deep learning, and particularly relates to a deep-learning-based civilized elevator-riding detection system and method. The system comprises a video module and a control prompt module: the video module acquires video images inside the car and detects uncivilized behavior in the captured images; the control prompt module gives voice and text prompts and displays pictures of the uncivilized behavior. The invention has high detection precision and high detection speed, and can promptly remind passengers to stop uncivilized behavior; in addition, the uncivilized behavior is recorded and kept in the video storage unit in picture form as video evidence, which facilitates accountability afterwards.

Description

Deep learning-based civilized elevator taking detection system and method
Technical Field
The invention belongs to the technical field of deep learning, and particularly relates to a civilized elevator-riding detection system and method based on deep learning.
Background
An elevator is an enclosed public transport space, so passengers are prone to uncivilized behavior inside it, such as smoking, quarreling, fighting, jumping, littering, taking electric bikes upstairs in the elevator, or harassing other passengers. Although a conventional elevator is fitted with a camera, it cannot stop uncivilized behavior the moment it occurs, which disturbs the passengers who are riding normally.
Disclosure of Invention
In order to solve the problems in the prior art, the invention aims to provide a civilized elevator-riding detection system and method based on deep learning, which have high detection precision and high detection speed, can promptly prompt passengers to stop uncivilized behavior, and provide a good riding environment for passengers.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention provides an civilized elevator taking detection system based on deep learning, which comprises:
the video module is used for acquiring video images in the car and detecting the captured video images in an uncivilized behavior;
and the control prompt module is used for carrying out voice and text prompt and picture display on the non-civilized behaviors.
Further, the video module comprises a video acquisition unit, a video detection unit and a video storage unit;
the video acquisition unit uses two high-definition cameras installed at diagonal corners above the car and is used for acquiring video image information inside the elevator in real time;
the video detection unit is used for detecting uncivilized behavior in the collected video images;
and the video storage unit is used for storing the video image information of the uncivilized behavior and retaining it as video evidence.
Furthermore, the video detection unit comprises a data set labeling unit, a weight parameter training unit and a detection result generation unit;
the data set labeling unit is used for acquiring an uncivilized-behavior data set by simulating uncivilized behavior, labeling the acquired data set and making category labels;
the weight parameter training unit is used for training the weight parameters on the input training set using the YOLOv5 network model;
and the detection result generation unit is used for predicting on the video images acquired by the video acquisition unit, generating bounding boxes and predicting the category of the target image in each bounding box to obtain the detection result.
Furthermore, the control prompt module comprises a display and a loudspeaker installed in the car, which give voice and text prompts and display pictures of the uncivilized behavior.
The invention also provides a civilized elevator-riding detection method based on deep learning, which comprises the following steps:
collecting video images inside the car, and detecting uncivilized behavior in the captured video images;
and giving voice and text prompts and displaying pictures of the uncivilized behavior.
Further, the collecting video images inside the car and detecting uncivilized behavior in the captured video images comprises:
using two high-definition cameras installed at diagonal corners above the car to acquire video image information inside the elevator in real time;
detecting uncivilized behavior in the collected video images;
and storing the video image information of the uncivilized behavior as video evidence.
Further, the detecting uncivilized behavior in the collected video images includes:
acquiring an uncivilized-behavior data set by simulating uncivilized behavior, labeling the acquired data set, and making category labels;
training the weight parameters on the input training set using the YOLOv5 network model;
and predicting on the video images acquired by the video acquisition unit, generating bounding boxes and predicting the category of the target image in each bounding box to obtain the detection result.
Further, the labeling of the obtained data set specifically includes:
screening the acquired video image data, and removing useless data;
labeling the screened image pictures with a rectangular box using labeling software to obtain a TXT label file for each picture, where each record consists of five values (X, Y, W, H, C): (X, Y) is the center coordinate of the rectangular box, W and H are its width and height, and C is the predicted category;
and putting the pictures and the label data into two separate folders to complete the data annotation.
Further, the training of the weight parameters by inputting the training set specifically includes:
setting the network parameters depth_multiple and width_multiple in YOLOv5 to 0.33 and 0.5 respectively, i.e. selecting the YOLOv5s network model, where the depth_multiple parameter changes the depth of the network model;
loading the YOLOv5s pre-trained model;
setting the batch size;
loading the training set;
and starting training to update the YOLOv5s network model parameters.
Further, the generating of bounding boxes and predicting of the category of the target image in each bounding box to obtain the detection result specifically includes:
inputting the acquired image into the trained YOLOv5s network model, computing through its residual neural network to generate the corresponding bounding-box labels, and visualizing the bounding-box label file on the image to produce the detection result, i.e. the final picture annotated with the bounding-box labels.
Compared with the prior art, the invention has the following advantages:
the invention relates to an incrustation elevator-taking detection system based on deep learning, which is characterized in that video images in a car are collected firstly, a training set is generated by simulating incrustation behaviors, a YOLOV5 deep learning method is adopted, training model weights of the training set are input, and finally the trained YOLOV5 network is used for detecting the incrustation behaviors such as smoking, fighting, jumping, litter throwing, upstairs carrying with trolley and the like, so that the detection precision is high, the detection speed is high, and passengers can be reminded to stop the incrustation behaviors in time; in addition, the non-civilized behavior is subjected to data recording and is kept in the video storage unit in a picture form, and the video evidence is kept as a video evidence, so that convenience is provided for pursuing accountability afterwards.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below; those skilled in the art can obtain other drawings based on these drawings without inventive effort.
Fig. 1 is a block diagram of an embodiment of the civilized elevator-riding detection system based on deep learning according to the present invention;
FIG. 2 is a flowchart of an embodiment of the civilized elevator-riding detection method based on deep learning according to the present invention;
FIG. 3 is a flowchart of an embodiment of capturing video images inside the car and detecting uncivilized behavior in the captured video images;
Fig. 4 is a network structure diagram of YOLOv5 according to an embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings; obviously, the described embodiments are only some, not all, of the embodiments of the present invention.
As shown in fig. 1, the civilized elevator-riding detection system based on deep learning of this embodiment includes a video module 11 and a control prompt module 12; the video module 11 is used for acquiring video images inside the car and detecting uncivilized behavior in the captured video images; the control prompt module 12 is used for giving voice and text prompts and displaying pictures of the uncivilized behavior.
Specifically, the video module 11 includes a video acquisition unit 111, a video detection unit 112, and a video storage unit 113. The video acquisition unit 111 uses two high-definition cameras installed at diagonal corners above the car to acquire video image information inside the elevator in real time; the video detection unit 112 detects uncivilized behavior in the acquired video images; the video storage unit 113 stores the video image information of the uncivilized behavior and retains it as video evidence.
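For illustration, a minimal sketch of how the video acquisition unit 111 could read frames from the two in-car cameras, assuming cameras reachable through OpenCV; the device indices and frame handling are illustrative assumptions rather than part of the embodiment.

```python
import cv2

# Two high-definition cameras at diagonal corners above the car.
# The device indices below are placeholders for the real camera addresses.
CAMERA_SOURCES = [0, 1]

def capture_frames(sources=CAMERA_SOURCES):
    """Yield (camera_id, frame) pairs from both in-car cameras."""
    captures = [cv2.VideoCapture(src) for src in sources]
    try:
        while True:
            for cam_id, cap in enumerate(captures):
                ok, frame = cap.read()
                if ok:
                    yield cam_id, frame
    finally:
        for cap in captures:
            cap.release()
```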
More specifically, the video detection unit 112 includes a data set labeling unit 1121, a weight parameter training unit 1122, and a detection result generation unit 1123. The data set labeling unit 1121 obtains an uncivilized-behavior data set by simulating uncivilized behavior, labels the obtained data set, and makes category labels; the weight parameter training unit 1122 trains the weight parameters on the input training set using the YOLOv5 network model; the detection result generation unit 1123 predicts on the video images acquired by the video acquisition unit 111, generates bounding boxes, and predicts the category of the target image in each bounding box to obtain the detection result.
Further, the control prompt module 12 includes a display and a speaker installed in the car, which give voice and text prompts and display pictures of the uncivilized behavior.
As shown in fig. 2, this embodiment further provides a civilized elevator-riding detection method based on deep learning, which includes the following steps:
and step S11, acquiring a video image in the car, and detecting the captured video image for the non-civilized behavior.
And step S12, carrying out voice and text prompt and picture display of the non-civilized behaviors.
As shown in fig. 3, the step S11 is implemented as follows:
and step S21, two high-definition cameras arranged on opposite angles above the elevator car are adopted to acquire the video image information inside the elevator in real time.
Step S22, detecting the non-civilized behavior of the collected video image; specifically, the method includes steps S221 to S223.
Step S221, an uncivilized behavior data set is obtained by simulating the uncivilized behavior, and the obtained data set is labeled to make a category label.
In step S222, because real-time information is required, the YOLOv5 network model (shown in fig. 4), which achieves fast real-time detection, is adopted, and the weight parameters are trained on the training set from step S221.
Step S223, predicting on the video images acquired by the video acquisition unit, generating bounding boxes and predicting the category of the target image in each bounding box to obtain the detection result.
Step S23, the video image information of the uncivilized behavior is stored as video evidence, and the picture with the bounding box and predicted category is saved in the video storage unit.
In step S221, the labeling of the obtained data set specifically includes:
Step S31, screening the acquired video image data and eliminating useless data to improve the detection accuracy.
Step S32, labeling the screened image pictures with a rectangular box using labeling software to obtain a TXT label file for each picture, where each record consists of five values (X, Y, W, H, C): (X, Y) is the center coordinate of the rectangular box, W and H are its width and height, and C is the predicted category.
Step S33, putting the pictures and the label data into two separate folders to complete the data annotation; a minimal label-writing sketch is given below.
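For illustration, a minimal sketch of writing one picture's label file from the five values (X, Y, W, H, C) described in steps S31 to S33. Note that the stock YOLOv5 data loader expects the class index first and coordinates normalized to [0, 1], so the sketch writes lines as "C X Y W H"; the file names and the example box are illustrative assumptions.

```python
from pathlib import Path

def write_label(txt_path, boxes, img_w, img_h):
    """Write one picture's label file from (X, Y, W, H, C) tuples given in pixels.

    Each output line is "C X Y W H" with the box centre, width and height
    normalized to [0, 1], which is the ordering the stock YOLOv5 loader expects.
    """
    lines = []
    for x_center, y_center, w, h, c in boxes:
        lines.append(f"{c} {x_center / img_w:.6f} {y_center / img_h:.6f} "
                     f"{w / img_w:.6f} {h / img_h:.6f}")
    Path(txt_path).parent.mkdir(parents=True, exist_ok=True)
    Path(txt_path).write_text("\n".join(lines) + "\n")

# Illustrative example: one "smoking" box (class 0) in a 1920x1080 frame.
write_label("labels/frame_0001.txt", [(960, 540, 200, 300, 0)], 1920, 1080)
```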
Step S222 trains the weight parameters on the input training set, and specifically includes:
Step S41, the network parameters depth_multiple and width_multiple in YOLOv5 are set to 0.33 and 0.5 respectively, i.e. the YOLOv5s network model is selected. The depth_multiple parameter changes the depth of the network model, that is, the size of the residual neural network inside it; reducing the network depth brings a large speed improvement.
Step S42, the YOLOv5s pre-trained model is loaded so that the model starts from good initialization parameters.
Step S43, the batch size is set; a larger batch size requires more capable deep learning hardware.
Step S44, the training set is loaded.
Step S45, training is started to update the YOLOv5s network model parameters.
After the network model has been trained and the weight parameters obtained, the training set can be supplemented and the weights trained further, making the detection of uncivilized behavior more accurate. A training sketch under illustrative assumptions is given below.
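A sketch of steps S41 to S45 under stated assumptions: the public ultralytics/yolov5 repository is checked out locally and the command is run from its root (its yolov5s model YAML already sets depth_multiple to 0.33 and width_multiple to 0.50); the data-set YAML name, epoch count and batch size are illustrative.

```python
import subprocess

# S41: the yolov5s configuration corresponds to depth_multiple=0.33, width_multiple=0.50.
# S42: --weights yolov5s.pt loads the pre-trained model for initialization.
# S43: --batch-size is limited by the available deep learning hardware.
# S44: --data points at the labeled uncivilized-behavior training set (illustrative name).
# S45: the run updates the YOLOv5s network parameters.
subprocess.run([
    "python", "train.py",
    "--weights", "yolov5s.pt",
    "--data", "elevator.yaml",
    "--img", "640",
    "--batch-size", "16",
    "--epochs", "100",
], check=True)
```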
Step S223 generates bounding boxes and predicts the category of the target image in each bounding box to obtain the detection result, which specifically includes: inputting the acquired image into the trained YOLOv5s network model, computing through its residual neural network to generate the corresponding bounding-box labels, and visualizing the bounding-box label file on the image to produce the detection result, i.e. the final picture annotated with the bounding-box labels. A minimal inference sketch follows.
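A minimal inference sketch matching the description above, assuming the trained weights are saved as best.pt (an illustrative file name) and loaded through the public torch.hub interface of ultralytics/yolov5.

```python
import torch

# Load the custom-trained YOLOv5s weights through torch.hub.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

def detect(frame):
    """Run the trained model on one video frame.

    Returns the detections (one row per box: x1, y1, x2, y2, confidence,
    class index) and the frame with the bounding-box labels drawn on it.
    """
    results = model(frame)
    annotated = results.render()[0]   # image annotated with bounding-box labels
    boxes = results.xyxy[0]           # detection tensor for this frame
    return boxes, annotated
```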
Several specific examples of uncivilized riding behavior are given below to provide a better understanding of the present invention.
First, the detection of smoking behavior comprises the following steps:
Step S51, pictures of smoke and flame are obtained by simulating a smoker's behavior, the obtained data set is labeled, and category labels are made.
Step S52, the weight parameters are trained on the input training set using the YOLOv5 network model.
Step S53, the video images acquired by the video acquisition unit are fed to the model for prediction, bounding boxes are generated, and the category of the target image in each bounding box is predicted to obtain the detection result.
Step S54, if flame or smoke is detected, the video detection unit judges that someone is smoking; the control prompt module displays a prompt such as "No smoking" on the display and gives a voice warning, and the elevator does not operate until the smoke and flame disappear, after which it runs normally. A sketch of this control logic is given below.
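A minimal sketch of the control logic in step S54, assuming hypothetical elevator, display and speaker interfaces and illustrative class indices; it is a reading of the behaviour described above, not the patented implementation.

```python
# Illustrative class indices for the smoking example.
SMOKE, FLAME = 0, 1

def handle_smoking(boxes, elevator, display, speaker):
    """Hold the elevator and warn the passenger while smoke or flame is detected."""
    classes = {int(b[5]) for b in boxes}   # class index is the last column of each detection row
    if SMOKE in classes or FLAME in classes:
        display.show_text("No smoking")    # text prompt on the in-car display
        speaker.play("no_smoking.wav")     # voice warning
        elevator.hold()                    # elevator does not run until the car is clear
    else:
        elevator.release()                 # smoke and flame gone: resume normal operation
```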
Secondly, the detection of fighting behavior comprises the following steps:
Step S61, fighting pictures are obtained by simulating fighting behavior, the obtained data set is labeled, and category labels are made.
Step S62, the weight parameters are trained on the input training set using the YOLOv5 network model.
Step S63, the video images acquired by the video acquisition unit are fed to the model for prediction, bounding boxes are generated, and the category of the target image in each bounding box is predicted to obtain the detection result.
Step S64, if the video detection unit judges that fighting is occurring, the control prompt module displays a prompt such as "No fighting" on the display and gives a voice warning, and the elevator does not operate until the fighting stops, after which it runs normally.
Thirdly, the detection of jumping behavior comprises the following steps:
Step S71, jumping pictures are obtained by simulating a jumping person's behavior, the obtained data set is labeled, and category labels are made.
Step S72, the weight parameters are trained on the input training set using the YOLOv5 network model.
Step S73, the video images acquired by the video acquisition unit are fed to the model for prediction, bounding boxes are generated, and the category of the target image in each bounding box is predicted to obtain the detection result.
Step S74, if the video detection unit determines that jumping is occurring, the control prompt module displays a prompt such as "Do not jump vigorously" on the display and sends the jumping photo stored in the video storage unit to the display as a warning.
Fourthly, the detection of littering comprises the following steps:
Garbage is generally a small object, and directly applying YOLOv5 gives poor detection results; therefore, an additional feature layer for detecting small target objects is added on top of YOLOv5 so that small targets can be detected.
The model is then trained on a common garbage data set so that common garbage targets can be detected.
Littering is judged by whether garbage is present on the car floor before and after the passenger rides: if the video detection unit detects garbage after the passenger has ridden, the car door does not open when the elevator reaches the destination floor and the control prompt module asks the passenger to pick up the garbage; only after the passenger picks it up does the car door open. A sketch of this judgment logic is given below.
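A minimal sketch of the before/after littering judgment and the door-hold behaviour; the detection function, class index, and the door and display interfaces are illustrative assumptions.

```python
GARBAGE = 4  # illustrative class index for common garbage

def garbage_on_floor(boxes):
    """Return True if any detection belongs to the garbage class."""
    return any(int(b[5]) == GARBAGE for b in boxes)

def handle_littering(frame_before, frame_after, detect, door, display):
    """Compare the car floor before and after the ride; hold the door if new garbage appears.

    `detect` is any function returning (boxes, annotated_frame), such as the
    inference sketch shown earlier.
    """
    before = garbage_on_floor(detect(frame_before)[0])
    after = garbage_on_floor(detect(frame_after)[0])
    if after and not before:               # new garbage appeared during the ride
        door.keep_closed()                 # car door stays shut at the destination floor
        display.show_text("Please pick up your litter")
    else:
        door.allow_open()
```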
Fifthly, the detection of taking an electric bike into the elevator:
For the behavior of a passenger taking an electric bike upstairs, YOLOv5 is used for detection and the weight parameters are trained on a large data set. When the video detection unit detects an electric bike in the elevator, the car door cannot close, and the control prompt module reminds the passenger that electric bikes may not be taken upstairs in the elevator.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs. The use of the terms "a" or "an" and the like in the description and in the claims of this application does not necessarily denote a limitation of quantity. The word "comprising" or "comprises" and the like means that the element or item preceding the word covers the elements or items listed after the word and their equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect.
Finally, it is to be noted that: the above description is only a preferred embodiment of the present invention, and is only used to illustrate the technical solutions of the present invention, and not to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A civilized elevator-riding detection system based on deep learning, characterized by comprising:
a video module, used for acquiring video images inside the car and detecting uncivilized behavior in the captured video images;
and a control prompt module, used for giving voice and text prompts and displaying pictures of the uncivilized behavior.
2. The deep-learning-based civilized elevator-riding detection system of claim 1, wherein the video module comprises a video acquisition unit, a video detection unit, and a video storage unit;
the video acquisition unit uses two high-definition cameras installed at diagonal corners above the car and is used for acquiring video image information inside the elevator in real time;
the video detection unit is used for detecting uncivilized behavior in the collected video images;
and the video storage unit is used for storing the video image information of the uncivilized behavior and retaining it as video evidence.
3. The deep-learning-based civilized elevator-riding detection system of claim 2, wherein the video detection unit comprises a data set labeling unit, a weight parameter training unit and a detection result generation unit;
the data set labeling unit is used for acquiring an uncivilized-behavior data set by simulating uncivilized behavior, labeling the acquired data set and making category labels;
the weight parameter training unit is used for training the weight parameters on the input training set using the YOLOv5 network model;
and the detection result generation unit is used for predicting on the video images acquired by the video acquisition unit, generating bounding boxes and predicting the category of the target image in each bounding box to obtain the detection result.
4. The deep-learning-based civilized elevator-riding detection system of claim 1, wherein the control prompt module comprises a display and a speaker installed in the car, which give voice and text prompts and display pictures of the uncivilized behavior.
5. A civilized elevator-riding detection method based on deep learning, characterized by comprising the following steps:
collecting video images inside the car, and detecting uncivilized behavior in the captured video images;
and giving voice and text prompts and displaying pictures of the uncivilized behavior.
6. The deep-learning-based civilized elevator-riding detection method of claim 5, wherein the collecting video images inside the car and detecting uncivilized behavior in the captured video images comprises:
using two high-definition cameras installed at diagonal corners above the car to acquire video image information inside the elevator in real time;
detecting uncivilized behavior in the collected video images;
and storing the video image information of the uncivilized behavior as video evidence.
7. The deep-learning-based civilized elevator-riding detection method of claim 6, wherein the detecting uncivilized behavior in the collected video images comprises:
acquiring an uncivilized-behavior data set by simulating uncivilized behavior, labeling the acquired data set, and making category labels;
training the weight parameters on the input training set using the YOLOv5 network model;
and predicting on the video images acquired by the video acquisition unit, generating bounding boxes and predicting the category of the target image in each bounding box to obtain the detection result.
8. The deep-learning-based civilized elevator-riding detection method of claim 7, wherein the labeling of the obtained data set specifically comprises:
screening the acquired video image data and removing useless data;
labeling the screened image pictures with a rectangular box using labeling software to obtain a TXT label file for each picture, where each record consists of five values (X, Y, W, H, C): (X, Y) is the center coordinate of the rectangular box, W and H are its width and height, and C is the predicted category;
and putting the pictures and the label data into two separate folders to complete the data annotation.
9. The deep-learning-based civilized elevator-riding detection method of claim 8, wherein the training of the weight parameters on the input training set specifically comprises:
setting the network parameters depth_multiple and width_multiple in YOLOv5 to 0.33 and 0.5 respectively, i.e. selecting the YOLOv5s network model, where the depth_multiple parameter changes the depth of the network model;
loading the YOLOv5s pre-trained model;
setting the batch size;
loading the training set;
and starting training to update the YOLOv5s network model parameters.
10. The deep-learning-based civilized elevator-riding detection method of claim 9, wherein the generating of bounding boxes and predicting of the category of the target image in each bounding box to obtain the detection result specifically comprises:
inputting the acquired image into the trained YOLOv5s network model, computing through its residual neural network to generate the corresponding bounding-box labels, and visualizing the bounding-box label file on the image to produce the detection result, i.e. the final picture annotated with the bounding-box labels.
CN202110613028.2A 2021-06-02 2021-06-02 Deep learning-based civilized elevator taking detection system and method Pending CN113221838A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110613028.2A CN113221838A (en) 2021-06-02 2021-06-02 Deep learning-based civilized elevator taking detection system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110613028.2A CN113221838A (en) 2021-06-02 2021-06-02 Deep learning-based civilized elevator taking detection system and method

Publications (1)

Publication Number Publication Date
CN113221838A true CN113221838A (en) 2021-08-06

Family

ID=77082261

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110613028.2A Pending CN113221838A (en) 2021-06-02 2021-06-02 Deep learning-based civilized elevator taking detection system and method

Country Status (1)

Country Link
CN (1) CN113221838A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110002315A (en) * 2018-11-30 2019-07-12 浙江新再灵科技股份有限公司 Vertical ladder electric vehicle detection method and warning system based on deep learning
CN110766098A (en) * 2019-11-07 2020-02-07 中国石油大学(华东) Traffic scene small target detection method based on improved YOLOv3
CN111353377A (en) * 2019-12-24 2020-06-30 浙江工业大学 Elevator passenger number detection method based on deep learning
CN111563557A (en) * 2020-05-12 2020-08-21 山东科华电力技术有限公司 Method for detecting target in power cable tunnel
CN112613350A (en) * 2020-12-04 2021-04-06 河海大学 High-resolution optical remote sensing image airplane target detection method based on deep neural network
CN112733676A (en) * 2020-12-31 2021-04-30 青岛海纳云科技控股有限公司 Method for detecting and identifying garbage in elevator based on deep learning

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113392820A (en) * 2021-08-17 2021-09-14 南昌虚拟现实研究院股份有限公司 Dynamic gesture recognition method and device, electronic equipment and readable storage medium
CN113392820B (en) * 2021-08-17 2021-11-30 南昌虚拟现实研究院股份有限公司 Dynamic gesture recognition method and device, electronic equipment and readable storage medium
US20230188671A1 (en) * 2021-12-09 2023-06-15 Anhui University Fire source detection method and device under condition of small sample size and storage medium
US11818493B2 (en) * 2021-12-09 2023-11-14 Anhui University Fire source detection method and device under condition of small sample size and storage medium
CN114332666A (en) * 2022-03-11 2022-04-12 齐鲁工业大学 Image target detection method and system based on lightweight neural network model

Similar Documents

Publication Publication Date Title
CN113221838A (en) Deep learning-based civilized elevator taking detection system and method
WO2021159604A1 (en) Monitoring system, monitoring method, and monitoring device for railway train
CN111091072A (en) YOLOv 3-based flame and dense smoke detection method
CN108154236A (en) For assessing the technology of the cognitive state of group's level
CN107977646B (en) Partition delivery detection method
JP2006068315A5 (en)
CN110002315A (en) Vertical ladder electric vehicle detection method and warning system based on deep learning
JP2004021495A (en) Monitoring system and monitoring method
CN110619277A (en) Multi-community intelligent deployment and control method and system
CN107944434A (en) A kind of alarm method and terminal based on rotating camera
CN109761118A (en) Wisdom ladder networking control method and system based on machine vision
CN110188644A (en) A kind of staircase passenger's hazardous act monitoring system and method for view-based access control model analysis
CN109305490B (en) Intelligent classification dustbin based on image recognition technology
CN112507760B (en) Method, device and equipment for detecting violent sorting behaviors
CN112466003A (en) Vehicle state detection method, device, server and storage medium
CN113688761A (en) Pedestrian behavior category detection method based on image sequence
CN112818871A (en) Target detection method of full-fusion neural network based on half-packet convolution
CN112206541A (en) Game plug-in identification method and device, storage medium and computer equipment
CN114913460A (en) Electric vehicle elevator entering real-time detection method based on convolutional neural network
CN104077571A (en) Method for detecting abnormal behavior of throng by adopting single-class serialization model
CN113569710A (en) Elevator car stopping method, device, camera equipment, storage medium and system
CN108924482B (en) Video recording method and system
CN116840835B (en) Fall detection method, system and equipment based on millimeter wave radar
CN116443682A (en) Intelligent elevator control system
CN110674743A (en) Tumble detection method based on triaxial acceleration data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210806