CN114229646B - Elevator control method, elevator and elevator detection system - Google Patents

Elevator control method, elevator and elevator detection system

Info

Publication number
CN114229646B
CN114229646B
Authority
CN
China
Prior art keywords
elevator
joint point
information
point information
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111635787.5A
Other languages
Chinese (zh)
Other versions
CN114229646A (en)
Inventor
钟晨初
林晓坤
李成文
董晓楠
田文龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Huichuan Control Technology Co Ltd
Original Assignee
Suzhou Huichuan Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Huichuan Control Technology Co Ltd filed Critical Suzhou Huichuan Control Technology Co Ltd
Priority to CN202111635787.5A
Publication of CN114229646A
Application granted
Publication of CN114229646B
Legal status: Active
Anticipated expiration

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66B ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B5/00 Applications of checking, fault-correcting, or safety devices in elevators
    • B66B5/0006 Monitoring devices or performance analysers
    • B66B5/0012 Devices monitoring the users of the elevator system
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66B ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B1/00 Control systems of elevators in general
    • B66B1/02 Control systems without regulation, i.e. without retroactive action
    • B66B1/06 Control systems without regulation, i.e. without retroactive action, electric
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66B ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B1/00 Control systems of elevators in general
    • B66B1/34 Details, e.g. call counting devices, data transmission from car to control system, devices giving information to the control system
    • B66B1/3415 Control system configuration and the data transmission or communication within the control system
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66B ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B5/00 Applications of checking, fault-correcting, or safety devices in elevators
    • B66B5/02 Applications of checking, fault-correcting, or safety devices in elevators responsive to abnormal operating conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Indicating And Signalling Devices For Elevators (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an elevator control method, an elevator and an elevator detection system. The elevator control method comprises the following steps: acquiring image data of passengers in an elevator, and extracting human body contour information from the image data by using a first preset convolutional neural network model; identifying joint point information in the human body contour information through a second preset convolutional neural network model; when the joint point information is complete, sending the joint point information to a server, so that the server, on recognizing from the joint point information that the riding state of the passenger is a falling state, sends control information corresponding to the falling state to the elevator; and receiving the control information sent by the server, and controlling the elevator to run according to the control information. The falling state of passengers is thus identified accurately, the running of the elevator is controlled accordingly, and the safety of taking the elevator is improved.

Description

Elevator control method, elevator and elevator detection system
Technical Field
The application relates to the technical field of elevators, in particular to an elevator control method, an elevator and an elevator detection system.
Background
With the rapid development of urbanization, elevators are commonly installed in high-rise buildings to facilitate people's daily lives. If an accident such as a fall occurs while a person is using an elevator and is not handled in time, a serious safety incident may result. At present, a camera device installed in the elevator transmits image data to a worker, who judges whether a passenger has fallen. This approach requires a great deal of manpower, is prone to misjudgment and missed detection, and may therefore fail to deal with a safety incident in time.
Disclosure of Invention
The main aim of the application is to provide an elevator control method, an elevator and an elevator detection system that accurately identify the riding state of passengers and improve the safety of taking the elevator.
To achieve the above object, the present application provides an elevator control method including:
acquiring image data of passengers in an elevator, and extracting human body contour information in the image data by using a first preset convolutional neural network model;
identifying joint point information in the human body contour information through a second preset convolutional neural network model;
when the joint point information is complete, the joint point information is sent to a server, so that when the server recognizes that the riding state of the passenger is a falling state according to the joint point information, control information corresponding to the falling state is sent to an elevator;
and receiving control information sent by the server, and controlling the elevator to run according to the control information.
Optionally, the step of acquiring image data of passengers in the elevator comprises:
acquiring video data acquired by a camera device;
and decoding the video data to obtain the image data.
Optionally, before the step of identifying the joint point information in the human body contour information through the second preset convolutional neural network model, the method further includes:
acquiring the number of human bodies contained in the human body contour information;
and when the number of the human bodies is determined to be one, executing the step of identifying joint point information in the human body contour information through a second preset convolutional neural network model.
Optionally, the joint point information includes joint point coordinates and connection relationships between the joint points, and before the step of sending the joint point information to the server when the joint point information is complete, the method further includes:
determining whether a preset joint point exists in the image data according to the joint point coordinates and the connection relation;
and when the preset joint point is determined to exist, determining that the joint point information is complete.
In addition, the application also provides an elevator control method applied to a server, comprising the following steps:
receiving joint point information sent by an elevator;
inputting the joint point information into an input layer of a behavior recognition model, and determining the riding state of a passenger according to the output result of the behavior recognition model;
and controlling the operation of the elevator according to the riding state.
Optionally, the step of controlling the operation of the elevator according to the riding state comprises:
and when the riding state is a falling state, controlling the elevator to stop running.
Optionally, after the step of controlling the elevator to stop running when the riding state is a falling state, the method further comprises:
acquiring the time length of the passenger maintaining the falling state;
and when the duration reaches a first preset duration, starting an alarm to notify staff to handle the situation.
Optionally, the method further comprises:
acquiring the times of determining the riding state as the falling state in a second preset time period;
and when the times reach the preset times, sending warning information to the staff.
To achieve the above object, the present application also provides an elevator comprising a memory, a processor and an elevator control program stored on the memory and executable on the processor, which elevator control program when executed by the processor implements the steps of the elevator control method according to any one of the above.
To achieve the above object, the present application further provides an elevator detection system, on which an elevator control program is stored, which when executed by a processor implements the steps of the elevator control method according to any of the above embodiments.
The application provides an elevator control method, an elevator and an elevator detection system, wherein the method comprises the following steps: acquiring image data of passengers in the elevator; extracting human body contour information from the image data by using a first preset convolutional neural network model; identifying joint point information in the human body contour information by using a second preset convolutional neural network model; sending the joint point information to the server when the joint point information is complete, so that the server, on recognizing from the joint point information that the riding state of the passengers is a falling state, sends control information corresponding to the falling state to the elevator; and receiving the control information sent by the server and controlling the elevator to operate according to the control information. The riding state of the passenger is determined using convolutional neural network models, and when the riding state is a falling state the operation of the elevator is controlled, so that the safety of passengers using the elevator is improved.
Drawings
Fig. 1 is a schematic diagram of a hardware architecture of an elevator according to an embodiment of the present application;
fig. 2 is a flow chart of an embodiment of the elevator control method of the present application;
fig. 3 is a system schematic diagram of the elevator detection system used by the elevator control method of the present application;
fig. 4 is a flow chart of a third embodiment of the elevator control method of the present application.
The realization, functional characteristics and advantages of the present application will be further described with reference to the embodiments, referring to the attached drawings.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
As an implementation, referring to fig. 1, fig. 1 is a schematic diagram of a hardware architecture of an elevator according to an embodiment of the application, and as shown in fig. 1, the elevator may include a processor 101, such as a CPU, a memory 102, and a communication bus 103, where the communication bus 103 is used to implement connection communication between these modules.
The memory 102 may be a high-speed RAM memory or a stable (non-volatile) memory, such as a disk memory. As shown in fig. 1, the memory 102, which serves as a storage medium of the elevator detection system, may include an elevator control program; and the processor 101 may be configured to call the elevator control program stored in the memory 102 and perform the following operations:
acquiring image data of passengers in an elevator, and extracting human body contour information in the image data by using a first preset convolutional neural network model;
identifying joint point information in the human body contour information through a second preset convolutional neural network model;
when the joint point information is complete, the joint point information is sent to a server, so that when the server recognizes that the riding state of the passenger is a falling state according to the joint point information, control information corresponding to the falling state is sent to an elevator;
and receiving control information sent by the server, and controlling the elevator to run according to the control information.
In one embodiment, the processor 101 may be configured to call an elevator control program stored in the memory 102 and perform the following operations:
acquiring video data acquired by a camera device;
and decoding the video data to obtain the image data.
In one embodiment, the processor 101 may be configured to call an elevator control program stored in the memory 102 and perform the following operations:
acquiring the number of human bodies contained in the human body contour information;
and when the number of the human bodies is determined to be one, executing the step of identifying joint point information in the human body contour information through a second preset convolutional neural network model.
In one embodiment, the processor 101 may be configured to call an elevator control program stored in the memory 102 and perform the following operations:
determining whether a preset joint point exists in the image data according to the joint point coordinates and the connection relation;
and when the existence of the preset articulation point is determined, determining that the articulation point information is complete.
In one embodiment, the processor 101 may be configured to call an elevator control program stored in the memory 102 and perform the following operations:
receiving joint point information sent by an elevator;
inputting the joint point information into an input layer of a behavior recognition model, and determining the riding state of a passenger according to the output result of the behavior recognition model;
and controlling the operation of the elevator according to the riding state.
In one embodiment, the processor 101 may be configured to call an elevator control program stored in the memory 102 and perform the following operations:
and when the riding state is a falling state, controlling the elevator to stop running.
In one embodiment, the processor 101 may be configured to call an elevator control program stored in the memory 102 and perform the following operations:
acquiring the time length of the passenger maintaining the falling state;
and when the duration reaches a first preset duration, starting an alarm to notify staff to handle the situation.
In one embodiment, the processor 101 may be configured to call an elevator control program stored in the memory 102 and perform the following operations:
acquiring the times of determining the riding state as the falling state in a second preset time period;
and when the times reach the preset times, sending warning information to the staff.
Referring to fig. 3, fig. 3 is a system schematic diagram of an elevator detection system as set forth herein.
The system is composed of a cloud server 1, a cloud server database 2, general edge computing devices 3 and 7, local databases 4 and 8, camera devices 5 and 6 and a property terminal computer 9. The cloud server 1 is respectively connected with the cloud database 2, the edge computing device 3 and the edge computing device 7, the edge computing device 3 is respectively connected with the local database 4, the elevator camera device 5 and the elevator camera device 6, and the edge computing device 7 is respectively connected with the local database 8 and the property terminal computer 9.
Referring to fig. 2, fig. 2 is a schematic flow chart of an embodiment of an elevator control method of the present application, the elevator control method comprising:
step S10, acquiring image data of passengers in an elevator, and extracting human body contour information in the image data by using a first preset convolutional neural network model;
the execution body of this embodiment is an elevator, which may be a straight ladder or an escalator, and the present application does not limit the type of elevator. The elevator comprises a general edge computing means 3 and a local database 4, camera means 5 and 6. The image data of passengers in the elevator can be acquired by adopting the image pick-up devices 5 and 6, so that the image data in different directions can be acquired, and the accuracy of subsequently identifying the states of the passengers is improved. Alternatively, in the present embodiment, it may be determined to activate two image pickup devices or one image pickup device according to the use condition of the user elevator. For example, when it is determined that no passenger exists in the elevator or that a plurality of passengers exist in the elevator, it is determined that only one camera is started, so that the situation that two cameras are started simultaneously without analyzing the states of the passengers is avoided, and energy consumption is excessive.
The human body contour information includes contours of the head, waist, legs, hands, and the like.
In this embodiment, after the elevator acquires the image data by the image pickup device, the image data is sent to the edge computing device, and the human body contour information in the image data is extracted by the edge computing device.
Specifically, video data collected by the camera device is acquired and decoded to obtain the image data it contains, which provides the basis for the subsequent control of elevator operation.
The first preset convolutional neural network model is a model trained in advance on experimental data. Its training process is as follows:
Original image data is acquired, human body contour information is extracted from the image data, and the human body contour information contained in the image data is labeled.
In this embodiment, image data of a preset number of passengers in different riding states can be acquired and input into the input layer of the convolutional neural network model for training. The human body contour information produced by the output layer is compared with the labeled human body contour information, the parameters of each layer of the network are adjusted in reverse according to the comparison result, and the first preset convolutional neural network model is obtained when the accuracy of the output contour information reaches a preset range, for example when the accuracy of the output contour information relative to the labeled contour information reaches a preset value. The preset number determines the size of the training set: a larger training set generally trains better, while a smaller one trains faster, so the preset number may be, for example, 1,000 or 50,000 and can be chosen to suit the actual application; it is not specifically limited here.
Step S20, identifying joint point information in the human body contour information through a second preset convolutional neural network model;
in this embodiment, after the edge computing device extracts the human body contour information in the image data, the edge computing device further identifies the joint point information in the human body contour information through the second preset convolutional neural network model.
The joint point information includes the joint points of the left eye, right eye, nose, mouth, neck, left shoulder, left elbow, left hand, right shoulder, right elbow, right hand, left hip, left knee, left foot, right hip, right knee, right foot, and so on; this list is not limiting.
The second preset convolutional neural network model is likewise a model trained in advance on experimental data. Its training process is as follows:
Original human body contour information is acquired, joint point information is extracted from it, and the joint point information contained in the contour information is labeled.
In this embodiment, the human body contour information is input into the input layer of the convolutional neural network model for training. The joint point information produced by the output layer is compared with the labeled joint point information, the parameters of each layer of the network are adjusted in reverse according to the comparison result, and the second preset convolutional neural network model is obtained when the accuracy of the output joint point information relative to the labeled joint point information reaches a preset range, for example when it reaches a preset value.
It can be understood that the human body contour information used as sample data for training the second preset convolutional neural network model can be obtained from the output of the first preset convolutional neural network model; the application does not limit this.
Step S30, when the joint point information is complete, the joint point information is sent to a server, so that when the server recognizes that the riding state of the passenger is a falling state according to the joint point information, control information corresponding to the falling state is sent to an elevator;
and step S40, receiving control information sent by the server, and controlling the elevator to run according to the control information.
And when the joint point information in the human body contour information is recognized to be complete according to the second preset convolutional neural network model, determining to send the joint point information to the server.
Whether the joint point information is complete can be determined by judging whether the number of identified joint points reaches a preset value; when the preset value is reached, the joint point information can be determined to be complete.
It can be understood that when the server receives the joint point information and determines the riding state of the passenger from it, the riding state can only be analyzed reliably if the joint point information is complete. Therefore, the elevator's edge computing device first determines whether the joint point information is complete and sends it to the server only when it is, which prevents incomplete joint point information from being sent to the server and reducing the accuracy with which the server determines the passenger's riding state.
Optionally, the joint point information includes the joint point coordinates and the connection relationships between joint points, and the edge computing device may further determine whether a preset joint point exists in the image data according to the joint point coordinates and the connection relationships, and determine that the joint point information is complete when the preset joint point exists. The preset joint point is a joint point that allows the server to analyze the riding state accurately, for example the legs.
In this embodiment, after the joint point information is obtained through the second preset convolutional neural network model, the coordinate information of each joint point is obtained, where the coordinates may be constructed based on the output image produced by the output layer of the second preset convolutional neural network model. Adjacent joint points are connected to obtain the connection relationships between joint points, and whether the preset joint point exists is determined according to the connection relationships and the joint point coordinates. Determining whether the joint point information is complete by checking whether the preset joint point exists in the image data improves the accuracy of identifying the passenger's state.
Further, when the edge computing device of the elevator determines that the joint point information is incomplete, the image data is re-acquired through the camera device so as to acquire the complete joint point information, and accuracy of identifying the riding state of the passengers is improved.
After the server acquires the joint point information, the riding state of the passenger is determined from it. Specifically, the server inputs the joint point information into the behavior recognition model. Optionally, the behavior recognition model may be a binary classifier, and the riding state of the passenger is determined according to the classifier's output.
The binary classifier divides the riding state into two classes: a standing state and a falling state. Judging the riding state of the passenger directly from the classifier's output improves the accuracy of identifying the riding state.
It can be understood that the classifier can recognize the riding state of the passenger from the joint point information, and combining a trained neural network model with the classifier can further improve the recognition accuracy. The training process of the neural network model and the binary classifier follows that of the first and second preset convolutional neural network models and will not be described again.
When the server determines that the riding state of the passenger is a falling state, the falling state of the passenger is sent to the property terminal computer, and an alarm is triggered on the property terminal computer to remind staff to handle the situation.
Further, in this embodiment, when the server triggers the alarm, it also sends control information for stopping the elevator to the elevator, controlling the elevator to stop running and avoiding further safety accidents.
In this embodiment, image data of a passenger in the elevator is acquired; human body contour information is extracted from the image data by using the first preset convolutional neural network model; joint point information is identified in the human body contour information through the second preset convolutional neural network model; when the joint point information is complete, it is sent to the server, so that the server, on recognizing from the joint point information that the passenger's riding state is a falling state, sends control information corresponding to the falling state to the elevator; and the control information sent by the server is received and the elevator operation is controlled according to it. The convolutional neural network models thus determine whether the riding state of the passenger is a falling state, and when it is, the operation of the elevator is controlled accordingly, improving the safety of passengers using the elevator.
Based on the foregoing embodiment, the present application proposes another embodiment. Before the step of identifying the joint point information in the human body contour information through a second preset convolutional neural network model, the method further includes:
step S01, acquiring the number of human bodies contained in the human body contour information;
and step S02, when the number of the human bodies is determined to be one, executing the step of identifying joint point information in the human body contour information through a second preset convolutional neural network model.
In this embodiment, after the human body contour information is obtained, the number of human bodies contained in the human body contour information is determined, and when the number of human bodies is one, the human body contour information is input into a second preset convolutional neural network model for recognition.
It can be understood that if there is only one passenger in the elevator and that passenger falls, for example because of sudden illness, a serious safety incident can result if the situation is not handled in time; whereas when at least two passengers are in the elevator, the other passengers can assist and control the operation of the elevator. Therefore, in this embodiment, the human body contour information is input into the second preset convolutional neural network model only when the number of human bodies in the elevator is determined to be one according to the human body contour information, which reduces energy consumption.
Referring to fig. 4, fig. 4 is a schematic flow chart of another embodiment of the present application. The elevator control method is applied to a server, and comprises the following steps:
step S1, receiving joint point information sent by an elevator;
step S2, inputting the joint point information into an input layer of a behavior recognition model, and determining the riding state of the passenger according to an output result of the behavior recognition model;
and step S3, controlling the operation of the elevator according to the riding state.
After the server acquires the joint point information, the riding state of the passenger is determined from it. Specifically, the server inputs the joint point information into the binary classifier and determines the riding state of the passenger according to the classifier's output.
The binary classifier divides the riding state into two classes: a standing state and a falling state. Judging the riding state of the passenger directly from the classifier's output improves the accuracy of identifying the riding state.
It can be understood that the classifier can recognize the riding state of the passenger from the joint point information, and combining a trained neural network model with the classifier can further improve the recognition accuracy. The training process of the neural network model and the binary classifier follows that of the first and second preset convolutional neural network models and will not be described again.
When the server determines that the riding state of the passenger is a standing state, the elevator is controlled to operate normally; when the server determines that the riding state of the passenger is a falling state, the falling state is sent to the property terminal computer, an alarm is triggered on the property terminal computer, and staff are reminded to handle it.
Further, in this embodiment, when the server triggers the alarm, it sends control information for stopping the elevator to the elevator so that the elevator stops running and further safety accidents are avoided.
Optionally, in this embodiment, when the riding state is a falling state, the elevator is controlled to stop running, avoiding secondary injury caused by continued operation. In this embodiment, the elevator can stop running after reaching the nearest floor.
In this embodiment, after receiving the joint point information sent by the elevator, the server inputs the joint point information into the input layer of the behavior recognition model, determines a riding state according to the output result of the output layer of the behavior recognition model, and controls the operation of the elevator according to the riding state. The running of the elevator is controlled according to the riding state of passengers, so that the intelligent control of the elevator is realized.
Further, when the server identifies from the uploaded joint point information that the passenger is in a falling state, it acquires the duration for which the falling state has been maintained, and starts an alarm to notify the staff when the duration reaches the first preset duration. A passenger may slip accidentally while using the elevator yet recover to a standing state by themselves. Therefore, in this embodiment, the duration of the falling state is acquired, and only when it exceeds the first preset duration is it confirmed that the passenger currently needs assistance, the alarm started, and the staff notified to handle the situation.
In this embodiment, starting the alarm only when the passenger has maintained the falling state longer than the preset duration prevents alarms from being sent to staff when the passenger does not need assistance, and improves the accuracy of controlling the elevator.
Further, in this embodiment, the server counts, according to the joint point information, the number of times a falling state is identified within the second preset duration, and sends warning information to the staff when the count reaches the preset number of times.
In this embodiment, the server counts falls based on the intervals between recognized states. For example, after determining that a passenger is in a falling state at the present moment and subsequently recognizing that the passenger is back in a standing state, the number of recognized falls is counted as 1. When the number of falls recognized within the second preset duration is greater than the preset number of times, it is determined that passengers may be slipping because of water stains or other objects in the elevator, and warning information is sent to the staff so that they can deal with it, ensuring the safety of passengers taking the elevator.
Based on the above embodiments, the present application also provides an elevator comprising a memory, a processor and an elevator control program stored on the memory and executable on the processor, which elevator control program when executed by the processor implements the steps of the elevator control method according to any one of the above.
Based on the above embodiments, the present application further provides an elevator detection system, on which an elevator control program is stored, which when executed by a processor implements the steps of the elevator control method according to any of the above embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present application are merely for describing, and do not represent advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) as described above, including several instructions for causing a terminal device (which may be a smart tv, a mobile phone, a computer, etc.) to perform the method described in the embodiments of the present application.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the claims, and all equivalent structures or equivalent processes using the descriptions and drawings of the present application, or direct or indirect application in other related technical fields are included in the scope of the claims of the present application.

Claims (9)

1. An elevator control method, characterized by being applied to an edge computing device of an elevator, comprising:
acquiring image data of passengers in an elevator, and extracting human body contour information in the image data by using a first preset convolutional neural network model;
identifying joint point information in the human body contour information through a second preset convolutional neural network model, wherein the joint point information comprises joint point coordinates and connection relations of joint points;
determining whether a preset joint point exists in the image data according to the joint point coordinates and the connection relation;
when the preset joint point exists, determining that the joint point information is complete, wherein the preset joint point is a joint point provided for the server side to analyze the riding state;
when the joint point information is complete, the joint point information is sent to a server, so that the server inputs the joint point information into an input layer of a behavior recognition model, the riding state of a passenger is determined according to the output result of the behavior recognition model, and when the riding state of the passenger is recognized to be a falling state according to the joint point information, control information corresponding to the falling state is sent to an elevator;
and receiving control information sent by the server, and controlling the elevator to run according to the control information.
2. The elevator control method of claim 1, wherein the step of acquiring image data of passengers in the elevator comprises:
acquiring video data acquired by a camera device;
and decoding the video data to obtain the image data.
3. The elevator control method according to claim 1, wherein before the step of identifying the joint point information in the human body contour information by the second preset convolutional neural network model, further comprising:
acquiring the number of human bodies contained in the human body contour information;
and when the number of the human bodies is determined to be one, executing the step of identifying joint point information in the human body contour information through a second preset convolutional neural network model.
4. An elevator control method, characterized by being applied to a server, comprising:
receiving joint point information sent by an elevator, wherein the joint point information includes joint point coordinates and connection relationships between joint points, the edge computing device of the elevator having determined whether a preset joint point exists in the image data according to the joint point coordinates and the connection relationships, and having determined that the joint point information is complete when the preset joint point exists;
inputting the joint point information into an input layer of a behavior recognition model, and determining the riding state of a passenger according to the output result of the behavior recognition model;
and controlling the operation of the elevator according to the riding state.
5. The elevator control method of claim 4, wherein the step of controlling the operation of the elevator according to the riding state comprises:
and when the riding state is a falling state, controlling the elevator to stop running.
6. The elevator control method according to claim 5, wherein after the step of controlling the elevator to stop running when the riding state is a falling state, the method further comprises:
acquiring the time length of the passenger maintaining the falling state;
and when the duration reaches a first preset duration, starting an alarm to notify staff to handle the situation.
7. The elevator control method of claim 5, wherein the method further comprises:
acquiring the times of determining the riding state as the falling state in a second preset time period;
and when the times reach the preset times, sending warning information to the staff.
8. Elevator, characterized in that it comprises a memory, a processor and an elevator control program stored on the memory and being operable on the processor, which elevator control program, when being executed by the processor, realizes the steps of the elevator control method according to any one of claims 1-7.
9. Elevator detection system, characterized in that it has stored thereon an elevator control program which, when executed by a processor, implements the steps of the elevator control method according to any of claims 1-7.
CN202111635787.5A 2021-12-28 2021-12-28 Elevator control method, elevator and elevator detection system Active CN114229646B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111635787.5A CN114229646B (en) 2021-12-28 2021-12-28 Elevator control method, elevator and elevator detection system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111635787.5A CN114229646B (en) 2021-12-28 2021-12-28 Elevator control method, elevator and elevator detection system

Publications (2)

Publication Number Publication Date
CN114229646A CN114229646A (en) 2022-03-25
CN114229646B (en) 2024-03-22

Family

ID=80744267

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111635787.5A Active CN114229646B (en) 2021-12-28 2021-12-28 Elevator control method, elevator and elevator detection system

Country Status (1)

Country Link
CN (1) CN114229646B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107911663A (en) * 2017-11-27 2018-04-13 江苏理工学院 A kind of elevator passenger hazardous act intelligent recognition early warning system based on Computer Vision Detection
CN108750899A (en) * 2018-06-15 2018-11-06 武汉理工大学 Staircase automatic emergency stop device and its control method
CN108840192A (en) * 2018-08-14 2018-11-20 北京瑞特森传感科技有限公司 A kind of monitoring method of elevator, device, electronic equipment and storage medium
WO2019128304A1 (en) * 2017-12-29 2019-07-04 南京阿凡达机器人科技有限公司 Human body fall-down detection method and device
CN113837005A (en) * 2021-08-20 2021-12-24 广州杰赛科技股份有限公司 Human body falling detection method and device, storage medium and terminal equipment


Also Published As

Publication number Publication date
CN114229646A (en) 2022-03-25

Similar Documents

Publication Publication Date Title
TWI254254B (en) Person recognizing apparatus, person recognizing method and passage controller
US7734062B2 (en) Action recognition apparatus and apparatus for recognizing attitude of object
CN109516334B (en) Elevator taking protection method, device, system, equipment and storage medium
JP2015097000A (en) Image recognition device and data registration method to the same
JP6573311B2 (en) Face recognition system, face recognition server, and face recognition method
CN111597969A (en) Elevator control method and system based on gesture recognition
WO2021176642A1 (en) Elevator device and elevator control device
KR102203720B1 (en) Method and apparatus for speech recognition
CN107580016A (en) Intelligent Sensing System and its data processing method, storage medium
CN107609474A (en) Body action identification method, device, robot and storage medium
CN115984967A (en) Human body falling detection method, device and system based on deep learning
CN114229646B (en) Elevator control method, elevator and elevator detection system
CN113955594B (en) Elevator control method and device, computer equipment and storage medium
CN110980454A (en) Automatic control method and device for elevator in intelligent community and readable storage medium
CN114332925A (en) Method, system and device for detecting pets in elevator and computer readable storage medium
CN109665387B (en) Intelligent elevator boarding method and device, computer equipment and storage medium
CN110562810A (en) elevator dispatching method, device, computer equipment and storage medium
CN109720945B (en) Elevator allocation method, device, equipment and computer readable storage medium
KR20190002131A (en) User-based elevator apparatus with artificial intelligence type, and method thereof
CN116216470A (en) Elevator door control method and device, electronic equipment and storage medium
CN111386237B (en) User detection device for elevator
CN113903147A (en) Radar-based human body posture distinguishing method, device, equipment and medium
CN114283377A (en) Escalator safety detection method, device, equipment and storage medium
CN109815828A (en) Realize the system and method for initiative alarming or help-seeking behavior detection control
CN115180522A (en) Safety monitoring method and system for hoisting device construction site

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant