CN108622776B - Elevator riding detection system - Google Patents


Info

Publication number: CN108622776B (grant); CN108622776A (application)
Application number: CN201711474231.6A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: user, unit, detection, car, door
Inventors: 野田周平, 横井谦太朗, 村田由香里, 田村聪, 木村纱由美
Current assignee: Toshiba Elevator and Building Systems Corp
Original assignee: Toshiba Elevator Co Ltd
Application filed by Toshiba Elevator Co Ltd
Priority: Japanese Patent Application No. 2017-058781
Legal status: Active (an assumption, not a legal conclusion)

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B66: HOISTING; LIFTING; HAULING
    • B66B: ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B 13/00: Doors, gates, or other apparatus controlling access to, or exit from, cages or lift well landings
    • B66B 13/02: Door or gate operation
    • B66B 13/14: Control systems or devices
    • B66B 13/143: Control systems or devices, electrical
    • B66B 13/146: Control systems or devices, electrical; method or algorithm for controlling doors
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Indicating And Signalling Devices For Elevators (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Elevator Door Apparatuses (AREA)
  • Cage And Drive Apparatuses For Elevators (AREA)
  • Maintenance And Inspection Apparatuses For Elevators (AREA)

Abstract

The invention relates to an elevator boarding detection system that prevents erroneous user detection caused by the imaging environment when users are detected from images captured by a camera. The boarding detection system comprises a camera (12), a user detection unit (22), a door opening/closing control unit (31), a parameter acquisition unit (23), and a malfunction prevention unit (24). The parameter acquisition unit (23) acquires parameters related to brightness adjustment from the camera (12). The malfunction prevention unit (24) determines the brightness of the captured image used by the user detection unit (22) on the basis of the parameters obtained by the parameter acquisition unit (23), and performs processing to prevent malfunction of the user detection unit (22) on the basis of the determination result.

Description

Elevator riding detection system
The present application is based on and claims priority from Japanese Patent Application No. 2017-058781 (filed March 24, 2017), the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present invention relate to a boarding detection system that detects users boarding an elevator car.
Background
In general, when an elevator car arrives at a hall and its doors open, the doors close after a predetermined time has elapsed and the car departs. The user, however, does not know when the doors will close, so a user entering the car from the hall may collide with, or be caught by, the closing doors.
To prevent users from colliding with or being caught by the doors when boarding, there is a known technique that uses a camera to detect a user moving from the hall toward the car and reflects the detection result in the door opening/closing control.
Disclosure of Invention
However, when the hall is extremely dark or extremely bright, a user may not be detectable accurately from the camera image. In such cases, the detection accuracy degrades and the doors may malfunction. For example, when the hall is too bright, shadows of people passing through the hall appear in the captured image and are falsely detected as users, causing the doors to be held open. Conversely, when the hall is too dark and a user cannot be distinguished from the floor surface, the doors close after the predetermined time even though a user intending to board is approaching.
The present invention has been made to solve the above problems. An object of the present invention is to provide a boarding detection system that, when detecting users from camera images, prevents erroneous user detection caused by the imaging environment and reflects the user's movement in the door opening/closing control only when the user can be detected accurately.
An elevator boarding detection system according to one embodiment comprises: an imaging unit capable of imaging a predetermined range extending from the vicinity of the car door toward the hall when the car arrives at the hall; a user detection unit that detects users using the images captured by the imaging unit; a control unit that controls the opening and closing of the doors according to the detection result of the user detection unit; a parameter acquisition unit that acquires parameters related to brightness adjustment from the imaging unit; and a malfunction prevention unit that determines the brightness of the captured image used by the user detection unit based on the parameters obtained by the parameter acquisition unit, and performs processing to prevent malfunction of the user detection unit based on the determination result.
According to the boarding detection system of the above configuration, when users are detected from camera images, erroneous user detection caused by the imaging environment is prevented, and the user's movement is reflected in the door opening/closing control only when the user can be detected accurately.
Drawings
Fig. 1 is a diagram showing a configuration of an elevator boarding detection system of an elevator according to embodiment 1.
Fig. 2 is a diagram showing an example of an image captured by a camera in this embodiment.
Fig. 3 is a diagram showing a state in which a photographed image is divided in units of blocks in the embodiment.
Fig. 4 is a diagram for explaining the detection region in the real space in this embodiment.
Fig. 5 is a diagram for explaining a coordinate system in real space in this embodiment.
Fig. 6 is a flowchart showing the user detection process performed by the boarding detection system in this embodiment while the doors are fully open.
Fig. 7 is a flowchart showing the user detection operation performed by the boarding detection system in this embodiment while the doors are closing.
Fig. 8 is a diagram showing an example of a photographed image in the case where the hall is too dark in the embodiment.
Fig. 9 is a diagram showing an example of a photographed image in a case where the hall is too bright in the embodiment.
Fig. 10 is a flowchart showing the malfunction prevention processing of the boarding detection system in this embodiment.
Fig. 11 is a diagram showing an example of a photographed image in the case where the hall is partially too bright or too dark in embodiment 2.
Fig. 12 is a flowchart showing the malfunction prevention processing of the boarding detection system in this embodiment.
Detailed Description
(embodiment 1)
Fig. 1 is a diagram showing a configuration of an elevator boarding detection system of an elevator according to embodiment 1. Note that, although a single car is described as an example here, the same configuration applies to a plurality of cars.
The camera 12 is installed above the doorway of the car 11. Specifically, the lens of the camera 12 faces the hall 15 from the door lintel plate 11a that covers the upper part of the doorway of the car 11. The camera 12 is a small monitoring camera such as an in-vehicle camera; it has a wide-angle lens and can capture images continuously at several frames per second (for example, 30 frames/second). When the car 11 arrives at a floor and the doors open, the camera 12 captures an image of the hall 15 together with the area near the car door 13 inside the car 11.
The imaging range at this time is adjusted to L1 + L2 (L1 > L2). L1 is the imaging range on the hall side, for example 3 m from the car door 13 toward the hall 15. L2 is the imaging range on the car side, for example 50 cm from the car door 13 toward the back of the car. L1 and L2 are ranges in the depth direction; the range in the width direction (the direction orthogonal to the depth direction) is set at least larger than the width of the car 11.
In the hall 15 of each floor, a hall door 14 is installed at the landing entrance of the car 11 so that it can open and close. When the car 11 arrives, the hall door 14 engages with the car door 13 and opens and closes together with it. The power source (door motor) is on the car 11 side, and the hall doors 14 merely follow the car doors 13. In the following description, the hall doors 14 are assumed to be open when the car doors 13 are open, and closed when the car doors 13 are closed.
Each image (video frame) captured by the camera 12 is analyzed in real time by the image processing device 20. Note that, although the image processing device 20 is drawn outside the car 11 in fig. 1 for convenience of explanation, it is actually housed in the door lintel plate 11a together with the camera 12.
Here, the image processing device 20 includes a storage unit 21 and a user detection unit 22. The storage unit 21 sequentially stores the images captured by the camera 12 and has a buffer area for temporarily holding the data needed for processing by the user detection unit 22. The user detection unit 22 focuses on the movement of the person or object nearest the car door 13 across a plurality of time-series images captured by the camera 12, and detects the presence or absence of a user who intends to board. Functionally, the user detection unit 22 is divided into a motion detection unit 22a, a position estimation unit 22b, and a boarding intention estimation unit 22c.
The motion detection unit 22a compares the luminance of successive images in units of blocks to detect the motion of a person or object. "Motion of a person or object" here means the movement of a moving body such as a person or a wheelchair in the hall 15.
The position estimation unit 22b extracts, from the blocks in which the motion detection unit 22a detected motion in each image, the block closest to the car door 13. It then estimates the coordinate of that block in the hall direction from the center of the car door 13 (the center of the door width), i.e. the Y coordinate shown in fig. 5, as the position of the user (foot position). The boarding intention estimation unit 22c determines whether the user intends to board based on the time-series change of the position estimated by the position estimation unit 22b.
Here, the image processing apparatus 20 according to the present embodiment includes a parameter acquisition unit 23 and an erroneous operation prevention unit 24 in addition to the above configuration.
The parameter acquisition unit 23 acquires parameters related to brightness adjustment from the camera 12. The "parameters related to brightness adjustment" are, specifically, the exposure time, gain, aperture amount, and the like. These parameters are automatically adjusted to optimum values according to the brightness at the time of shooting.
The malfunction prevention unit 24 determines the brightness of the captured image used by the user detection unit 22 based on the parameters obtained by the parameter acquisition unit 23, and performs processing to prevent malfunction of the user detection unit 22 based on the determination result. Specifically, the malfunction prevention unit 24 calculates an index value LU indicating the brightness of the captured image from one or more of the exposure time, gain, and aperture amount obtained as the parameters related to brightness adjustment. It then performs the malfunction prevention processing when the index value LU is higher than a preset upper limit THa or lower than a preset lower limit THb.
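The check above can be sketched in a few lines. The patent only states that LU is derived from one or more of the exposure time, gain, and aperture amount and compared against THa and THb; the formula, function names, and threshold values below are assumptions for illustration.

```python
def brightness_index(exposure_time_s: float, gain_db: float, f_number: float) -> float:
    """Estimate scene brightness (the index value LU) from the camera's
    auto-adjusted parameters.

    A dark scene forces a long exposure, a high gain, and a wide aperture
    (small f-number), so the index falls; a bright scene does the opposite.
    The exact weighting here is a hypothetical choice, not the patent's.
    """
    # Longer exposure / higher gain / smaller f-number => darker scene => lower LU.
    return 1.0 / (exposure_time_s * (10 ** (gain_db / 20.0)) / (f_number ** 2))


def detection_is_reliable(lu: float, th_upper: float, th_lower: float) -> bool:
    """Return True only when the image is neither too bright (LU > THa)
    nor too dark (LU < THb), i.e. user detection can be trusted."""
    return th_lower <= lu <= th_upper
```

When `detection_is_reliable` returns False, the system would run its malfunction prevention processing instead of feeding the detection result to the door control.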
In addition, some or all of the functions (the user detection unit 22, the parameter acquisition unit 23, and the malfunction prevention unit 24) provided in the image processing device 20 may be provided in the camera 12, or may be provided in the car control device 30.
The car control device 30 is connected to an elevator control device, not shown, and transmits and receives various signals such as a hall call and a car call to and from the elevator control device. The "hall call" is a signal of a call registered by operating a hall call button, not shown, provided in the hall 15 of each floor, and includes information on the registered floor and destination direction. The "car call" is a signal of a call registered by operating a destination call button, not shown, provided in the car room of the car 11, and includes information on a destination floor.
The car control device 30 includes a door opening/closing control unit 31. The door opening/closing control unit 31 controls the opening and closing of the car doors 13 when the car 11 arrives at the hall 15. Specifically, it opens the car doors 13 when the car 11 arrives at the hall 15 and closes them after a predetermined time has elapsed. However, when the car doors 13 are open and the user detection unit 22 of the image processing device 20 detects a person with boarding intention, the door opening/closing control unit 31 prohibits the door closing operation of the car doors 13 and maintains the open state.
Fig. 2 is a diagram showing an example of an image captured by the camera 12. In the figure, E1 denotes the position estimation area, and yn denotes the Y coordinate of the detected position of the user's foot. Fig. 3 is a diagram showing a captured image divided in units of blocks. A square region of side Wblock obtained by dividing the original image into a grid is referred to as a "block".
The camera 12 is installed above the doorway of the car 11. Therefore, when the car 11 opens its doors at the hall 15, a predetermined range on the hall side (L1) and a predetermined range inside the car (L2) are captured. The camera 12 widens the detection range, so even a user located some distance from the car 11 can be detected. On the other hand, however, a person who merely passes through the hall 15 (a person who will not board the car 11) may be falsely detected and hold the car doors 13 open.
Therefore, as shown in fig. 3, the present system divides the image captured by the camera 12 into blocks of a fixed size, detects the blocks in which a person or object moves, tracks those blocks, and determines whether the user intends to board.
In the example of fig. 3, the blocks have equal vertical and horizontal lengths, but they may differ. The blocks may have a uniform size over the entire image, or a non-uniform size, for example becoming shorter in the vertical direction (Y direction) toward the top of the image. This allows the estimated foot position to be obtained with a higher, or uniform, resolution in real space (with uniform division, the resolution becomes sparser the farther a point is from the car door 13 in real space).
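The block division described above can be sketched as follows. This is a minimal illustration assuming uniformly sized square blocks and a grayscale image given as a 2-D list of luminance values; how partial edge blocks are handled is not specified by the text, so they are simply discarded here.

```python
def split_into_blocks(image, w_block):
    """Divide a grayscale image (2-D list of luminance values) into
    w_block x w_block blocks and return a grid of per-block average
    luminance values.

    Rows and columns that do not fill a whole block are discarded,
    which is an assumption made for simplicity.
    """
    h, w = len(image), len(image[0])
    rows, cols = h // w_block, w // w_block
    grid = []
    for br in range(rows):
        row = []
        for bc in range(cols):
            # Sum the luminance of every pixel inside this block.
            total = 0
            for y in range(br * w_block, (br + 1) * w_block):
                for x in range(bc * w_block, (bc + 1) * w_block):
                    total += image[y][x]
            row.append(total / (w_block * w_block))
        grid.append(row)
    return grid
```

The per-block averages produced here are what the motion detection unit later compares frame to frame.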
Fig. 4 is a diagram for explaining a detection region in a real space. Fig. 5 is a diagram for explaining a coordinate system in real space.
To detect the movement of a user who intends to board from the captured image, user detection areas are first set in units of blocks. Specifically, as shown in fig. 4, at least a position estimation area E1 and a boarding intention estimation area E2 are set as user detection areas.
The position estimation area E1 is an area for estimating the position of a part of the body of a user moving from the hall 15 toward the car door 13, specifically the position of the user's feet. The boarding intention estimation area E2 is an area for estimating whether a user detected in the position estimation area E1 intends to board. The boarding intention estimation area E2 is contained within the position estimation area E1 and is also an area in which the user's foot position is estimated. That is, in the boarding intention estimation area E2, both the user's foot position and the user's boarding intention are estimated.
In real space, the position estimation area E1 extends a distance L3 from the center of the car door 13 toward the hall, set for example to 2 m (L3 ≤ imaging range L1 on the hall side). The lateral width W1 of the position estimation area E1 is set to a distance equal to or greater than the lateral width W0 of the car doors 13. The boarding intention estimation area E2 extends a distance L4 from the center of the car door 13 toward the hall, set for example to 1 m (L4 ≤ L3). The lateral width W2 of the boarding intention estimation area E2 is set to roughly the same distance as the lateral width W0 of the car doors 13.
The lateral width W2 of the boarding intention estimation area E2 may equal the lateral width W0 or be slightly wider. In real space, the position estimation area E1 and the boarding intention estimation area E2 need not be rectangular as shown by the broken lines in the figure; they may be trapezoidal, excluding the blind spots on both sides of the entrance frame (three-sided frame) on the hall side.
The user detection area may be further subdivided by setting an approach detection area E3 immediately in front of the car door 13.
The approach detection area E3 is an area in which the user's foot position is estimated and the user is treated as intending to board, regardless of whether the foot position has been approaching the car door 13 from the hall 15. The approach detection area E3 extends a distance L5 from the center of the car door 13 toward the hall, set for example to 30 cm (L5 ≤ L4). Its lateral width is set equal to the lateral width W2 of the boarding intention estimation area E2.
The relationship between each user detection area (E1, E2, E3) and the door opening/closing control process performed by the car control device 30 is as follows.
The position estimation area E1 is an area whose detections are never reflected in the door opening/closing control performed by the car control device 30, even if a user is detected there.
The boarding intention estimation area E2 is an area whose detections are reflected in the door opening/closing control performed by the car control device 30 when a user is detected there and that user intends to board. Specifically, the car control device 30 extends the door-open time, for example.
The approach detection area E3 is an area whose detections are reflected in the door opening/closing control performed by the car control device 30 based solely on a user being detected there. Specifically, the car control device 30 extends the door-open time, for example.
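The relationship between the three areas and the door control can be sketched as a single decision function. The distances L3 = 2.0 m, L4 = 1.0 m, and L5 = 0.3 m are the example values from the text; the function name and the string return values are hypothetical.

```python
# Depths of areas E1, E2, E3 from the car-door center, in metres
# (example values from the text: L5 <= L4 <= L3).
L3, L4, L5 = 2.0, 1.0, 0.3


def door_action(y_position_m: float, has_boarding_intention: bool) -> str:
    """Decide the door-control response for a user whose foot position is
    y_position_m from the car-door center toward the hall."""
    if y_position_m <= L5:
        # E3: proximity alone suffices; boarding intention is not checked.
        return "extend-door-open"
    if y_position_m <= L4 and has_boarding_intention:
        # E2: the detection is reflected only if boarding intention was estimated.
        return "extend-door-open"
    if y_position_m <= L3:
        # E1: the position is tracked but never reflected in door control.
        return "track-only"
    return "ignore"
```

The nesting E3 ⊂ E2 ⊂ E1 is expressed by checking the smallest area first.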
Here, as shown in fig. 5, the camera 12 captures images in a coordinate system in which the direction horizontal to the car door 13 at the doorway of the car 11 is the X axis, the direction from the center of the car door 13 toward the hall 15 (perpendicular to the car door 13) is the Y axis, and the height direction of the car 11 is the Z axis. In each image captured by the camera 12, the position estimation area E1 and the boarding intention estimation area E2 shown in fig. 4 are compared in units of blocks, and the movement of the user's foot position in the direction from the center of the car door 13 toward the hall 15, i.e. the Y-axis direction, is detected.
Next, the operation of the present system will be described in detail.
(User detection operation when the doors are fully open)
Fig. 6 is a flowchart showing the user detection process performed by the present system while the doors are fully open.
When the car 11 reaches the waiting hall 15 at any floor (yes in step S11), the car control device 30 opens the car doors 13 and waits for a user to get into the car 11 (step S12).
At this time, the camera 12 provided at the upper part of the doorway of the car 11 captures an image of a predetermined range (L1) on the lobby side and a predetermined range (L2) in the car at a predetermined frame rate (e.g., 30 frames/second). The image processing device 20 acquires images captured by the camera 12 in time series, sequentially stores the images in the storage unit 21 (step S13), and executes the following user detection processing in real time (step S14).
The user detection process is executed by the user detection unit 22 provided in the image processing device 20, and is divided into a motion detection process (step S14a), a position estimation process (step S14b), and a boarding intention estimation process (step S14c), described below.
(a) Motion detection process
This motion detection process is executed by the motion detection unit 22a, one of the components of the user detection unit 22.
The motion detection unit 22a reads out the images held in the storage unit 21 one by one and calculates the average luminance value of each block. When the first image is input, the motion detection unit 22a stores the per-block average luminance values as initial values in a first buffer area (not shown) in the storage unit 21. For the second and subsequent images, the motion detection unit 22a compares the average luminance value of each block of the current image with that of the preceding image held in the first buffer area. If the current image contains a block whose luminance difference is equal to or greater than a preset value, the motion detection unit 22a determines that the block contains motion.
After judging motion for the current image, the motion detection unit 22a stores the average luminance value of each block of that image in the first buffer area, to be compared with the next image. Thereafter, the motion detection unit 22a similarly compares the luminance values of the images captured by the camera 12 in units of blocks in time-series order, and determines whether motion is present.
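The frame-to-frame comparison above can be sketched as follows, assuming the per-block average luminance grids of two consecutive frames are available. The function name and the set-of-indices return value are illustrative; the `threshold` parameter corresponds to the preset luminance-difference value in the text.

```python
def detect_motion(prev_blocks, curr_blocks, threshold):
    """Return the set of (row, col) block indices whose average luminance
    changed by at least `threshold` between two consecutive frames.

    prev_blocks / curr_blocks are 2-D grids of per-block average
    luminance, as produced by the block-division step.
    """
    moving = set()
    for r, (prev_row, curr_row) in enumerate(zip(prev_blocks, curr_blocks)):
        for c, (p, q) in enumerate(zip(prev_row, curr_row)):
            # A block "contains motion" when its luminance difference
            # reaches the preset value.
            if abs(q - p) >= threshold:
                moving.add((r, c))
    return moving
```

In the described system, `curr_blocks` would then replace `prev_blocks` in the first buffer area before the next frame arrives.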
(b) Position estimation process
This position estimation process is executed by the position estimation unit 22b, one of the components of the user detection unit 22.
The position estimation unit 22b checks the blocks in which motion exists in the current image based on the detection result of the motion detection unit 22a. If a moving block exists in the position estimation area E1 shown in fig. 4, the block closest to the car door 13 among the moving blocks is extracted.
Here, as shown in fig. 1, the camera 12 is installed above the doorway of the car 11 facing the hall 15. Therefore, when a user moves from the hall 15 toward the car door 13, the user's right or left foot is highly likely to appear in the moving block nearest the near side of the captured image, that is, the car door 13 side. The position estimation unit 22b therefore takes the Y coordinate of the moving block closest to the car door 13 (the coordinate in the hall 15 direction from the center of the car door 13) as the user's foot position and holds it in a second buffer area (not shown) in the storage unit 21.
Thereafter, the position estimation unit 22b similarly obtains, for each image, the Y coordinate of the moving block closest to the car door 13 as the user's foot position and stores it in the second buffer area. This foot position estimation is performed in the same manner not only in the position estimation area E1 but also in the boarding intention estimation area E2.
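Selecting the moving block nearest the car door can be sketched as below. The calibration from a block's image-grid index to its real-space Y coordinate is assumed to exist (the text does not specify it), so it is passed in as a precomputed mapping; the function name is hypothetical.

```python
def estimate_foot_position(moving_blocks, block_to_y):
    """Return the real-space Y coordinate (distance from the car-door
    center toward the hall) of the moving block nearest the car door,
    or None if no moving block lies inside the calibrated area.

    moving_blocks: set of (row, col) block indices flagged as moving.
    block_to_y:    assumed calibration dict mapping (row, col) to the
                   block's real-space Y coordinate in metres.
    """
    ys = [block_to_y[b] for b in moving_blocks if b in block_to_y]
    # The block nearest the car door has the smallest Y coordinate.
    return min(ys) if ys else None
```

Running this per frame yields the time series of foot positions that the boarding intention estimation step consumes.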
(c) Boarding intention estimation process
This boarding intention estimation process is executed by the boarding intention estimation unit 22c, one of the components of the user detection unit 22.
The boarding intention estimation unit 22c smooths the user's foot position data for each image held in the second buffer area. As the smoothing method, a generally known method such as a mean filter or a Kalman filter is used; a detailed description is omitted here.
When smoothing the foot position data, the boarding intention estimation unit 22c excludes as an abnormal value any sample whose change exceeds a predetermined value. The predetermined value is determined from a standard walking speed and the frame rate of the captured images. The abnormal values may also be found and removed before the foot position data is smoothed.
When a user moves from the hall 15 toward the car door 13, the Y coordinate of the user's foot position gradually decreases with time. For a moving body such as a wheelchair, the data changes linearly, as shown by the broken line; for a walking user, the left and right feet are detected alternately, so the data changes in a curved shape, as shown by the solid line. If noise enters the detection result, the instantaneous change in foot position becomes large, and such foot position data is excluded as an abnormal value.
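The outlier exclusion and smoothing can be sketched as follows. The text names mean and Kalman filters as options; a simple moving average stands in for either here. The `max_step` value would be derived from a standard walking speed divided by the frame rate, and the window size is an assumed tuning value.

```python
def exclude_outliers(y_positions, max_step):
    """Drop foot-position samples whose jump from the previous kept
    sample exceeds max_step (the predetermined value derived from a
    standard walking speed and the frame rate)."""
    cleaned = []
    for y in y_positions:
        if cleaned and abs(y - cleaned[-1]) > max_step:
            continue  # treat as noise (an abnormal value) and skip it
        cleaned.append(y)
    return cleaned


def smooth(y_positions, window=3):
    """Trailing moving-average filter over the foot-position series
    (standing in for the mean or Kalman filter named in the text)."""
    out = []
    for i in range(len(y_positions)):
        lo = max(0, i - window + 1)
        seg = y_positions[lo : i + 1]
        out.append(sum(seg) / len(seg))
    return out
```

For example, with a 1.4 m/s walking speed at 30 frames/second, a plausible `max_step` would be a few times 1.4 / 30 ≈ 0.05 m.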
The boarding intention estimation unit 22c then examines the movement of the foot position (the change in the data) within the boarding intention estimation area E2 shown in fig. 2. If movement of the user's foot position toward the car door 13 in the Y-axis direction can be confirmed in the boarding intention estimation area E2, the boarding intention estimation unit 22c determines that the user intends to board.
Conversely, if such movement cannot be confirmed, the boarding intention estimation unit 22c determines that the user does not intend to board. For example, when a person crosses in front of the car 11 in the X-axis direction, a foot position whose Y coordinate does not change with time is detected in the boarding intention estimation area E2; in such a case, it is determined that there is no boarding intention.
In this way, by treating the moving block closest to the car door 13 as the user's foot position and tracking its temporal change in the Y-axis direction, it is possible to estimate whether the user intends to board.
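The intention judgment on the cleaned Y series can be sketched as a simple decrease test. The function name and the `min_approach` margin are assumptions; the text only requires that Y movement toward the door be confirmed, which this minimal version expresses as a net decrease over the observed window.

```python
def has_boarding_intention(y_series, min_approach):
    """Judge boarding intention from the time series of foot positions
    inside area E2.

    A user walking toward the door makes Y decrease over time; a person
    crossing in the X direction leaves Y roughly constant, so no
    intention is reported. min_approach (metres) is an assumed margin
    guarding against jitter.
    """
    if len(y_series) < 2:
        return False  # not enough samples to confirm movement
    return (y_series[0] - y_series[-1]) >= min_approach
```

A real implementation might instead fit the slope of the series or require monotone decrease over several consecutive frames.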
When a user who intends to board is detected (yes in step S15), a user detection signal is output from the image processing device 20 to the car control device 30. Upon receiving the user detection signal, the car control device 30 prohibits the door closing operation of the car doors 13 and maintains the door-open state (step S16).
Specifically, when the car doors 13 reach the fully open state, the car control device 30 starts counting the door-open time and closes the doors once a predetermined time T (for example, 1 minute) has been counted. If a user who intends to board is detected during this period and a user detection signal arrives, the car control device 30 stops the counting operation and clears the count value, so that the open state of the car doors 13 is maintained for another period T.
If a new user with boarding intention is detected during that period, the count value is cleared again and the doors remain open for a further period T. However, if users keep being detected, the car doors 13 can never close, so it is preferable to provide an allowable time Tx (for example, 3 minutes) and forcibly close the car doors 13 when Tx has elapsed.
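The timer behaviour above can be sketched as a pure function over detection times. This is an interpretation of the described counting, clearing, and forced-close rules; the function name is hypothetical, and T = 60 s and Tx = 180 s are the example values from the text.

```python
def door_close_time(detections, T=60.0, Tx=180.0):
    """Return the time (seconds after full open) at which the doors
    close, given the times at which users with boarding intention
    were detected.

    Each detection that arrives while the doors are still open clears
    the count and restarts the T-second timer; the allowable time Tx
    forces the doors closed regardless of further detections.
    """
    close_at = T
    for t in sorted(detections):
        if t < close_at:          # detection arrives while doors still open
            close_at = t + T      # count cleared: timer restarts from t
    return min(close_at, Tx)      # forced close at the allowable time Tx
```

For instance, detections at 30 s and 80 s push the close time to 140 s, while a steady stream of detections is capped at Tx.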
When the counting of the time T is completed (step S17), the car control device 30 closes the car doors 13 and moves the car 11 to the destination floor (step S18).
When the approach detection area E3 shown in fig. 4 is set and the user's foot position is confirmed within it, the boarding intention estimation unit 22c treats the user as intending to board, regardless of whether the foot position has been approaching the car door 13 from the hall 15.
In this way, by analyzing the image of the hall 15 captured by the camera 12 installed above the doorway of the car 11, it is possible to detect a user heading for the car door 13 even from a position some distance away from the car 11, and to reflect this in the door opening/closing operation.
In particular, by focusing on the user's foot position in the captured image and tracking its temporal change in the direction from the car door 13 toward the hall 15 (the Y-axis direction), false detection of a person who merely passes near the car can be prevented. Only users who actually intend to board are accurately detected, and the detection result is reflected in the door opening/closing operation. In this case, the door-open state is maintained while a user with boarding intention is being detected, so the situation in which the door closing operation starts and the user hits the door just as the user is about to enter the car 11 can be avoided.
(detection action of user in door closing process)
In the flowchart of fig. 6, the description assumed a state in which the car doors 13 of the car 11 are open at the waiting hall 15; however, the presence or absence of a user having an intention to take the elevator can also be detected from the image captured by the camera 12 while the car doors 13 are closing. When such a user is detected, the door opening/closing control unit 31 of the car control device 30 interrupts the door closing operation of the car doors 13 and performs the door opening operation again.
Fig. 7 is a flow chart showing the user detection action during the door closing process in the present system.
When a predetermined time has elapsed from a state in which the car doors 13 of the car 11 are fully opened, the door closing operation is started by the door opening/closing control unit 31 (step S21). At this time, the photographing operation of the camera 12 is continued. The image processing device 20 acquires images captured by the camera 12 in time series, sequentially stores the images in the storage unit 21 (step S22), and executes user detection processing in real time (step S23).
The user detection process is executed by the user detection unit 22 provided in the image processing apparatus 20. The user detection process is divided into an operation detection process (step S23a), a position estimation process (step S23b), and a boarding intention estimation process (step S23 c). Since these processes are the same as steps S14a, S14b, and S14c in fig. 6, detailed description thereof will be omitted.
Here, when a user who intends to take the elevator is detected (yes in step S24), a user detection signal is output from the image processing device 20 to the car control device 30. When receiving the user detection signal during the door closing process, the car control device 30 interrupts the door closing operation of the car doors 13 and performs the door opening operation (re-opening) again (step S25).
Thereafter, the same processing as described above is repeated. However, if users having an intention to take the elevator are detected one after another while the door is closing, the door is repeatedly reopened and the departure of the car 11 is delayed. Therefore, it is preferable to close the door without reopening it once the allowable time Tx (for example, 3 minutes) has elapsed, even if such a user is detected.
When the proximity detection area E3 shown in fig. 4 is set, the following processing is performed.
That is, when the position of the user's feet is confirmed in the approach detection area E3, the boarding intention estimating unit 22c treats the user as having a boarding intention regardless of whether the user's feet are approaching the car door 13 from the hall 15.
Thus, the presence or absence of a user intending to take the elevator can be detected even while the door is closing, and reflected in the door opening/closing operation. A situation in which the door strikes the user just as the user is about to board the car 11 during door closing can therefore be avoided.
(user detection action during full closure or door opening: Pull-in detection)
When the car door 13 is fully closed, or while the car door 13 is in the process of opening, a user detection area is set on the car 11 side. Although not shown, this user detection area is set with a predetermined depth (L6) from the center of the car door 13 toward the car 11, for example 30 cm (L6 ≦ imaging range L2 on the car 11 side). Its lateral width is substantially the same as the lateral width W0 of the car door 13.
This car-side user detection area is used as a "pull-in detection area"; when a user is detected in this area, the result is reflected in the door opening/closing control performed by the car control device 30. Specifically, the car control device 30 performs processing such as prohibiting the door opening operation of the car doors 13, slowing the door opening speed of the car doors 13, or announcing a message urging the user to move away from the car doors 13.
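The geometry of the pull-in detection area just described (depth L6 from the door center toward the car, lateral width equal to the door width W0) can be sketched as a simple point-in-rectangle test. The coordinate convention, function name, and the 0.8 m door width are assumptions for illustration; the 0.3 m depth follows the text's example.

```python
def in_pull_in_area(x, y, door_width=0.8, depth=0.3):
    """(x, y): position in metres, origin at the door centre; x is lateral,
    y is positive toward the car interior. Returns True if the point lies
    inside the pull-in detection area (depth L6, width W0).
    Illustrative sketch; coordinate frame and door width are assumptions."""
    return 0.0 <= y <= depth and abs(x) <= door_width / 2.0
```

A detected foot position inside this rectangle would trigger the protective measures above (inhibited or slowed door opening, warning announcement).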
(erroneous operation prevention processing)
In the present system, the user is detected from the image captured by the camera 12, so the state of the captured image affects the detection accuracy. For example, when the hall 15 is too dark as shown in fig. 8, or too bright as shown in fig. 9 with large shadows of people cast into the image, the user cannot be correctly detected from the photographed image and the possibility of a malfunction increases. The same applies when the interior of the car 11 is too dark or too bright. Therefore, in the present system, when the image captured by the camera 12 is too dark or too bright, malfunction is prevented by temporarily suspending the detection processing (sensing), prohibiting the output of the detection result, or reducing the sensitivity of the detection processing.
Fig. 10 is a flowchart showing the erroneous operation prevention processing in the present system. The erroneous operation prevention processing is executed after the captured image acquisition at step S13 in the flowchart of fig. 6, and is executed after the captured image acquisition at step S22 in the flowchart of fig. 7.
First, as a premise, the camera 12 has a function of automatically adjusting one or more of the exposure time, the gain, and the opening amount so as to capture images at a predetermined brightness. This automatic adjustment function may instead be installed in a device external to the camera.
Opening amount: commonly called the "aperture" (F-number). The larger the F-number, the smaller the opening amount. Here the aperture is expressed as an "opening amount", so that a larger value means more captured light, that is, a brighter image. For simplicity of explanation, it is assumed that doubling the opening amount doubles the amount of light captured.
In general, the amount of light obtained is halved each time the F-number is multiplied by about 1.4 (√2), so in practice the opening amount is taken as 1/(F-number)². The opening amount is dimensionless (the aperture itself is referred to here as the F value for convenience). The range is, for example, 1/4 to 1/64 (F-number 2 to 8).
Exposure time: also known as the shutter speed. The unit is seconds. The range is, for example, 1/1000 to 1/30.
Gain: the factor by which the input signal is multiplied. Dimensionless (a magnification). The range is, for example, 1 to 30.
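The F-number relationship used for the opening amount can be sketched as follows; the function name is illustrative, and the conversion simply encodes the 1/(F-number)² model stated in the text.

```python
def aperture_amount(f_number):
    """Convert an F-number to the dimensionless "opening amount" used in the
    brightness index: halving of light per 1.4x (~sqrt(2)) F-number step
    corresponds to aperture_amount = 1 / F^2. Illustrative name."""
    return 1.0 / (f_number ** 2)
```

For example, F2 gives an opening amount of 1/4 and F8 gives 1/64, matching the stated range; stepping the F-number by about 1.4 halves the result.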
Here, the brightness of the image captured by the camera 12 is quantified; this is referred to as the "brightness index value". From the operating principle of the camera, the luminance values of the photographed image obey the following proportional relationship.
Luminance value ∝ (physical) light amount × opening amount × exposure time × gain
If the light amount is regarded as brightness, the brightness index value is obtained by the following expression (1).
Brightness index value = luminance value / (opening amount × exposure time × gain) … (1)
The parameter acquisition unit 23 provided in the image processing apparatus 20 acquires the automatically adjusted values of the exposure time, gain, and opening amount from the camera 12 as parameters related to the brightness adjustment (step S31).
The malfunction prevention unit 24 calculates an index value LU indicating the brightness of the captured image used by the user detection unit 22, based on the parameter obtained by the parameter acquisition unit 23 (step S32). The "brightness of the photographed image" referred to herein specifically means the brightness of the photographing range shown by L1+ L2 in fig. 1.
Here, in the present embodiment, the camera 12 automatically adjusts the exposure time and the gain, while the opening amount is fixed. Since the automatic adjustment keeps the luminance value in expression (1) effectively constant, the malfunction prevention unit 24 obtains the brightness index value LU as follows.
LU = 1/(exposure time × gain)
The malfunction prevention unit 24 compares the brightness index value LU with the preset upper limit value THa and lower limit value THb of brightness (step S33). Here THa > THb; for example, THa = 300 and THb = 3.
If LU > THa or LU < THb, that is, if the entire captured image is too bright or too dark (yes in step S33), the malfunction prevention unit 24 determines that the user cannot be correctly detected from the captured image, and performs a specific process for preventing the malfunction of the user detection unit 22 (step S34). Specifically, the malfunction prevention unit 24 performs one of temporarily stopping the detection process of the user detection unit 22, prohibiting the output of the detection result, and reducing the sensitivity of the detection process.
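The LU computation of step S32 and the threshold check of step S33 can be written compactly as below. This is an illustrative sketch: the function names are assumptions, while the formula LU = 1/(exposure time × gain) and the example thresholds THa = 300, THb = 3 follow the text.

```python
TH_A, TH_B = 300.0, 3.0  # upper / lower brightness limits (text's examples)

def brightness_index(exposure_time, gain):
    # Embodiment 1: opening amount fixed, luminance held constant by
    # the camera's auto adjustment, so LU = 1 / (exposure_time * gain).
    return 1.0 / (exposure_time * gain)

def detection_allowed(exposure_time, gain):
    """False means the image is too bright (LU > THa) or too dark (LU < THb)
    and the malfunction-prevention measures of step S34 should apply."""
    lu = brightness_index(exposure_time, gain)
    return TH_B <= lu <= TH_A
```

Intuitively, a dark scene forces the camera toward long exposure and high gain, driving LU below THb; a bright scene does the opposite, driving LU above THa.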
The "temporarily suspending the detection processing" means that the user detection (sensing) is not performed. That is, the user detection processing is not performed on the captured image (steps S14 to S15 in fig. 6/steps S23 to S24 in fig. 7). In this case, since the detection result is not reflected in the door opening/closing control, the normal door opening/closing control is executed.
The "prohibition of output of the detection result" means that the detection result is invalidated and is not output to the car control device 30 even if the user detection is performed. In this case, the detection result is not reflected in the door opening/closing control, and therefore, the normal door opening/closing control is executed.
By "reducing the sensitivity of the detection process" is meant reducing the accuracy of user detection. Specifically, in the motion detection processing described in step S14a in fig. 6, the threshold of the luminance difference used when comparing, block by block, the luminance values of temporally consecutive captured images is raised above its normal value, which lowers the detection rate of blocks containing motion.
In addition, in the case of LU > THa or LU < THb, the sensitivity of the detection processing for the image may be lowered in stages depending on the value of LU at that time (the sensitivity of the image may be lowered as the value of LU is farther from THa or THb).
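The staged sensitivity reduction mentioned above could look like the sketch below: the further LU lies outside [THb, THa], the more the block-difference threshold used by the motion detector is raised. The scaling rule and the cap are assumptions for illustration, not values from the patent.

```python
def motion_threshold(lu, base_threshold=10.0, th_a=300.0, th_b=3.0):
    """Return the luminance-difference threshold for block-wise motion
    detection, raised in stages as LU moves outside [th_b, th_a].
    Scaling factor and 4x cap are illustrative assumptions."""
    if th_b <= lu <= th_a:
        return base_threshold               # normal sensitivity
    # How far LU lies outside the valid range, as a ratio >= 1.
    excess = lu / th_a if lu > th_a else th_b / lu
    return base_threshold * min(excess, 4.0)  # cap the de-sensitising
```

A mildly over-bright image thus only slightly de-sensitises detection, while a severely over- or under-exposed image approaches the capped maximum.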
As described above, according to embodiment 1, in a system that detects users from images captured by the camera 12, erroneous user detection can be prevented when the captured image is too bright or too dark. The user's motion is thus reflected in the door opening/closing control only when the user can be accurately detected.
(modification example)
As a modification of the above-described embodiment 1, the brightness index value LU may be calculated by the following method.
(1) Case of using average brightness value
The exposure time and gain can be acquired from the camera 12. The malfunction prevention unit 24 obtains the average luminance value of the entire captured image or the region used for detection, and obtains the brightness index value LU as follows.
LU = average luminance value / (exposure time × gain)
(2) Method of using 3 of exposure time, gain and opening amount
The exposure time, gain, and aperture amount can be acquired from the camera 12. The malfunction prevention unit 24 uses these values to obtain the brightness index value LU as follows.
LU = 1/(exposure time × gain × opening amount)
(3) Combinations of values used
The malfunction prevention unit 24 determines the brightness index value LU as follows from the combination of the exposure time, gain, aperture amount, and average brightness.
The case of using exposure time, gain, aperture amount, and average brightness
LU = average luminance value / (exposure time × gain × opening amount)
Case of using exposure time (gain and aperture amount are fixed)
LU = 1/exposure time
Case of using gain (exposure time and aperture amount are fixed)
LU = 1/gain
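The variant formulas of this modification differ only in which parameters are available from the camera (fixed parameters drop out of the denominator, and the average luminance replaces the constant numerator when it can be measured). A single illustrative helper covering all the cases, with assumed names:

```python
def brightness_index(exposure_time=None, gain=None, aperture_amount=None,
                     mean_luminance=None):
    """Compute LU = numerator / (product of available parameters).
    Pass only the parameters the camera exposes; fixed ones are omitted.
    Illustrative sketch: selection logic and names are assumptions."""
    numerator = mean_luminance if mean_luminance is not None else 1.0
    denominator = 1.0
    for p in (exposure_time, gain, aperture_amount):
        if p is not None:
            denominator *= p
    return numerator / denominator
```

For instance, passing only the exposure time reproduces LU = 1/exposure time (gain and opening amount fixed), while passing all four values reproduces the full formula of case (3).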
Even when the brightness index value LU is obtained by the above methods, erroneous operation of user detection can be prevented when the photographed image is too bright or too dark, as in embodiment 1 above.
(embodiment 2)
Next, embodiment 2 will be explained.
In the above-described embodiment 1, the malfunction prevention processing is performed with reference to the brightness of the entire image captured by the camera 12. In embodiment 2, an excessively bright portion or an excessively dark portion is detected in a captured image, and the portion is subjected to the erroneous operation prevention processing.
That is, as shown in fig. 11, a locally bright or dark place may arise from illumination light, natural light, or the like. As described with reference to fig. 3, the present system divides the image captured by the camera 12 into blocks of a constant size, detects blocks in which a person or object moves, and tracks those blocks to determine whether the user has an intention to take the elevator. Therefore, the brightness index value LUi (i is the block index) is calculated for each block, and the malfunction prevention processing is applied to any block whose value is higher than the upper limit THa or lower than the lower limit THb.
The following describes a specific processing operation.
Fig. 12 is a flowchart showing the erroneous operation prevention processing in embodiment 2. The erroneous operation prevention processing is executed after the captured image acquisition at step S13 in the flowchart of fig. 6, and is executed after the captured image acquisition at step S22 in the flowchart of fig. 7.
As in the case of embodiment 1, the camera 12 has a function of automatically adjusting 1 or more of the exposure time, gain, and aperture amount as a function for capturing images at a predetermined brightness. Further, the automatic adjustment function may be installed in a device external to the camera.
The parameter acquisition unit 23 provided in the image processing apparatus 20 acquires the automatically adjusted values of the exposure time, gain, and opening amount from the camera 12 as parameters related to the brightness adjustment (step S41).
In embodiment 2, the malfunction prevention unit 24 divides the captured image used by the user detection unit 22 into blocks of a predetermined size (step S42), and performs the following processing for each of the blocks. Further, the processing of steps S43 to S46 enclosed by the broken line in the figure is executed in each block.
That is, first, the malfunction prevention unit 24 obtains the luminance value of each block (step S43). In this case, for each block, a value obtained by averaging luminance values of pixels belonging to the block is obtained as a luminance value of the block.
Next, the malfunction prevention unit 24 obtains the brightness index value LUi for each block unit as follows using the luminance value obtained for each block (step S44).
LUi = luminance value of block i / (exposure time × gain)
Here, the exposure time and the gain are automatically adjusted by the camera 12, and the opening amount is fixed; i is the block index.
The malfunction prevention unit 24 compares the brightness index value LUi of each block with the preset upper limit value THa and lower limit value THb of brightness (step S45). Here THa > THb; for example, THa = 300 and THb = 3.
If LUi > THa or LUi < THb, that is, if the portion of the imaging range corresponding to the block is too bright or too dark (yes in step S45), the malfunction prevention unit 24 determines that the user cannot be correctly detected from that portion of the captured image, and performs a specific process for preventing malfunction of the user detection unit 22 (step S46). Specifically, the malfunction prevention unit 24 performs any one of: temporarily stopping the detection processing for the block by the user detection unit 22, prohibiting the output of the detection result for the block, and reducing the sensitivity of the detection processing for the block.
The phrase "temporarily stopping the detection processing for the block" means that the user is not detected (sensed) for an excessively bright portion or an excessively dark portion in the captured image. That is, the user detection processing is not performed on the excessively bright portion or the excessively dark portion in the captured image (processing in steps S14 to S15 in fig. 6/steps S23 to S25 in fig. 7). In this case, if a user is detected in another part, the detection result is output to the car control device 30 and reflected in the door opening/closing control.
The phrase "prohibiting the output of the detection result for the block" means that even if the user detects an excessively bright portion or an excessively dark portion in the captured image, the detection result is invalidated and is not output to the car control device 30. In this case, if a user is detected in another part of the photographed image, the detection result is output to the car control device 30 and reflected in the door opening/closing control.
"Reducing the sensitivity of the detection process for the block" means reducing the accuracy of user detection. Specifically, in the motion detection processing described in step S14a in fig. 6, the threshold of the luminance difference used when comparing the luminance values of temporally consecutive captured images is raised above its normal value for that block only, which lowers the detection rate of motion in the excessively bright or excessively dark portion.
In the case where LUi > THa or LUi < THb, the sensitivity of the detection process for the block may be lowered in stages according to the value of LUi at that time (the sensitivity for the block may be lowered as the value of LUi is farther from THa or THb).
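The block-wise check of steps S42 to S46 can be sketched end to end as below: divide the image into fixed-size blocks, take each block's mean luminance, form LUi = mean / (exposure time × gain), and flag blocks outside [THb, THa] for suspension or de-sensitising. This is a pure-Python illustration; the function name and block traversal order are assumptions, while the formula and example thresholds follow the text.

```python
def blocks_to_suspend(image, block, exposure_time, gain, th_a=300.0, th_b=3.0):
    """image: 2-D list of pixel luminances; block: block edge in pixels.
    Returns (row, col) indices of blocks judged too bright or too dark,
    i.e. the blocks on which malfunction prevention should act.
    Illustrative sketch of embodiment 2's steps S42-S46."""
    h, w = len(image), len(image[0])
    bad = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            # Mean luminance of the pixels belonging to this block (step S43).
            pixels = [image[y][x]
                      for y in range(by, min(by + block, h))
                      for x in range(bx, min(bx + block, w))]
            mean = sum(pixels) / len(pixels)
            lui = mean / (exposure_time * gain)   # step S44
            if lui > th_a or lui < th_b:          # step S45
                bad.append((by // block, bx // block))
    return bad
```

Detection then proceeds normally on the remaining blocks, so a dark hall and a well-lit car interior can be handled independently, as the text describes.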
As described above, according to embodiment 2, erroneous user detection in excessively bright or excessively dark portions of the imaging range in the hall 15 and the car 11 can be prevented, and the user's motion is reflected in the door opening/closing control only where the user can be correctly detected. Thus, for example, when the waiting hall 15 side is dark as a whole while the interior of the car 11 has an appropriate brightness, sensing can be disabled on the hall 15 side while remaining active inside the car 11.
(modification example)
As a modification of the above-described embodiment 2, the brightness index value LUi in block units may be calculated by the following method.
(1) Method of using 3 of exposure time, gain and opening amount
The exposure time, gain, and aperture amount can be acquired from the camera 12. The malfunction prevention unit 24 uses these values to obtain the brightness index value LUi in block units as follows.
LUi = luminance value of block i / (exposure time × gain × opening amount)
(2) Combinations of values used
The malfunction prevention unit 24 determines the brightness index value LUi as follows from a combination of the exposure time, gain, and opening amount.
Case of using exposure time (gain and aperture amount are fixed)
LUi = luminance value of block i / exposure time
Case of using gain (exposure time and aperture amount are fixed)
LUi = luminance value of block i / gain
Even when the block-unit brightness index value LUi is obtained by the above methods, erroneous operation of user detection can be prevented when the photographed image is locally too bright or too dark, as in embodiment 1 above.
According to at least one of the embodiments described above, it is possible to provide an elevator boarding detection system that, when detecting a user from an image captured by a camera, prevents erroneous user detection caused by the imaging environment and reflects the user's motion in the door opening/closing control only when the user can be correctly detected.
Several embodiments of the present invention have been described, but these embodiments are provided as examples and are not intended to limit the scope of the invention. These novel embodiments can be implemented in other various forms, and various omissions, substitutions, and changes can be made without departing from the spirit of the invention. These embodiments and modifications thereof are included in the scope and gist of the invention, and are included in the invention described in the claims and the equivalent scope thereof.

Claims (7)

1. An elevator riding detection system is characterized by comprising:
an imaging unit that can image a predetermined range in a direction from the vicinity of a door of a car to a waiting hall when the car reaches the waiting hall;
a user detection unit that detects a user using the image captured by the imaging unit;
a control unit for controlling the opening and closing of the door according to the detection result of the user detection unit;
a parameter acquisition unit that acquires a parameter related to brightness adjustment from the imaging unit; and
and an erroneous operation prevention unit that determines brightness of the captured image used by the user detection unit based on the parameter adjusted at the time of capturing the image obtained by the parameter acquisition unit, and performs processing for preventing erroneous operation of the user detection unit when it is determined that the user cannot be correctly detected from the captured image.
2. The elevator boarding detection system according to claim 1,
the parameter includes at least one of exposure time, gain, and aperture opening amount.
3. The elevator boarding detection system according to claim 2,
the malfunction prevention unit calculates an index value indicating brightness of the captured image using 1 or more of the exposure time, the gain, and the opening amount, and performs a process for preventing malfunction of the user detection unit when the index value is higher than a preset upper limit or lower than a preset lower limit.
4. The elevator boarding detection system according to claim 2,
the malfunction prevention unit divides the captured image in units of blocks, calculates an index value indicating brightness in units of the blocks using 1 or more of the exposure time, the gain, and the opening, and performs a process for preventing malfunction of the user detection unit on the block when the index value is higher than a preset upper limit or lower than a preset lower limit.
5. The elevator boarding detection system according to claim 1,
the erroneous operation preventing unit temporarily stops the detection process as a process for preventing erroneous operation of the user detecting unit.
6. The elevator boarding detection system according to claim 1,
the erroneous operation preventing section prohibits the output of the detection result to the control section as a process for preventing the erroneous operation of the user detecting section.
7. The elevator boarding detection system according to claim 1,
as a process for preventing erroneous operation of the user detection section, the erroneous operation prevention section decreases the sensitivity of the detection process.
CN201711474231.6A 2017-03-24 2017-12-29 Elevator riding detection system Active CN108622776B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-058781 2017-03-24
JP2017058781A JP6377797B1 (en) 2017-03-24 2017-03-24 Elevator boarding detection system

Publications (2)

Publication Number Publication Date
CN108622776A CN108622776A (en) 2018-10-09
CN108622776B true CN108622776B (en) 2020-05-01

Family

ID=63249998

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711474231.6A Active CN108622776B (en) 2017-03-24 2017-12-29 Elevator riding detection system

Country Status (4)

Country Link
JP (1) JP6377797B1 (en)
CN (1) CN108622776B (en)
MY (1) MY190207A (en)
SG (1) SG10201800801TA (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6646169B1 (en) * 2019-01-18 2020-02-14 東芝エレベータ株式会社 Elevator system
JP6716741B1 (en) * 2019-03-20 2020-07-01 東芝エレベータ株式会社 Elevator user detection system
JP6693627B1 (en) * 2019-05-16 2020-05-13 東芝エレベータ株式会社 Image processing device
JP6795266B1 (en) * 2019-08-08 2020-12-02 東芝エレベータ株式会社 Elevator user detection system
JP6881853B2 (en) * 2019-08-09 2021-06-02 東芝エレベータ株式会社 Elevator user detection system
JP6896808B2 (en) * 2019-08-09 2021-06-30 東芝エレベータ株式会社 Elevator user detection system
JP6871324B2 (en) * 2019-08-28 2021-05-12 東芝エレベータ株式会社 Elevator user detection system
JP6843935B2 (en) * 2019-09-05 2021-03-17 東芝エレベータ株式会社 Elevator user detection system
JP6833942B1 (en) * 2019-09-10 2021-02-24 東芝エレベータ株式会社 Elevator user detection system
JP6874104B1 (en) * 2019-12-06 2021-05-19 東芝エレベータ株式会社 Elevator system and elevator control method
JP7009537B2 (en) * 2020-03-23 2022-01-25 東芝エレベータ株式会社 Elevator user detection system
JP7019740B2 (en) * 2020-03-23 2022-02-15 東芝エレベータ株式会社 Elevator user detection system
JP6985443B2 (en) * 2020-03-23 2021-12-22 東芝エレベータ株式会社 Elevator user detection system
JP7183457B2 (en) * 2020-03-23 2022-12-05 東芝エレベータ株式会社 Elevator user detection system
JP6968943B1 (en) * 2020-07-15 2021-11-24 東芝エレベータ株式会社 Elevator user detection system
JP7135144B1 (en) * 2021-03-18 2022-09-12 東芝エレベータ株式会社 Elevator user detection system
JP7276992B2 (en) * 2021-08-06 2023-05-18 東芝エレベータ株式会社 Elevator user detection system
JP7242820B1 (en) 2021-12-10 2023-03-20 東芝エレベータ株式会社 Elevator control system and elevator control method
JP7516582B1 (en) 2023-01-12 2024-07-16 東芝エレベータ株式会社 Elevator System

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11261994A (en) * 1998-03-11 1999-09-24 Mitsubishi Electric Corp Object detector and user number detector for elevator
CN103663068A (en) * 2012-08-30 2014-03-26 株式会社日立制作所 Elevator door system and elevator having elevator door system
CN104709782A (en) * 2013-12-12 2015-06-17 株式会社日立制作所 Elevator system
JP6068694B1 (en) * 2016-01-13 2017-01-25 東芝エレベータ株式会社 Elevator boarding detection system
JP6092434B1 (en) * 2016-01-13 2017-03-08 東芝エレベータ株式会社 Elevator system


Also Published As

Publication number Publication date
JP6377797B1 (en) 2018-08-22
SG10201800801TA (en) 2018-10-30
JP2018162118A (en) 2018-10-18
MY190207A (en) 2022-04-04
CN108622776A (en) 2018-10-09

Similar Documents

Publication Publication Date Title
CN108622776B (en) Elevator riding detection system
EP3192762B1 (en) Elevator system
JP6068694B1 (en) Elevator boarding detection system
JP5969147B1 (en) Elevator boarding detection system
JP6139729B1 (en) Image processing device
JP6242966B1 (en) Elevator control system
CN113428752B (en) User detection system for elevator
CN109879130B (en) Image detection system
CN110294391B (en) User detection system
JP2018090351A (en) Elevator system
JP6271776B1 (en) Elevator boarding detection system
JP2018162116A (en) Elevator system
JP2018158842A (en) Image analyzer and elevator system
JP2005126184A (en) Control device of elevator
CN111689324B (en) Image processing apparatus and image processing method
CN110294371B (en) User detection system
CN113428750B (en) User detection system for elevator
CN112340581B (en) User detection system for elevator
CN115703609A (en) Elevator user detection system
CN111453588B (en) Elevator system
JP7135144B1 (en) Elevator user detection system
JP6729980B1 (en) Elevator user detection system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 1259469
Country of ref document: HK
GR01 Patent grant