CN111311786A - Intelligent door lock system and intelligent door lock control method thereof


Info

Publication number: CN111311786A
Application number: CN201811402696.5A
Authority: CN (China)
Prior art keywords: door lock, image data, region of interest, neural network
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 袁坡, 潘生俊, 赵俊能, 丹尼尔马里尼克
Current and original assignee: Hangzhou Eyecloud Technology Co ltd
Application filed by Hangzhou Eyecloud Technology Co ltd
Priority: CN201811402696.5A (published as CN111311786A); US 16/238,489 (published as US 2020/0005573 A1)

Classifications

    • G07C9/00563 Electronically operated locks; circuits therefor; nonmechanical keys, using personal physical data of the operator, e.g. fingerprints, retinal images, voice patterns
    • G06V10/25 Image preprocessing: determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V40/166 Human face detection, localisation or normalisation using acquisition arrangements
    • G06V40/168 Human face feature extraction; face representation
    • G06V40/172 Human face classification, e.g. identification
    • G07C9/00571 Electronically operated locks operated by interacting with a central unit


Abstract

The application relates to an intelligent door lock system and an intelligent door lock control method thereof. The intelligent door lock system integrates a camera system into an electronically controlled door lock to collect image data of a moving object adjacent to the door lock, processes and analyzes the collected image data based on an artificial intelligence algorithm, and, once at least one preset condition is met, sends at least part of the image data to a mobile terminal carried by a user, thereby allowing the user to remotely control the electronically controlled door lock and to monitor the area near the door using the camera system.

Description

Intelligent door lock system and intelligent door lock control method thereof
Technical Field
The present invention relates generally to a door lock system, and more particularly, to an intelligent door lock system having a monitoring function and an intelligent door lock control method thereof.
Background
To ensure personal and property safety, a door lock (e.g., a mechanical door lock, an electronically controlled door lock, etc.) is typically installed on the door of a residence to prevent potential intruders from entering. The unlocking/locking function of a conventional electronically controlled door lock is governed by a security confirmation mechanism such as a password or a fingerprint. For example, when the lock is controlled by fingerprint recognition, after confirming that the input fingerprint matches the pre-registered fingerprint information, the electronically controlled door lock is switched to the unlock position to unlock the door.
In practical applications, existing electronically controlled door locks suffer from several inconveniences. First, without revealing security information, the homeowner must be physically present to supply the password or fingerprint required by the security confirmation mechanism to complete unlocking. However, in some scenarios the homeowner cannot be on site yet needs to unlock remotely, for example when a friend or relative visits while the homeowner is away from home.
Second, current door locks have no monitoring function: when a thief picks the lock and enters the room, the lock can provide the homeowner with no surveillance or early warning, exposing persons and property to loss. To avoid this, some homeowners install an additional surveillance camera near the door to watch for intruders. However, such cameras are typically suspended from the door or mounted on a nearby wall, which not only adds extra wiring and spoils the overall aesthetics but also leaves the camera vulnerable to damage (e.g., theft).
Therefore, a need exists for an intelligent door lock with a monitoring function.
Disclosure of Invention
The present application is proposed to solve the above technical problems. Embodiments of the application provide an intelligent door lock system and an intelligent door lock control method thereof, in which a camera system is integrated into an electronically controlled door lock to acquire image data of a moving object adjacent to the door lock; the acquired image data is processed and analyzed based on an artificial intelligence algorithm; and, once at least one preset condition is met, at least part of the image data is sent to a mobile terminal carried by a user, thereby allowing the user to remotely control the electronically controlled door lock and to monitor the area adjacent to the door using the camera system.
According to an aspect of the present application, there is provided an intelligent door lock control system including: an electronically controlled door lock, wherein the electronically controlled door lock is mounted on the door for controlling the opening and closing of the door; a camera system integrated with the electronically controlled door lock, the camera system comprising a motion detector for detecting whether a moving object exists in a field of view of the camera system and a first camera device disposed on the door and facing the outer side of the door for acquiring image data of the moving object in the area adjacent to the outer side of the door; and a door lock controller comprising a processor and a memory having stored thereon computer program instructions that, when executed by the processor, cause the processor to: process at least a portion of the image data to determine that at least one condition is satisfied, wherein the at least one condition includes determining that an object contained in the image data includes a human or determining that the image data includes a face region; output at least a portion of the image data to a mobile terminal in response to determining that the at least one condition is satisfied; receive an unlocking control command from the mobile terminal, the unlocking control command being used to trigger the electronically controlled door lock installed on the door to an open position; and control the electronically controlled door lock to the open position in response to receiving the unlocking control command, so as to unlock the electronically controlled door lock.
In the above intelligent door lock control system, the camera system further includes a second camera device, wherein the second camera device is disposed on the door and faces the inner side of the door, and is configured to collect image data of the moving object in the area adjacent to the inner side of the door.
In the above intelligent door lock control system, the processor is further configured to: process at least a portion of the image data using a first deep neural network model to determine that an object contained in the image data includes a human; process the at least a portion of the image data using a second deep neural network model to determine that the image data includes a face region; and determine that at least one condition is satisfied in response to determining that a human is included in an object contained in the image data or that a face region is included in the image data.
In the above-mentioned intelligent door lock control system, the first neural network model and the second neural network model have the same basic model architecture, and only the last layer is different.
In the above intelligent door lock control system, the first neural network model and the second neural network model each include N depthwise separable convolutional layers for obtaining a feature map of the image data, where N is a positive integer from 4 to 12, and where each depthwise separable convolutional layer includes a depthwise convolutional layer for applying a single filter to each input channel and a pointwise convolutional layer for linearly combining the outputs of the depthwise convolution to obtain an updated feature map.
In the above intelligent door lock control system, the processor is further configured to: identify the image regions that differ between a first image and a second image contained in at least a portion of the image data; aggregate the differing image regions between the first and second images to obtain at least one region of interest; perform gray scale processing on the at least one region of interest; process the gray-scaled at least one region of interest using the first deep neural network model to classify the objects contained therein; and determine that the objects contained in the at least one region of interest include a human.
In the above intelligent door lock control system, the processor is further configured to: identify the image regions that differ between a first image and a second image contained in at least a portion of the image data; aggregate the differing image regions between the first and second images to obtain at least one region of interest; perform gray scale processing on the at least one region of interest; and process the gray-scaled at least one region of interest using the second deep neural network model to determine that the at least one region of interest includes a human face region.
According to another aspect of the present application, there is provided an intelligent door lock control method, including: detecting whether a moving object exists in a field of view of a camera system, wherein the camera system comprises a first camera device disposed on a door and facing the outer side of the door for acquiring image data of the moving object in the area adjacent to the outer side of the door; in response to detecting the presence of a moving object within the field of view of the camera system, capturing, by the camera system, image data of the moving object; processing at least a portion of the image data by a door lock controller to determine that at least one condition is satisfied, wherein the at least one condition includes determining that an object contained in the image data includes a human or determining that the image data includes a face region; outputting at least a portion of the image data to a mobile terminal via the door lock controller in response to determining that the at least one condition is satisfied; receiving, by the door lock controller, an unlocking control command from the mobile terminal, the unlocking control command being used to trigger an electronically controlled door lock installed on the door to an open position, wherein the electronically controlled door lock is communicatively connected with the door lock controller for controlling the opening and closing of the door; and controlling the electronically controlled door lock to the open position through the door lock controller in response to receiving the unlocking control command, so as to unlock the electronically controlled door lock.
In the above-mentioned method for controlling an intelligent door lock, the camera system further includes a motion detector for detecting whether a moving object exists in a field of view of the camera system.
In the above intelligent door lock control method, the camera system further includes a second camera device, wherein the second camera device is disposed on the door and faces the inner side of the door, and is configured to collect image data of the moving object in the area adjacent to the inner side of the door.
In the above intelligent door lock control method, the camera system is integrated with the electronically controlled door lock.
In the above-mentioned intelligent door lock control method, processing at least a part of the image data by a door lock controller to determine that at least one condition is satisfied includes: processing, by a door lock controller, at least a portion of the image data using a first deep neural network model to determine that a human is included in an object contained in the image data; processing, by the door lock controller, the at least a portion of the image data using a second deep neural network model to determine that a face region is included in the image data; and determining that at least one condition is satisfied in response to determining that a human is included in an object included in the image data or that a face region is included in the image data.
In the above intelligent door lock control method, the first neural network model and the second neural network model have the same basic model architecture, and only the last layer is different.
In the above intelligent door lock control method, the first neural network model and the second neural network model each include N depthwise separable convolutional layers for obtaining a feature map of the image data, where N is a positive integer from 4 to 12, and where each depthwise separable convolutional layer includes a depthwise convolutional layer for applying a single filter to each input channel and a pointwise convolutional layer for linearly combining the outputs of the depthwise convolution to obtain an updated feature map.
In the above intelligent door lock control method, processing, by the door lock controller, at least a portion of the image data using a first deep neural network model to determine that an object contained in the image data includes a human, includes: identifying the image regions that differ between a first image and a second image contained in at least a portion of the image data; aggregating the differing image regions between the first and second images to obtain at least one region of interest; performing gray scale processing on the at least one region of interest; processing the gray-scaled at least one region of interest using the first deep neural network model to classify the objects contained therein; and determining that the objects contained in the at least one region of interest include a human.
In the above intelligent door lock control method, processing, by the door lock controller, the at least a portion of the image data using a second deep neural network model to determine that the image data includes a face region, includes: identifying the image regions that differ between a first image and a second image contained in at least a portion of the image data; aggregating the differing image regions between the first and second images to obtain at least one region of interest; performing gray scale processing on the at least one region of interest; and processing the gray-scaled at least one region of interest using the second deep neural network model to determine that the at least one region of interest includes a human face region.
In the intelligent door lock control system provided by the application, a camera system is integrated into the electronically controlled door lock to collect image data of a moving object adjacent to the door lock; the collected image data is processed and analyzed based on an artificial intelligence algorithm; and, once at least one preset condition is met, at least part of the image data is sent to a mobile terminal carried by the user, thereby allowing the user to remotely control the electronically controlled door lock and to monitor the area adjacent to the door using the camera system.
Drawings
These and/or other aspects and advantages of the present invention will become more apparent and more readily appreciated from the following detailed description of the embodiments of the invention, taken in conjunction with the accompanying drawings of which:
fig. 1 illustrates a schematic diagram of an intelligent door lock system according to an embodiment of the present application.
Fig. 2 illustrates another schematic diagram of an intelligent door lock system according to an embodiment of the present application.
Fig. 3 illustrates a flow chart of a process in which the door lock controller processes at least a portion of the image data using a first deep neural network model to determine that a human is included in an object included in the image data according to the embodiment of the application.
Fig. 4 is a flowchart illustrating a process in which the door lock controller processes at least a portion of the image data using the second deep neural network model to determine that a face region is included in the image data according to the embodiment of the application.
Fig. 5 illustrates a flowchart of an intelligent door lock control method according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are merely some embodiments of the present application and not all embodiments of the present application, with the understanding that the present application is not limited to the example embodiments described herein.
According to the technical content disclosed herein, the application provides an intelligent door lock system for controlling the opening and closing of a door. The intelligent door lock system integrates a camera system into an electronically controlled door lock so as to acquire image data of a moving object adjacent to the door. The image data is then processed and analyzed based on an artificial intelligence algorithm, and after the processing result is determined to satisfy at least one preset condition, at least a portion of the image data is transmitted to a remote mobile terminal. The intelligent door lock system thus grants the remote mobile terminal the authority to unlock the electronically controlled door lock and enables the area near the door to be monitored remotely through the mobile terminal. In other words, the intelligent door lock system allows the homeowner to unlock the electronically controlled door lock remotely, without having to be present on site to provide the password or fingerprint required by the security authentication mechanism.
Accordingly, the intelligent door lock system transmits at least a portion of the image data to the remote mobile terminal only after determining that the processing result of the image data satisfies at least one preset condition. Optionally, the at least one preset condition includes: the objects included in the image data include a human, or the image data includes a face region. In this way, the homeowner can examine the moving object in the image data through the mobile terminal and decide whether to transmit an unlocking control command to remotely unlock the electronically controlled door lock.
In this way, the intelligent door lock system grants a remote mobile terminal the authority to open the electronically controlled door lock, which on the one hand frees the homeowner from physical constraints and allows the electronically controlled door lock to be operated remotely, and on the other hand allows the homeowner to monitor the area near the door through the mobile terminal, safeguarding property.
Moreover, because the intelligent door lock system analyzes the image data with an artificial intelligence algorithm and transmits at least a portion of the image data to the remote mobile terminal only after determining that at least one preset condition is met, the energy consumed in transmitting image data is effectively reduced, and invalid or erroneous alerts to the mobile terminal are likewise reduced.
In addition, in the present application, the artificial intelligence algorithm that analyzes the image data to determine whether the result satisfies at least one preset condition employs a specific deep neural network model, which achieves a good balance between computational cost and detection accuracy. This deep neural network model also has a relatively small model size and can be deployed directly on a programmable embedded chip to analyze the image data, facilitating the application and popularization of deep learning networks on embedded terminals.
Schematic intelligent door lock system
Fig. 1 illustrates a block diagram of an intelligent door lock system 10 according to an embodiment of the present application. As shown in fig. 1, the intelligent door lock system 10 includes: an electronically controlled door lock 12, a door lock control interface 14, a door lock controller 16, a camera system 18, and a mobile terminal 20, wherein the electronically controlled door lock 12 is installed on a door and can be switched between an open position and a closed position so as to control the opening or closing of the door.
In the embodiment of the present application, as shown in fig. 1, the door lock control interface 14 is located at a side portion of the door for implementing a safety confirmation mechanism to control the electronically controlled door lock 12 to switch between the open position and the closed position. Optionally, the door lock control interface 14 may include a keypad (e.g., a numeric keypad, an alphanumeric keypad, or other type of keypad) for receiving an input code (e.g., manually entered by a user) for selectively controlling the electronically controlled door lock 12 to switch between the open position and the closed position in response to a match between the input code and an unlock code.
In some specific examples of the present application, the door lock control interface 14 may include a voice recognition interface, a fingerprint recognition interface, an iris recognition interface, or other biometric interface for implementing a biometric-based security verification mechanism to selectively control the electronically controlled door lock 12 to switch between the open position and the closed position.
In this embodiment of the present application, the camera system 18 is integrally disposed within the electronically controlled door lock 12 (e.g., the camera system 18 may be embedded within the electronically controlled door lock 12) for capturing image data of a moving object adjacent to the door. It should be appreciated that the camera system 18 may be considered a door monitoring camera system 18 that is integrally disposed within the electronically controlled door lock 12 for monitoring the area proximate the door.
It should be noted that, since the camera system 18 is integrally disposed in the electronically controlled door lock 12, no additional wiring is required and the aesthetic appearance of the door is not damaged, and the camera system 18 is effectively protected in the electronically controlled door lock 12 and is not easily damaged.
More specifically, in the embodiment of the present application, the camera system 18 includes a motion detector 185 and at least one camera device, wherein the motion detector 185 is configured to detect whether a moving object exists in the field of view of the camera system 18, and the at least one camera device is configured to acquire image data of the moving object in the area near the door. Here, the image data of the moving object may represent video data and/or still picture data of the moving object. In particular, in this embodiment, the motion detection result of the motion detector 185 serves as the control signal that triggers image acquisition by the at least one camera device: when the motion detector 185 detects object movement within the field of view of the camera system 18, the at least one camera device is activated in response and acquires image data of the moving object in the area near the door. In this way, the power consumption of the camera system 18 can be effectively reduced.
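The gating logic described above can be summarized in a few lines. The following is a minimal Python sketch, where the MotionDetector and Camera interfaces are hypothetical placeholders; the patent does not define a software API:

```python
import time

class MotionGatedCapture:
    """Keeps only the low-power motion detector active while idle; the
    camera is powered up solely in response to a positive detection."""

    def __init__(self, detector, camera, idle_poll_s=0.2):
        self.detector = detector        # e.g., a PIR-sensor wrapper (assumed)
        self.camera = camera            # imaging device, off while idle
        self.idle_poll_s = idle_poll_s

    def poll_once(self):
        if not self.detector.motion_in_field_of_view():
            time.sleep(self.idle_poll_s)   # stay in the low-power state
            return None
        self.camera.power_on()
        # Capture a pair of frames for the later frame-difference analysis.
        frames = self.camera.capture_burst(num_frames=2, interval_s=0.5)
        self.camera.power_off()
        return frames
```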
In a specific example of the present application, the camera system 18 includes a first camera device 181, the first camera device 181 is embedded in the electronically controlled door lock 12 and faces the outside of the door, wherein the first camera device 181 has a first field of view covering a range from an area outside the door (e.g., 1 meter, 1.5 meters, etc. from the outside of the door), such that when the motion detector 185 detects that there is object movement within the field of view of the first camera device 181, the first camera device 181 is activated to capture image data of the moving object adjacent to the area outside the door in response to the motion detection result of the motion detector 185. That is, in this specific example, the image pickup system 18 includes the first image pickup apparatus 181 directed to the outside of the door for monitoring the vicinity of the outside of the door.
In another specific example of the present application, the camera system 18 further comprises a second camera device 183, the second camera device 183 being embedded in the electronically controlled door lock 12 and facing the inside of the door, wherein the second camera device 183 has a second field of view covering the area inside the door (e.g., extending 1 meter, 1.5 meters, etc. from the inside of the door), such that when the motion detector 185 detects object movement within the field of view of the second camera device 183, the second camera device 183 is activated in response to capture image data of the moving object in the area adjacent to the inside of the door. That is, in this specific example, the camera system 18 includes the second camera device 183, which is disposed opposite the first camera device 181 so that the two cooperate to simultaneously monitor the areas adjacent to both the outer and inner sides of the door.
It is noted that in implementations, the camera system 18 can include a greater or lesser number of camera devices. For example, in order to increase the overall field of view of the camera system 18, the camera system 18 may further include a third camera device embedded in the electronically controlled door lock 12 and facing the outside of the door, wherein the third camera device has a different installation height from the first camera device 181, and/or the third camera device and the first camera device 181 have optical lenses with different field angles, so that the first and third camera devices cover different field areas and together widen the field of view of the camera system 18 as a whole. As another example, the camera system 18 may include only the first camera device 181 and no second camera device 183. These examples are not intended to limit the scope of the present application.
It is noted that in implementations, each camera device (the first camera device 181 or the second camera device 183) can be implemented as and/or include any imaging sensor or device for capturing image data (e.g., video data or still image data) of a moving object in response to detecting object movement within its field of view. A camera device can store a certain amount of image data in a data buffer, for example a circular buffer (which holds the image data captured within a preset time window). In some embodiments of the present application, the image data can also be stored in a computer readable storage medium of the door lock controller 16.
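As one illustration of such a buffer, the sketch below implements a fixed-capacity ring of recent frames with Python's collections.deque; the capacity value is an assumption, not a figure from the patent:

```python
from collections import deque

class FrameRingBuffer:
    """Circular buffer: holds only the most recent frames, so memory use
    stays bounded and the oldest data is overwritten automatically."""

    def __init__(self, max_frames=150):    # e.g., ~15 s of video at 10 fps
        self._frames = deque(maxlen=max_frames)

    def push(self, frame):
        self._frames.append(frame)         # oldest frame dropped when full

    def snapshot(self):
        return list(self._frames)          # frames to analyze or upload
```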
In this embodiment of the present application, the door lock controller 16 includes at least a processor and a memory (computer readable storage medium), wherein computer program instructions are stored on the memory, and when executed by the processor, the processor is configured to implement the intelligent door lock control function as described below. Here, the processor may include, but is not limited to, a microprocessor, a controller, a digital signal processor, an application specific integrated circuit, a matrix of programmable gates, or other separate or integrated logic circuits having data processing capabilities. The computer-readable storage medium may take any combination of one or more readable media. For example, the computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a specific operation process, first, the door lock controller 16 processes at least a part of the image data based on a preset algorithm to determine whether the processing result of the image data satisfies at least one condition. In particular, in this embodiment of the present application, the at least one condition includes: the objects included in the image data include a human, or the image data includes a face region. Here, when the result of processing the image data does not satisfy at least one condition (i.e., no human is included in the objects contained in the image data and no human face region is included in the image data), the door lock controller 16 stops further operation, and the camera system 18 returns to its original state: only the motion detector 185 remains active, detecting object motion within the field of view of the camera system 18. When the processing result of the image data satisfies at least one condition, the door lock controller 16, in response, outputs at least a portion of the image data to a mobile terminal 20 (e.g., a smartphone, a tablet computer, etc.) through a wireless communication network. Here, the wireless communication network may include, but is not limited to, a satellite communication network, a cellular communication network, a wireless internet communication network (e.g., WiFi), a wireless radio frequency communication network, and the like.
Further, the homeowner can examine an object or a face area included in the image data through the mobile terminal 20 to determine whether to send an unlocking control command. Here, when the homeowner finds that the object included in the image data is a potential intruder (e.g., stranger, etc.), it may refuse to transmit the unlocking control command, and further may transmit an alert signal (e.g., voice message, etc.) to the door lock controller 16 through the mobile terminal 20 to alert the potential intruder. When the homeowner determines that the object included in the image data is a secure object (e.g., a relative or friend of the homeowner, etc.), it may send the unlock control command to the door lock controller 16 for remote unlocking of the door. The door lock controller 16, after receiving an unlocking control command from the mobile terminal 20, controls the electronically controlled door lock 12 to an open position to unlock the electronically controlled door lock 12 in response to receiving the unlocking control command.
More specifically, in this embodiment of the present application, the door lock controller 16 processes at least a part of the image data through an artificial intelligence algorithm to determine that at least one condition is satisfied, that is, that a human is included in an object contained in the image data or that a human face region is included in the image data. In other words, the door lock controller 16 performs human object detection and face detection through an artificial intelligence algorithm to determine whether the objects contained in the image data include a human, or whether the image data includes a face region.
For example, the door lock controller 16 may process at least a part of the image data with the motion-based object detection method disclosed in application No. US 16/078,253 to perform human object detection and determine whether the objects contained in the image data include a human. This mainly includes the following steps.
First, at least a portion of the image data is processed to obtain at least one region of interest. In the field of image processing, a region of interest refers to an image region containing candidate objects potentially belonging to a given category, which is part of the overall image. As mentioned above, the object included in the image data is a moving object, and thus, the at least one region of interest can be obtained by identifying a moving part in the image data acquired by the camera system 18. For ease of understanding and explanation, this region of interest extraction method is defined as a motion-based region of interest extraction method in the present application.
In image representation, a moving part in image data is an image area having different image contents between images. Therefore, in order to acquire the region of interest, first, at least two images (e.g., a first image and a second image) are provided to obtain a moving part in the images through a contrast between the images. In other words, in this embodiment of the present application, the image data comprises at least two images (a first image and a second image), wherein the different image areas between the second image and the first image characterize the moving object. Thus, the at least one region of interest can be obtained by comparing the first image and the second image to identify moving parts in the images and clustering the moving parts in the images.
It is noted that the first image and the second image may be two images taken by the camera system 18 at a certain time interval; for example, the interval between taking the first image and the second image may be set to 0.5 s. Of course, the time interval may be set to other values. For example, the first image and the second image may come from video data (with a particular time window, e.g., 15 s) captured by the camera system 18, with the first and second images being two consecutive frames of that video. In that case, the shooting interval between the first image and the second image is one video frame period (the inverse of the frame rate).
Further, in the process of obtaining the first image and the second image with the first camera device 181 or the second camera device 183 of the camera system 18, the camera device itself may physically move (e.g., translate, rotate, etc.), causing the background in the first image and the second image to shift. Accordingly, to avoid the adverse effects of this physical offset, the physical movement of the camera device needs to be compensated before identifying the differing image regions between the first image and the second image. For example, the second image may be translated to compensate for the physical movement based on position data provided by a position sensor (e.g., a gyroscope) integrated with the camera device. Here, the purpose of translating the second image is to align the background in the second image with the background in the first image.
Further, after the at least one region of interest is obtained by the motion-based region-of-interest extraction method, the at least one region of interest is subjected to gray scale processing to convert it into a grayscale image. As will be appreciated by those skilled in the art, in order to richly represent the characteristics of an object, images captured by conventional camera devices are typically color images (e.g., in RGB or YUV format), which include both luminance information and color information. Compared to a grayscale image, a color image has more data channels (the R, G, and B channels). However, in some applications the color characteristics of the object under test contribute little, or nothing at all, to detecting the class to which the object belongs.
Accordingly, the purpose of performing gray scale processing on the at least one region of interest is twofold: on the one hand, converting the at least one region of interest into a grayscale image filters out its color information, reducing the computational cost of the deep neural network model; on the other hand, it prevents the color information in the at least one region of interest from adversely affecting object detection and recognition.
To further reduce the computational cost of the deep neural network, the size of the at least one region of interest may also be reduced to a particular size, e.g., 128 × 128 pixels. Here, the reduced size depends on the accuracy requirements for object detection in the specific application scenario and on the first deep neural network model (described below) used to process the grayscale image. In other words, the reduced size of the at least one region of interest needs to be adjusted based on the architectural features of the first deep neural network model and the accuracy requirements of object detection. The present application is not limited in this respect.
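Putting the steps above together (frame differencing, aggregation of the differing regions, gray scale conversion, and resizing to 128 × 128), a minimal OpenCV sketch might look as follows. The threshold, kernel size, and minimum area are illustrative assumptions, not values specified by the application:

```python
import cv2
import numpy as np

def extract_rois(first_img, second_img, min_area=400):
    """Motion-based region-of-interest extraction from two color frames."""
    # If the camera itself moved, the second image should first be
    # translated to realign the background (e.g., using gyroscope data).
    g1 = cv2.cvtColor(first_img, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(second_img, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g1, g2)                 # differing regions = motion
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, np.ones((15, 15), np.uint8))  # aggregate fragments
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    rois = []
    for c in contours:
        if cv2.contourArea(c) < min_area:      # ignore tiny motion (noise)
            continue
        x, y, w, h = cv2.boundingRect(c)
        roi = g2[y:y + h, x:x + w]             # grayscale crop: color dropped
        rois.append(cv2.resize(roi, (128, 128)))  # fixed input size for CNN
    return rois
```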
Further, the at least one region of interest after the gray scale processing is processed by the first deep neural network model to classify the objects contained therein, and thereby determine whether those objects include a human.
In particular, the first deep neural network model is constructed from depthwise separable convolutional layers, which replace the conventional convolution operation with a depthwise separable convolution in order to address the computational efficiency and parameter count of the deep neural network model. Here, the depthwise separable convolution decomposes a conventional convolution into a depthwise convolution, which applies a single filter to each input channel, and a pointwise convolution, which linearly combines the outputs of the depthwise convolution to obtain an updated feature map. This decomposition effectively reduces both the computational cost and the size of the model. In other words, in this embodiment, each depthwise separable convolutional layer comprises a depthwise convolutional layer for applying a single filter to each input channel and a pointwise convolutional layer for linearly combining the outputs of the depthwise convolution to obtain an updated feature map; the first deep neural network model is thus compressed and optimized at the level of the convolution operation so that it meets the application requirements of an embedded platform.
More specifically, in this embodiment of the present application, the first deep neural network model includes N depthwise separable convolutional layers for obtaining a feature map of the at least one region of interest, where N is a positive integer from 4 to 12. Here, the number of depthwise separable convolutional layers depends on the latency and accuracy requirements of the specific application scenario. Taking the use of the object detection method in the security monitoring field as an example, the deep neural network model includes 5 depthwise separable convolutional layers: the first includes 32 filters of size 3 × 3 (the depthwise convolution) and a corresponding number of 1 × 1 filters (the pointwise convolution); the second, connected to the first, includes 64 filters of size 3 × 3 and a corresponding number of 1 × 1 filters; the third, connected to the second, includes 128 filters of size 3 × 3 and a corresponding number of 1 × 1 filters; the fourth, connected to the third, includes 256 filters of size 3 × 3 and a corresponding number of 1 × 1 filters; and the fifth, connected to the fourth, includes 1024 filters of size 3 × 3 and a corresponding number of 1 × 1 filters.
After obtaining the feature map of the grayscale image through the predetermined number of depthwise separable convolutional layers, the first deep neural network model classifies the objects included in the at least one region of interest and determines whether they include a human. In particular, in this embodiment of the present application, the model classifies the candidate objects contained in the grayscale image with a Softmax multi-class classifier.
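To make the architecture concrete, here is a PyTorch sketch of the five-layer depthwise-separable backbone with the filter counts given above (32, 64, 128, 256, 1024) and a Softmax classification head. The strides, activations, pooling, and class count are assumptions needed to make the model runnable; the application does not specify them:

```python
import torch
import torch.nn as nn

def ds_layer(cin, cout, stride=2):
    """One depthwise separable convolutional layer."""
    return nn.Sequential(
        # Depthwise: a single 3x3 filter applied to each input channel.
        nn.Conv2d(cin, cin, 3, stride=stride, padding=1, groups=cin),
        nn.ReLU(inplace=True),
        # Pointwise: 1x1 filters linearly combine the depthwise outputs.
        nn.Conv2d(cin, cout, 1),
        nn.ReLU(inplace=True),
    )

class HumanClassifier(nn.Module):
    def __init__(self, num_classes=2):         # e.g., human / non-human
        super().__init__()
        self.features = nn.Sequential(
            ds_layer(1, 32), ds_layer(32, 64), ds_layer(64, 128),
            ds_layer(128, 256), ds_layer(256, 1024),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(1024, num_classes)

    def forward(self, x):                      # x: (B, 1, 128, 128) gray ROIs
        z = self.features(x).flatten(1)
        return torch.softmax(self.fc(z), dim=1)   # Softmax multi-class output
```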
In summary, the door lock controller 16 processes at least a portion of the image data with a first deep neural network model to determine that the object included in the image data includes a human being. Fig. 3 illustrates a flowchart of a process in which the door lock controller 16 processes at least a portion of the image data using a first deep neural network model to determine that a human is included in an object included in the image data according to the embodiment of the application. As shown in fig. 3, the process of the door lock controller 16 processing at least a portion of the image data using a first deep neural network model to determine that the object included in the image data includes a human, includes: s310, identifying a different image area between a first image and a second image contained in at least a part of the image data; s320, gathering different image areas between the first image and the second image to obtain at least one region of interest; s330, carrying out gray level processing on the at least one region of interest; s340, processing the at least one region of interest after the gray processing by using the first deep neural network model to classify the object contained in the at least one region of interest; and S350, determining that the object contained in the at least one region of interest comprises a human.
It should be noted that, in another embodiment of the present application, the door lock controller 16 can also process the at least one image data by other human object detection methods to determine that the object included in the at least one image data includes a human. And is not intended to limit the scope of the present application.
As mentioned above, in this embodiment of the present application, the at least one condition further includes: the image data includes a face region. For example, the door lock controller 16 may process the at least a portion of the image data in the following manner to determine that a human face region is included in the image data.
Firstly, processing at least one part of the image data by the motion-based region of interest extraction method to extract at least one region of interest; and performing gray processing on the at least one region of interest to reduce the calculation cost of a second neural network model subsequently used for processing the at least one region of interest. Here, the process of extracting at least one region of interest by using the motion-based region of interest extraction method and performing gray processing on the at least one region of interest is consistent with the above description, and therefore, the description thereof is omitted here.
Further, the at least one region of interest after the gray scale processing is processed by the second deep neural network model to determine that the at least one region of interest includes a human face region. Here, the second deep neural network model may have the same basic model architecture as the first neural network model; that is, the second deep neural network model may also be constructed from depthwise separable convolutional layers. For example, the first and second deep neural network models may share the same front layers and differ only in the last few layers, or differ only in the last layer. In this way, the first and second deep neural network models are further compressed to reduce their storage footprint.
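One hypothetical way to realize this sharing in code is a single backbone with two output heads, as sketched below. This follows the "differ only in the last layer" variant, with the feature dimension taken from the 1024-filter final layer above:

```python
import torch.nn as nn

class TwoHeadDetector(nn.Module):
    """First and second models share every layer except the last one."""

    def __init__(self, shared_backbone, feat_dim=1024):
        super().__init__()
        self.backbone = shared_backbone             # shared separable layers
        self.human_head = nn.Linear(feat_dim, 2)    # "first model" last layer
        self.face_head = nn.Linear(feat_dim, 2)     # "second model" last layer

    def forward(self, x):
        z = self.backbone(x)                        # one shared feature vector
        return self.human_head(z), self.face_head(z)
```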
In operation, the door lock controller 16 may process at least a portion of the image data using parallel processing to determine that at least one condition is satisfied. For example, the door lock controller 16 processes at least a portion of the image data with the first deep neural network model in a first thread to determine that an object contained in the image data is a human; simultaneously, in a second thread, processing the at least a portion of the image data with the second deep neural network model to determine that the image data contains a face region; further, in response to determining that a human is included in an object included in the image data or that a face region is included in the image data, it is determined that at least one condition is satisfied.
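A minimal sketch of that two-thread arrangement with Python's standard threading module is shown below; model.detect is a hypothetical inference call, not an API defined by the application:

```python
import threading

def at_least_one_condition(human_model, face_model, rois):
    """Runs human detection and face detection concurrently on the ROIs."""
    results = {}

    def worker(name, model):
        results[name] = any(model.detect(roi) for roi in rois)

    threads = [threading.Thread(target=worker, args=("human", human_model)),
               threading.Thread(target=worker, args=("face", face_model))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # The condition is met if a human OR a face region was detected.
    return results["human"] or results["face"]
```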
Fig. 4 illustrates a flowchart of the process in which the door lock controller 16 processes at least a portion of the image data using the second deep neural network model to determine that a face region is included in the image data according to the embodiment of the application. As shown in fig. 4, the process includes the steps of: S410, identifying the image regions that differ between a first image and a second image contained in at least a portion of the image data; S420, aggregating the differing image regions between the first image and the second image to obtain at least one region of interest; S430, performing gray scale processing on the at least one region of interest; and S440, processing the at least one region of interest after the gray scale processing using the second deep neural network model to determine that the at least one region of interest includes a human face region.
Further, when the processing result of the image data satisfies at least one condition, in response to determining that the at least one condition is satisfied, the door lock controller 16 outputs at least a portion of the image data to a mobile terminal 20 (e.g., a smartphone, a tablet computer, etc.) through a wireless communication network. Here, the wireless communication network may include, but is not limited to, a satellite communication network, a cellular communication network, a wireless internet communication network (e.g., WiFi), a wireless radio frequency communication network, and the like.
Further, the homeowner can examine an object or a face area included in the image data through the mobile terminal 20 to determine whether to send an unlocking control command. Here, when the homeowner finds that the object included in the image data is a potential intruder (e.g., stranger, etc.), it may refuse to transmit the unlocking control command, and further may transmit an alert signal (e.g., voice message, etc.) to the door lock controller 16 through the mobile terminal 20 to alert the potential intruder. When the homeowner determines that the object included in the image data is a secure object (e.g., a relative or friend of the homeowner, etc.), it may send the unlock control command to the door lock controller 16 for remote unlocking of the door. The door lock controller 16, after receiving an unlocking control command from the mobile terminal 20, controls the electronically controlled door lock 12 to an open position to unlock the electronically controlled door lock 12 in response to receiving the unlocking control command.
In summary, the exemplary intelligent door lock system 10 integrates a camera system 18 into an electronically controlled door lock 12. The system collects image data of a moving object adjacent to the door lock, processes and analyzes the image data collected by the camera system 18 based on an artificial intelligence algorithm, and, after determining that at least one preset condition is met, sends at least a portion of the image data to a mobile terminal 20 carried by the user. This allows the user to remotely control the electronically controlled door lock 12 and to monitor the area near the door using the camera system 18.
Schematic intelligent door lock control method
Fig. 5 illustrates a flowchart of an intelligent door lock control method according to an embodiment of the present application.
As shown in fig. 5, the intelligent door lock control method according to the embodiment of the present application includes the following steps (a compact sketch of this control flow is given after the list):

S510, detecting whether a moving object exists in a field of view of a camera system, wherein the camera system comprises a first camera device arranged on a door and facing the outer side of the door, for acquiring image data of the moving object in an area adjacent to the outer side of the door;

S520, in response to detecting that a moving object exists in the field of view of the camera system, capturing image data of the moving object through the camera system;

S530, processing at least a portion of the image data through a door lock controller to determine that at least one condition is satisfied, wherein the at least one condition includes that an object contained in the image data includes a human or that the image data includes a human face region;

S540, in response to determining that the at least one condition is satisfied, outputting at least a portion of the image data to a mobile terminal through the door lock controller;

S550, receiving, by the door lock controller, an unlocking control command from the mobile terminal, the unlocking control command being used for triggering an electronically controlled door lock installed on the door to an open position, wherein the electronically controlled door lock is communicatively connected with the door lock controller and is used for controlling the opening and closing of the door; and

S560, in response to receiving the unlocking control command, controlling the electronically controlled door lock to the open position through the door lock controller to unlock the electronically controlled door lock.
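The following compact Python sketch ties steps S510 to S560 together as a polling loop; camera, controller, lock, and terminal are hypothetical stand-ins for the hardware and network interfaces the method assumes, not APIs defined by the application.

```python
import time

def door_lock_control_loop(camera, controller, lock, terminal):
    """Run the S510-S560 cycle indefinitely."""
    while True:
        if not camera.motion_detected():               # S510
            time.sleep(0.1)
            continue
        frames = camera.capture()                      # S520
        human = controller.contains_human(frames)      # S530, first model
        face = controller.contains_face(frames)        # S530, second model
        if not (human or face):
            continue
        terminal.send(frames)                          # S540
        command = terminal.wait_for_command()          # S550: may also be
        if command == "unlock":                        # a refusal or alert
            lock.open()                                # S560
```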
In one example, in the above intelligent door lock control method, the camera system further includes a motion detector for detecting whether a moving object exists in a field of view of the camera system.
In one example, in the above intelligent door lock control method, the camera system further includes a second camera device, wherein the second camera device is disposed on the door and faces the inner side of the door, and is configured to capture image data of the moving object in an area adjacent to the inner side of the door.
In one example, in the above-described intelligent door lock control method, the camera system is integrated with the electronically controlled door lock.
In one example, in the above intelligent door lock control method, processing at least a portion of the image data by a door lock controller to determine that at least one condition is satisfied includes: processing, by the door lock controller, at least a portion of the image data using a first deep neural network model to determine that an object contained in the image data includes a human; processing, by the door lock controller, the at least a portion of the image data using a second deep neural network model to determine that a face region is included in the image data; and determining that the at least one condition is satisfied in response to determining that the object contained in the image data includes a human or that a face region is included in the image data.
In one example, in the above intelligent door lock control method, the first neural network model and the second neural network model share the same basic model architecture and differ only in the last layer.
In one example, in the above intelligent door lock control method, the first neural network model and the second neural network model each include N depthwise separable convolutional layers for obtaining a feature map of the image data, where N is a positive integer from 4 to 12. Each depthwise separable convolutional layer includes a depthwise convolutional layer for applying a single filter to each input channel and a pointwise convolutional layer for linearly combining the outputs of the depthwise convolution to obtain an updated feature map.
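For illustration, the PyTorch sketch below builds such a backbone of N depthwise separable blocks with a swappable final layer, matching the statement that the two models differ only in the last layer; the channel widths, kernel sizes, and pooling head are assumptions chosen for brevity, not parameters disclosed in the application.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """One block: a depthwise convolution applies a single 3x3 filter per
    input channel; a 1x1 pointwise convolution then linearly combines the
    depthwise outputs into an updated feature map."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   padding=1, groups=in_channels)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

def build_model(num_blocks=6, head_out=2):
    """Stack num_blocks (N in [4, 12]) depthwise separable layers over a
    grayscale input; only the final linear layer is model-specific."""
    layers = [nn.Conv2d(1, 32, kernel_size=3, padding=1)]
    channels = 32
    for _ in range(num_blocks):
        out_channels = min(channels * 2, 256)  # illustrative width schedule
        layers.append(DepthwiseSeparableConv(channels, out_channels))
        channels = out_channels
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(),
               nn.Linear(channels, head_out)]  # the differing last layer
    return nn.Sequential(*layers)

# Shared basic architecture; each model gets its own task-specific last layer:
human_model = build_model(head_out=2)  # object includes a human, or not
face_model = build_model(head_out=2)   # face region present, or not
```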
In one example, in the above intelligent door lock control method, processing, by a door lock controller, at least a portion of the image data using a first deep neural network model to determine that an object contained in the image data includes a human includes: identifying different image regions between a first image and a second image contained in the at least a portion of the image data; aggregating the different image regions between the first and second images to obtain at least one region of interest; performing grayscale processing on the at least one region of interest; processing the at least one region of interest after the grayscale processing by using the first deep neural network model to classify the object contained in the at least one region of interest; and determining that the object contained in the at least one region of interest includes a human.
In one example, in the above intelligent door lock control method, processing, by the door lock controller, the at least a portion of the image data using a second deep neural network model to determine that the image data includes a human face region includes: identifying different image regions between a first image and a second image contained in the at least a portion of the image data; aggregating the different image regions between the first and second images to obtain at least one region of interest; performing grayscale processing on the at least one region of interest; and processing the at least one region of interest after the grayscale processing by using the second deep neural network model to determine that the at least one region of interest includes a human face region.

Claims (16)

1. An intelligent door lock control method comprises the following steps:
detecting whether a moving object exists in a field of view of a camera system, wherein the camera system comprises a first camera device which is arranged on a door, faces the outer side of the door, and is used for acquiring image data of the moving object in an area adjacent to the outer side of the door;
in response to detecting the presence of a moving object within the field of view of the camera system, capturing, by the camera system, image data of the moving object;
processing at least a portion of the image data by a door lock controller to determine that at least one condition is satisfied, wherein the at least one condition includes determining that an object contained in the image data includes a human or determining that the image data includes a face region;
outputting at least a portion of the image data to a mobile terminal via the door lock controller in response to determining that at least one condition is satisfied;
receiving, by the door lock controller, an unlocking control command from the mobile terminal, wherein the unlocking control command is used for triggering an electronically controlled door lock installed on the door to an open position, and the electronically controlled door lock is communicatively connected with the door lock controller for controlling the opening and closing of the door; and
in response to receiving the unlocking control command, controlling, by the door lock controller, the electronically controlled door lock to the open position to unlock the electronically controlled door lock.
2. The intelligent door lock control method of claim 1, wherein the camera system further comprises a motion detector for detecting whether a moving object is within a field of view of the camera system.
3. The intelligent door lock control method according to claim 2, wherein the camera system further comprises a second camera device, wherein the second camera device is provided on the door and faces the inner side of the door, for collecting image data of the moving object in an area adjacent to the inner side of the door.
4. The intelligent door lock control method according to claim 3, wherein the camera system is integrated with the electronically controlled door lock.
5. The intelligent door lock control method according to any one of claims 1-4, wherein processing at least a portion of the image data by a door lock controller to determine that at least one condition is satisfied comprises:
processing, by a door lock controller, at least a portion of the image data using a first deep neural network model to determine that an object contained in the image data includes a human;
processing, by the door lock controller, the at least a portion of the image data using a second deep neural network model to determine that a face region is included in the image data; and
determining that at least one condition is satisfied in response to determining that the object contained in the image data includes a human or that a face region is included in the image data.
6. The intelligent door lock control method of claim 5, wherein the first neural network model and the second neural network model have the same basic model architecture, only the last layer being different.
7. The intelligent door lock control method according to claim 6, wherein the first neural network model and the second neural network model each include N depthwise separable convolutional layers for obtaining a feature map of the image data, where N is a positive integer from 4 to 12, and wherein each depthwise separable convolutional layer includes a depthwise convolutional layer for applying a single filter to each input channel and a pointwise convolutional layer for linearly combining the outputs of the depthwise convolution to obtain an updated feature map.
8. The intelligent door lock control method of claim 7, wherein processing at least a portion of the image data by a door lock controller using a first deep neural network model to determine that an object contained in the image data includes a human comprises:
identifying different image regions between a first image and a second image contained in at least a portion of the image data;
aggregating the different image regions between the first and second images to obtain at least one region of interest;
performing grayscale processing on the at least one region of interest;
processing the at least one region of interest after the gray scale processing by using the first deep neural network model so as to classify the object contained in the at least one region of interest; and
determining that the object contained in the at least one region of interest includes a human.
9. The intelligent door lock control method of claim 7, wherein processing, by the door lock controller, the at least a portion of the image data using a second deep neural network model to determine that the image data includes a face region comprises:
identifying different image regions between a first image and a second image contained in at least a portion of the image data;
aggregating the different image regions between the first and second images to obtain at least one region of interest;
performing grayscale processing on the at least one region of interest; and
processing the at least one region of interest after the grayscale processing with the second deep neural network model to determine that the at least one region of interest includes a face region.
10. An intelligent door lock system for controlling the opening and closing of a door, comprising:
an electronically controlled door lock, wherein the electronically controlled door lock is mounted on the door for controlling the opening and closing of the door;
a camera system integrated with the electronically controlled door lock, the camera system comprising a motion detector and a first camera device, wherein the motion detector is used for detecting whether a moving object exists in a field of view of the camera system, and the first camera device is arranged on the door, faces the outer side of the door, and is used for acquiring image data of the moving object in an area adjacent to the outer side of the door; and
a door lock controller comprising a processor and a memory having stored thereon computer program instructions that, when executed by the processor, cause the processor to perform operations comprising:
processing at least a portion of the image data to determine that at least one condition is satisfied, wherein the at least one condition includes determining that an object contained in the image data includes a human or determining that the image data includes a face region;
outputting at least a portion of the image data to a mobile terminal in response to determining that the at least one condition is satisfied;
receiving an unlocking control command from the mobile terminal, wherein the unlocking control command is used for triggering the electronically controlled door lock installed on the door to an open position; and
in response to receiving the unlocking control command, controlling the electronically controlled door lock to the open position to unlock the electronically controlled door lock.
11. The intelligent door lock system according to claim 10, wherein the camera system further comprises a second camera device, wherein the second camera device is provided on the door and faces the inner side of the door, for capturing image data of the moving object in an area adjacent to the inner side of the door.
12. The intelligent door lock system of claim 11, wherein the processor is further configured to:
processing at least a portion of the image data using a first deep neural network model to determine that an object contained in the image data includes a human;
processing the at least a portion of the image data using a second deep neural network model to determine that the image data includes a face region; and
determining that at least one condition is satisfied in response to determining that the object contained in the image data includes a human or that a face region is included in the image data.
13. The intelligent door lock system of claim 12, wherein the first neural network model and the second neural network model have the same basic model architecture, differing only in the last layer.
14. The intelligent door lock system of claim 13, wherein the first neural network model and the second neural network model each include N depthwise separable convolutional layers for obtaining a feature map of the image data, where N is a positive integer from 4 to 12, and wherein each depthwise separable convolutional layer includes a depthwise convolutional layer for applying a single filter to each input channel and a pointwise convolutional layer for linearly combining the outputs of the depthwise convolution to obtain an updated feature map.
15. The intelligent door lock system of claim 14, wherein the processor is further configured to:
identifying different image regions between a first image and a second image contained in at least a portion of the image data;
aggregating the different image regions between the first and second images to obtain at least one region of interest;
performing grayscale processing on the at least one region of interest;
processing the at least one region of interest after the gray scale processing by using the first deep neural network model so as to classify the object contained in the at least one region of interest; and
determining that the object contained in the at least one region of interest includes a human.
16. The intelligent door lock system of claim 14, wherein the processor is further configured to:
identifying different image regions between a first image and a second image contained in at least a portion of the image data;
aggregating the different image regions between the first and second images to obtain at least one region of interest;
performing grayscale processing on the at least one region of interest; and
processing the at least one region of interest after the gray scale processing with the second deep neural network model to determine that the at least one region of interest includes a human face region.

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811402696.5A CN111311786A (en) 2018-11-23 2018-11-23 Intelligent door lock system and intelligent door lock control method thereof
US16/238,489 US20200005573A1 (en) 2018-06-29 2019-01-02 Smart Door Lock System and Lock Control Method Thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811402696.5A CN111311786A (en) 2018-11-23 2018-11-23 Intelligent door lock system and intelligent door lock control method thereof

Publications (1)

Publication Number Publication Date
CN111311786A true CN111311786A (en) 2020-06-19

Family

ID=71146493

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811402696.5A Pending CN111311786A (en) 2018-06-29 2018-11-23 Intelligent door lock system and intelligent door lock control method thereof

Country Status (1)

Country Link
CN (1) CN111311786A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112700576A (en) * 2020-12-29 2021-04-23 成都启源西普科技有限公司 Multi-modal recognition algorithm based on images and characters

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105913519A (en) * 2016-04-13 2016-08-31 深圳市通力科技开发有限公司 Door lock control method and system thereof
US20160308859A1 (en) * 2015-04-14 2016-10-20 Blub0X Technology Holdings, Inc. Multi-factor and multi-mode biometric physical access control device
CN106285418A (en) * 2016-09-23 2017-01-04 江阴格罗克建筑智能科技有限公司 Intelligent household-entry safety door system
CN107023224A (en) * 2017-03-01 2017-08-08 曹汉添 Door lock method for controlling opening and closing and control system based on active security protection
CN107564144A (en) * 2017-08-20 2018-01-09 聚鑫智能科技(武汉)股份有限公司 A kind of intelligent robot gate control system and control method
CN108229533A (en) * 2017-11-22 2018-06-29 深圳市商汤科技有限公司 Image processing method, model pruning method, device and equipment
CN108596042A (en) * 2018-03-29 2018-09-28 青岛海尔智能技术研发有限公司 Enabling control method and system
CN108629306A (en) * 2018-04-28 2018-10-09 北京京东金融科技控股有限公司 Human posture recognition method and device, electronic equipment, storage medium


Similar Documents

Publication Publication Date Title
TWI785312B (en) Vehicle door unlocking method and device thereof, vehicle-mounted face unlocking system, vehicle, electronic device and storage medium
US10769914B2 (en) Informative image data generation using audio/video recording and communication devices
KR101387628B1 (en) Entrance control integrated video recorder
US10593174B1 (en) Automatic setup mode after disconnect from a network
WO2018175328A1 (en) Dynamic identification of threat level associated with a person using an audio/video recording and communication device
KR101838858B1 (en) Access control System based on biometric and Controlling method thereof
KR101730255B1 (en) Face recognition digital door lock
CN106803943A (en) Video monitoring system and equipment
US20100052947A1 (en) Camera with built-in license plate recognition function
CN105279872A (en) Anti-theft monitoring device and monitoring system for entrance door
US20130215276A1 (en) Enhanced-security door lock system and a control method therefor
US11064167B2 (en) Input functionality for audio/video recording and communication doorbells
KR101682311B1 (en) Face recognition digital door lock
US11217076B1 (en) Camera tampering detection based on audio and video
US10713928B1 (en) Arming security systems based on communications among a network of security systems
US11151828B2 (en) Frictionless building access control system with tailgate detection
US20190340904A1 (en) Door Surveillance System and Control Method Thereof
US11349707B1 (en) Implementing security system devices as network nodes
CN111917967A (en) Door monitoring system and control method thereof
KR20160072386A (en) Home network system using face recognition based features and method using the same
CN112991585A (en) Personnel entering and exiting management method and computer readable storage medium
CN111800617A (en) Intelligent security system based on Internet of things
CN113870508A (en) Intelligent security gateway arming and disarming method combining facial recognition
Jahnavi et al. Smart anti-theft door locking system
KR101182986B1 (en) Monitoring system and method using image coupler

Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200619)