NO20171729A1 - A security system - Google Patents

A security system

Info

Publication number
NO20171729A1
NO20171729A1
Authority
NO
Norway
Prior art keywords
input
person
user interface
rules
security system
Prior art date
Application number
NO20171729A
Other versions
NO343993B1 (en)
Inventor
Patrick J Westerby
Original Assignee
Hypervig As
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hypervig As filed Critical Hypervig As
Priority to NO20171729A priority Critical patent/NO343993B1/en
Priority to PCT/NO2018/050260 priority patent/WO2019088845A1/en
Publication of NO20171729A1 publication Critical patent/NO20171729A1/en
Publication of NO343993B1 publication Critical patent/NO343993B1/en

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19695Arrangements wherein non-video detectors start video recording or forwarding but do not generate an alarm themselves
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00Individual registration on entry or exit
    • G07C9/20Individual registration on entry or exit involving the use of a pass
    • G07C9/22Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder
    • G07C9/25Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B19/00Alarms responsive to two or more different undesired or abnormal conditions, e.g. burglary and fire, abnormal temperature and abnormal rate of flow
    • G08B19/005Alarms responsive to two or more different undesired or abnormal conditions, e.g. burglary and fire, abnormal temperature and abnormal rate of flow combined burglary and fire alarm systems
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B25/008Alarm setting and unsetting, i.e. arming or disarming of the security system

Landscapes

  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Emergency Management (AREA)
  • Computer Security & Cryptography (AREA)
  • Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Theoretical Computer Science (AREA)
  • Alarm Systems (AREA)
  • Burglar Alarm Systems (AREA)

Description

A SECURITY SYSTEM
TECHNICAL FIELD
[0001] The present invention relates to security systems such as home security systems and security systems for commercial premises such as office buildings, stores, sport arenas and the like.
BACKGROUND
[0002] A security system is a system designed to detect intrusion into a building or an area. Security systems are typically used in residential buildings, office buildings, factories and other commercial properties, as well as military facilities, sport arenas and the like. The intention is to protect against burglary, property damage and espionage as well as personal protection against intruders that may represent a threat against the safety and well-being of individuals.
[0003] A basic security system typically includes sensors for detection of intruders or potential intruders, a premises control unit that reads sensor inputs and controls the status of the system, human-machine interfaces for manual control of the system, alerting devices for sounding alarms or transmitting alarm messages to remote locations, and interconnections between the various components of the system.
[0004] Modern security systems are often capable of detecting several types of threats, including threats that are not related to intruders, such as fire, power failure etc. In addition, a system may include several types of sensors and sensors in multiple locations, and the control unit may be able to combine input from these several sensors in order to better identify the nature of a potential threat, track an intruder through premises, etc.
[0005] However, while security systems can be quite sophisticated with respect to collection of data, they are often difficult to program to operate in different modes of operation beyond simple settings like armed/disarmed, home/away and the like. And while they may combine several sensors and a time delay before determining to activate an alarm mode, they typically do not offer any interaction with a potential intruder beyond allowing the potential intruder to identify themselves, for example by entering a code on a keypad, and activating an alarm status if the potential intruder fails to do so.
[0006] Consequently, there is a need for security systems that are capable of more sophisticated monitoring of a location, systems that can be configured to operate in different modes based on different circumstances without requiring programming by a trained person, and systems that are able to learn and adapt based on sensor input and user input over time.
SUMMARY OF THE DISCLOSURE
The present invention is a system that alleviates at least some of the needs presented above. In particular, a security system is provided having a plurality of sensors including at least one video camera, at least one user interface, and a control unit configured to receive sensor input from the plurality of sensors. The control unit includes a face recognition module, and local or distributed memory holding a representation of an alert level, a set of rules for how to respond to sensor input, and a collection of data identifying known persons. The alert level can be adjusted by user input, and the control unit is configured to receive input from the plurality of sensors indicative of the presence of a person, use the face recognition module to identify a person based on input from the at least one video camera and image data stored as part of the collection of data identifying known persons, and use output from the face recognition module and a current representation of the alert level as input to at least one rule from the set of rules to select a response to the sensor input. The response may include at least one of an update of the representation of an alert level, a message to be presented by the at least one user interface, an acceptance of the presence of an identified person, and an activation of an alarm.
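The rule-based response selection summarized above can be sketched as follows. This is an illustrative sketch only: the names `Context`, `Rule` and `select_response`, the numeric alert-level scale, and the specific response strings are assumptions for illustration, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Context:
    alert_level: int          # e.g. 0 (relaxed) .. 3 (high alert)
    person_id: Optional[str]  # None if face recognition found no match

@dataclass
class Rule:
    condition: Callable[[Context], bool]
    response: str

def select_response(ctx: Context, rules: list) -> str:
    # Apply the first rule whose condition matches the current context,
    # combining face recognition output and the current alert level.
    for rule in rules:
        if rule.condition(ctx):
            return rule.response
    return "no_action"

rules = [
    Rule(lambda c: c.person_id is not None, "accept_presence"),
    Rule(lambda c: c.person_id is None and c.alert_level >= 2, "activate_alarm"),
    Rule(lambda c: c.person_id is None, "present_message"),
]
```

With these example rules, a recognized person is accepted regardless of the alert level, while an unrecognized person triggers either a challenge message or an alarm depending on the current alert level.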
In various embodiments the control unit may be a local control unit, i.e. a unit installed on the premises the security system is installed to monitor, it may be a remote unit, such as a web service, or it may be implemented as a combination of a local control unit and remote units which may be accessed by the local control unit.
In some embodiments the message to be presented is a request for user input, the request including at least one of a user identity, a user credential, and a purpose for presence to be entered via the user interface.
While the user interface may include any number of user interfaces known in the art such as keyboards and touch screens, some embodiments provide a user interface including a microphone, and the control unit further comprises a speech recognition module.
In some embodiments the control unit is further configured to use input from the user interface, output from the face recognition module and a current representation of the alert level as input to at least one rule from the set of rules to select a response to the input from the user interface.
The representation of an alert level may include a variable adjustable by user input. In some embodiments the representation of an alert level includes a variable adjustable by output from a rule selected from the set of rules. The variable adjustable by output from a rule may be the same variable as the user adjustable variable, or it may be a separate variable.
The representation of an alert level may also, in some embodiments, include an internal state of the control unit, the state being a result of a progress through a sequence of repeated application of one or more rules selected from the set of rules.
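The idea of an alert level represented as an internal state progressed by repeated rule application can be illustrated as a small state machine. The state and event names below are assumptions for illustration only.

```python
# Hypothetical internal states and events; repeated application of the
# transition rules progresses the system through a sequence of states.
TRANSITIONS = {
    ("idle", "motion"): "observing",
    ("observing", "unknown_face"): "challenging",
    ("challenging", "no_response"): "alarmed",
    ("challenging", "valid_credential"): "idle",
}

def step(state: str, event: str) -> str:
    # Apply one rule; remain in the current state if no rule matches.
    return TRANSITIONS.get((state, event), state)
```

Here the "alert level" is not an explicit variable but the position reached in the sequence idle → observing → challenging → alarmed.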
In some embodiments the face recognition module is not a part of the control unit except in the sense that the control unit includes an application programming interface (API) enabling the control unit to transmit image data as part of a request to a remote face recognition service and receive a face recognition processing result as a response to the request. Similarly, in some embodiments the speech recognition module includes an application programming interface (API) allowing the control unit to transmit audio data as part of a request to a remote speech recognition service and receive a speech recognition processing result as a response to the request.
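The request/response exchange with such a remote recognition service might look as follows. The endpoint URL and the JSON field names (`image`, `person_id`, `confidence`) are hypothetical; the actual HTTP transport is omitted.

```python
import base64
import json
from typing import Optional, Tuple

# Hypothetical endpoint; not a documented API of any real service.
FACE_API_URL = "https://example.invalid/v1/recognize"

def build_recognition_request(image_bytes: bytes) -> str:
    # Package a camera frame as the JSON body of a request to the service.
    return json.dumps({"image": base64.b64encode(image_bytes).decode("ascii")})

def parse_recognition_response(body: str) -> Tuple[Optional[str], float]:
    # Extract the matched person id (None if no match) and a confidence score.
    data = json.loads(body)
    return data.get("person_id"), float(data.get("confidence", 0.0))
```

A speech recognition request would follow the same pattern, with audio data in place of image data.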
Some embodiments include a notification module configured to transmit a notification of the outcome of the application of a rule to a recipient. Such a notification may, for example, be in the form of a text message, an email or a phone call.
In some embodiments the control unit includes at least one module installed on a local device and at least one module installed on a remote server, wherein the local device and the remote server are in communication over a communication network.
In some embodiments the control unit is configured to forward at least one of a first video stream and a first audio stream received from the plurality of sensors to a second user interface and to forward at least one of a second audio stream and a second video stream from the second user interface to the at least one user interface. The second user interface may be implemented as an app on a remote device. This may allow users to view the current state of the premises the security system is installed to monitor and to communicate with persons detected on those premises by the security system.
In some embodiments the premises the security system is installed to monitor may be divided into a plurality of zones. Sensor input from the plurality of sensors may then be associated with one of a plurality of zones, and the control unit may be configured to use an association with one of the plurality of zones as input to the at least one rule from the set of rules to select a response to the sensor input. In this manner the system may operate with different levels of alertness for different zones, depending, for example, on whether a zone is normally considered accessible or off limits to unknown persons. The alert level for a given zone may be user adjustable, but it may also be a function of one or more of the alert level for the system as a whole, a categorization of the zone, and the time of day.
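The per-zone alert level described above, as a function of the system-wide level, the zone category and the time of day, could be sketched as follows. The category names, the numeric scale and the formula are assumptions for illustration only.

```python
# Hypothetical categorization of zones on the monitored premises.
ZONE_CATEGORIES = {
    "driveway": "accessible",
    "backyard": "restricted",
    "garden": "accessible",
}

def zone_alert_level(system_level: int, zone: str, hour: int) -> int:
    # Restricted zones and night-time hours raise the effective alert
    # level for that zone, capped at the maximum level 3.
    level = system_level
    if ZONE_CATEGORIES.get(zone) == "restricted":
        level += 1
    if hour < 6 or hour >= 22:
        level += 1
    return min(level, 3)
```

A rule evaluating sensor input would then use `zone_alert_level` for the zone the input is associated with, rather than the system-wide level.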
In some embodiments the security system includes one or more devices that are in communication with and configured to be activated by the local control unit as a selected response to the sensor input. The one or more devices may be selected from the group consisting of: loudspeakers, sirens, spotlights, floodlights, stroboscopic lights, and actuators. These devices may be used to give a person access (e.g. by unlocking a door), request a person to vacate the premises (e.g. by playing a message to that effect), or try to scare the person away from the premises (e.g. by playing a message warning that the police may be notified, activating a siren, activating floodlights or stroboscopic lights, etc.).
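A simple way to picture the coupling between a selected response and the devices it activates is a lookup table. The response and device names here are illustrative assumptions, not claim language.

```python
# Hypothetical mapping from a selected response to device activations.
RESPONSE_ACTIONS = {
    "accept_presence": ["unlock_door"],
    "request_vacate": ["play_leave_message"],
    "deter": ["play_warning", "activate_siren", "activate_floodlights"],
}

def devices_for(response: str) -> list:
    # Return the device activations for a response; unknown responses
    # activate nothing.
    return RESPONSE_ACTIONS.get(response, [])
```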
In another aspect a method is provided. The method includes receiving sensor input from at least one of a plurality of sensors including at least one video camera, the input being indicative of the presence of a person, using a face recognition module to identify a person based on input from the at least one video camera and information stored in the database of known persons, and using output from the face recognition module and a current representation of an alert level as input to at least one rule from the set of rules to select a response to the sensor input. The response may include at least one of updating the representation of an alert level, presenting a message by the at least one user interface, accepting the presence of an identified person, and activating an alarm.
In some embodiments the presenting of a message includes requesting user input representative of a user identity, a user credential, and a purpose for a person's presence to be entered via the user interface.
In some embodiments the at least one user interface includes a microphone, and the method further includes receiving a speech signal from the microphone and processing the speech signal by a speech recognition module.
In some embodiments the method further includes using input from the user interface, output from the face recognition module and a current representation of the alert level as input to at least one rule from the set of rules to select a response to the input from the user interface.
In some embodiments the method includes receiving user input to adjust the representation of an alert level. The method may also include adjusting the value of a variable based on output from the rule selected from the set of rules. The update of the alert level based on output from a rule may be an adjustment of an explicitly defined variable, but it may also be the progressing through a sequence of repeated applications of one or more rules selected from the set of rules, i.e. a change of an internal state of a control unit.
The use of a face recognition module may, in some embodiments, include transmitting image data over an application programming interface (API) as part of a request to a remote face recognition service and receiving a face recognition processing result as a response to the request. Similarly, the processing of the speech signal by a speech recognition module may include transmitting speech data over an application programming interface (API) as part of a request to a remote speech recognition service and receiving a speech recognition processing result as a response to the request.
In some embodiments, the method includes transmitting a notification of the outcome of the application of a rule to a recipient.
In some embodiments the method is performed in a distributed manner, such that at least one step of the method is performed by a device installed locally and at least one step is performed by a remote server, the device installed locally and the remote server being in communication over a communication network.
In some embodiments the method includes forwarding at least one of a first video stream and a first audio stream received from the plurality of sensors to a second user interface and forwarding at least one of a second audio stream and a second video stream received from the second user interface to the at least one user interface. The second user interface may be implemented as an app on a remote device.
In some embodiments the received sensor input is associated with one of a plurality of zones, and the association with one of the plurality of zones can be used as input to the at least one rule from the set of rules to select a response to the sensor input.
In some embodiments the method includes activation of one or more devices as a selected response to the sensor input, the one or more devices being selected from the group consisting of: loudspeakers, sirens, spotlights, floodlights, stroboscopic lights, and actuators.
In another aspect a computer program product is provided. The computer program product may be carried on computer readable media and include instructions enabling a computing device to perform any one of the methods described above.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] FIG. 1 shows an overview of a premises that may be monitored by a security system;
[0008] FIG. 2 shows a block diagram of a local control unit in a security system and a number of local devices connected to or in communication with the local control unit;
[0009] FIG. 3 shows a number of modules in a security system;
[0010] FIG. 4 shows a number of modules in a security system where some functionality is provided from remote servers;
[0011] FIG. 5 shows a diagram of a database that may be part of a security system;
[0012] FIG. 6 shows a flow chart illustrating part of a method performed in a security system;
[0013] FIG. 7 shows a flow chart illustrating part of a method performed in a security system;
[0014] FIG. 8 shows a flow chart illustrating further details of an embodiment of a method performed in a security system; and
[0015] FIG. 9 shows a flow chart illustrating a method for updating the rules implemented in a security system.
DETAILED DESCRIPTION
[0016] In the following description of various embodiments, reference will be made to the drawings, in which like reference numerals denote the same or corresponding elements. The drawings are not necessarily to scale. Instead, certain features may be shown exaggerated in scale or in a somewhat simplified or schematic manner, wherein certain conventional elements may have been left out in the interest of exemplifying the principles of the invention rather than cluttering the drawings with details that do not contribute to the understanding of these principles.
[0017] It should be noted that, unless otherwise stated, different features or elements may be combined with each other whether or not they have been described together as part of the same embodiment below. The combination of features or elements in the exemplary embodiments are done in order to facilitate understanding of the invention rather than limit its scope to a limited set of embodiments, and to the extent that alternative elements with substantially the same functionality are shown in respective embodiments, they are intended to be interchangeable, but for the sake of brevity no attempt has been made to disclose a complete description of all possible permutations of features.
[0018] Furthermore, those with skill in the art will understand that the invention may be practiced without many of the details included in this detailed description. Conversely, some well-known structures or functions may not be shown or described in detail, in order to avoid unnecessarily obscuring the relevant description of the various implementations. The terminology used in the description presented below is intended to be interpreted in its broadest reasonable manner, even though it is being used in conjunction with a detailed description of certain specific implementations of the invention.
[0019] Reference is first made to FIG. 1, which shows a perspective drawing of a property 100 with a house 101, a garage 102, a driveway 103, a front yard 104, a backyard 105 and a garden 106. The property 100 may be monitored by a plurality of sensors, and the sensors may be of several types, for example video cameras, microphones, motion sensors, etc. In addition to the sensors a number of devices may be provided for issuing sound and/or light, for example spotlights, loudspeakers and sirens, as well as user interface devices such as keyboards, keypads and switches. These devices, which may be collectively referred to as peripherals, will be described in further detail below.
[0020] Each one of the various sensors and other peripherals may be configured to cover one or more of the areas 101-106. Correspondingly, each area may be covered by one or more sensors and also include other peripherals. In addition, corresponding devices may exist inside the house 101 and the garage 102. Areas that are inside buildings are in principle not different from the areas outside buildings, and the invention may equally well be used to monitor a combination of outdoor and indoor areas, or only indoor areas, for example in an office building, a shopping mall or the like. For the purposes of facilitating understanding without cluttering the description of the invention with unnecessary alternatives and features, the substantial part of the following examples will focus on embodiments where all areas are outdoors as shown in the drawing. It should, however, be borne in mind that these examples may be generalized to other embodiments with only indoor areas or with a combination of outdoor and indoor areas, as mentioned. It should also be understood that certain features of the invention may be utilized even in embodiments with only one area.
[0021] Security systems that are capable of monitoring a plurality of areas are often difficult to configure. The most basic systems simply allow different areas to be independently armed or disarmed. More sophisticated systems can be configured to take movement between areas or zones into consideration, and they may be configured to react differently to intrusion detection in different zones. Even security systems with only one zone may be difficult to configure for unsophisticated users, or they may give a limited set of configuration options that are inadequate and therefore hardly ever utilized, for example because they add inconvenience and increase the risk of false alarms or a false sense of security.
[0022] According to the present invention the various zones are categorized, but a user does not have to specify specific rules for each zone. Similarly, the system may be placed in a plurality of modes by the user, but the modes are designed to reflect a mental state or the current needs of the user. Thus, all a user has to do in order to configure the system is to define the zones and assign a category to each zone. During operation, all a user has to do is to define their own mental state to the system. The system will then handle sensor input based on where activity has been detected and its own learning over time, as will be described in further detail below.
[0023] The mental state of the user may be thought of as a stress level or a vigilance level which the system will adopt when reacting to sensor input. In the following description the term "alert level" will be used to refer to this system state.
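The mapping from a user's stated mental state to the system's alert level can be pictured as a small lookup. The mode names and the numeric scale below are assumptions for illustration, not part of the disclosure.

```python
# Hypothetical user-facing modes mapped to internal alert levels.
ALERT_LEVELS = {
    "relaxed": 0,
    "normal": 1,
    "uneasy": 2,
    "alarmed": 3,
}

def set_alert_level(mode: str) -> int:
    # Translate the user's declared mental state into the alert level
    # the system adopts when reacting to sensor input.
    if mode not in ALERT_LEVELS:
        raise ValueError("unknown mode: " + mode)
    return ALERT_LEVELS[mode]
```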
[0024] FIG. 2 shows the local devices that may be installed at the premises of a property in an exemplary embodiment. The first device is a Local Alarm Unit 201 which is the main management device for the system at the premises. This device may be implemented as a computer and may perform some or all of the various tasks and methods associated with receiving data from and controlling peripheral devices, interpreting data, controlling system states and communication with remote parts of the system.
[0025] The peripheral devices associated with the various areas are shown on the left of the drawing. The drawing shows only one peripheral of each type, but it will be understood by those with skill in the art that there may be a plurality of devices of any of the various illustrated types. A system may also include additional types of peripheral devices or only a subset of the types shown in the drawing.
[0026] A first type of peripheral in this embodiment is identified as cameras 202. One or more cameras may be configured to obtain a view of the various areas of the property. The cameras 202 may be associated with respective areas 101-106, or a camera may be mounted and configured to have a view of more than one area. Conversely, some areas may require more than one camera for coverage of the entire area. When one or more cameras combine to view one or more areas, the respective video streams received by the local alarm unit 201 may be processed such that they are combined in a manner that is consistent with the division of the property into areas. Similarly, parts of video images that show a neighboring property or a public space may be removed prior to further processing or storage of video information in order to respect laws relating to video surveillance.
[0027] In addition to keeping surveillance of parts of or entire areas, cameras 202 may be dedicated to view people that approach the building 101 or stand immediately outside a gate or a door, for example in order to view people who ring a doorbell.
[0028] A second type of peripheral in this embodiment is shown in the drawing as microphones 203. Microphones 203 may serve several purposes. A first purpose may be to detect noises that are indicative of a situation that may raise the system's alert level or require attention, such as the sound of footsteps, breaking glass, gunfire, rain, thunder, or whatever other distinct sound a user may associate with an unwanted or suspicious situation. A second purpose for microphones 203 may be to allow a human that is detected in one of the areas 101-106 and addressed by the system to deliver speech input to the system in order to identify themselves or state their purpose for approaching.
[0029] In addition to cameras 202 and microphones 203, additional types of sensors 204 may also be part of embodiments of the invention. Such sensors may, for example, include infrared cameras and sensors, ultrasonic transducers, photoelectric sensors, microwave sensors, seismic sensors, etc.
[0030] When the system has detected a person, whether that person is categorized as a potential intruder, a neutral, or a friendly person, or not yet categorized, the system may be configured to address that person audibly. For this purpose the local alarm unit 201 may store or otherwise have access to pre-recorded audio or text that can be converted using a text-to-speech system and played over loudspeakers 205. The loudspeakers 205 are peripherals that may be positioned in strategic locations such as near gates, driveways, doors, patios and garages, or they may more widely cover more or less the entire area. It will be understood that loudspeakers that are used to play messages that represent a prompt for a spoken response from the detected person should be positioned sufficiently close to a microphone 203 capable of recording the spoken response.
[0031] Some embodiments of a system according to the invention may also include other user interfaces 206 in the form of e.g. a keypad, a touchscreen, switches, locks and other input devices that allow a person to enter information or commands into the system.
[0032] While the loudspeakers 205 may be considered sufficient in some embodiments, other embodiments may require additional sirens or horns 207 that are capable of issuing sounds that are considerably louder than the spoken messages issued by the loudspeakers 205. Such sounds may be intended to alert people of a detected danger, to scare off unwanted persons, or both. A similar purpose may be addressed by spotlights, floodlights or stroboscopic lights 207 that cover relevant parts of the property 100.
[0033] The peripherals 202-207 described above collectively contribute to detect various situations or conditions, communicate with detected persons, and otherwise react to present conditions. FIG. 2 also shows a user configuration interface 208, which may be connected to the local alarm unit 201 and allow a user to configure or program the local alarm unit 201. The user configuration interface 208 may, for example, be used to define areas and categorize them, create user profiles, and respond to system requests for additional input such as categorization of newly detected persons. This or additional user interfaces may also be used to communicate with persons detected by the system, for example using the microphones 203 and loudspeakers 205 described above.
[0034] One or more network communication interfaces 209 may also be connected to the local alarm unit 201. This allows the local alarm unit 201 to communicate with remote devices or users over the Internet, over a cellular phone network, or by any combination of public and private networks using any combination of communication protocols known in the art. Communication with remote devices and users will be discussed in further detail below.
[0035] A final type of peripheral is represented as actuators 210 and includes any locks, latches and other mechanisms that may be activated to allow a person to open a gate or a door or otherwise gain access to the property 100, a particular zone or the house itself.
[0036] Reference is now made to FIG. 3, which shows a first embodiment of the present invention in which a number of modules or components are provided in substantially the same location, for example in the same building. Apart from the peripherals 202-207, which by necessity have to be located in the positions where they are required as determined by their functionality, the modules may even be part of the same device. However, as will be seen when other embodiments are described below, the embodiment illustrated in FIG. 3 may also be thought of as a conceptual illustration of the most basic modules in various implementations of the invention, to which additional modules, components and capabilities may be added - including the remote location of some modules, which may, for example, be provided as cloud services.
[0037] The local alarm unit 201 has been described above with reference to FIG. 2. In the embodiment in FIG. 3 the local alarm unit 201 is connected to peripherals 202-207, which also were described above. Also connected to the local alarm unit 201 is a local storage unit 301 which may be one or more hard drives or some other type of known persistent memory, possibly in combination with volatile memory circuits. The local storage unit 301 may store configuration files, computer program instructions, logs of sensor data and detected events, text or recorded speech messages, as well as rules and algorithms for interpretation of sensor data, detection and categorization of events, and algorithms for machine learning based on sensor data and user input.
[0038] Additional units include a speech recognition module 302, a face recognition module 303 and communication modules 304. The speech recognition module 302 receives audio input from microphones 203, converts speech to text and interprets the text for commands and other input. The face recognition module 303 receives video input from cameras 202 and attempts to identify persons that are registered in a database of users and previously identified persons in the local storage unit 301. One or more communication modules 304 may include the network communication interfaces 209 as well as communication protocol implementations that enable communication with specific remote devices or services. This may include communication over a wireless or wired local area network, the Internet, a cellular network and other networks, in order, for example, to send alarm messages to a user, a fire department, the police, or a security company. One or more of the speech recognition module 302, the face recognition module 303 and the communication module(s) 304 may be implemented as hardware/software combinations that are part of the local alarm unit 201, or as separate devices connected to the local alarm unit 201. As will be described in further detail below, these modules as well as additional modules may also be implemented remotely, for example as cloud services.
[0039] FIG. 4 shows a block diagram corresponding to the one shown in FIG. 3, but where additional functionality has been added, and where a number of modules are explicitly shown as being remotely located from the local alarm unit 201 and in communication with it over a wide area network 405 which may be any combination of the Internet, telephone or cellular networks, satellite communication etc.
[0040] The various remotely located modules may be distributed such that they are all implemented in individual systems, or two or more modules may be implemented in the same system. For convenience, reference numbers used in the drawing are the same as those used in FIG. 3 for the modules that have corresponding modules shown in FIG. 3. This is not intended to imply anything about whether the modules are implemented remotely or locally with respect to the local alarm unit 201. Any module may, in principle, be implemented remotely or locally and in the same or in a different system. The exception is, of course, modules that by necessity have to be located locally because they include sensors or user interfaces that are intended to register or interact with local phenomena such as persons or events.
[0041] Furthermore, the various units may be implemented as dedicated to an embodiment of the present invention, or they may be implemented, at least partly, as a commercially available cloud based service, as will be described in further detail below.
[0042] Again, the local alarm unit 201 is connected to a number of local peripherals 202-207 and local storage unit 301. Communication units or interfaces are not shown in FIG. 4, but they can be assumed present as part of the local alarm unit 201. Over the wide area network 405, or over local connections or a local area network for peripherals and modules that are implemented locally, the local alarm unit 201 is in communication with the other modules of the system. The local alarm unit 201 receives data from the peripherals 202-207 and will attempt to detect and identify persons and events, and interact with persons if required. In order to do this the local alarm unit 201 will utilize the resources provided by the other modules.
[0043] For example, one or more of the peripherals may be a 270° area camera for detecting and tracking people. A PTZ (Pan Tilt Zoom) camera may be used for zooming onto persons and taking snapshots of them. These snapshots may be uploaded to the face recognition service 303 for face recognition. As already mentioned, some embodiments will implement this service in a local module, while other embodiments will implement this service in a centralized server or as a commercially available cloud service. An example of a commercially available service that provides face recognition is Microsoft Cognitive Services.
[0044] Face recognition may be used to determine whether a visitor should be granted access to an area controlled by the security system, as well as to determine whether to issue an alarm, send a notification, or change the status of the security system. In embodiments where Microsoft Cognitive Services (MSCS) are used for face recognition, both the local alarm unit 201 and the web service 401 may communicate with these services to verify identities and manage people based on their group membership, as well as to assign such membership or request identification and group membership assignment from an authorized user if a previously unknown person is registered for the first time.
[0045] The web service 401 will administer persons and person groups on the face recognition service 303. Examples of person groups include a group of persons that are allowed on the property, such as family and friends, and a group of persons who are unwanted/not allowed on the premises. The web service 401 may receive images from an authorized user operating an app 405, or images may be received as snapshots from the local alarm unit 201, classified by input from an authorized user, and then uploaded to the face recognition service 303. The web service 401 will upload images to the face recognition service 303, or to a database 404 accessible by the face recognition service 303. By adding additional images for the registered persons, the face recognition service 303 can be trained to identify visitors more accurately and quickly.
[0046] When the local alarm unit 201 registers a snapshot of a person on the property 100, it will upload the snapshot to the face recognition service 303 for comparison against persons already registered in the person groups. The face recognition service 303 will return the identity and group membership of the person if recognition is positive.
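The handling of a returned recognition result can be sketched as follows. This is a minimal illustration only, not the actual service API: the result shape, the group names and the decision names are assumptions introduced for this example.

```python
# Sketch of how the local alarm unit might act on a face recognition
# result. The result shape and group names are assumptions for
# illustration, not the actual service API.

ALLOWED_GROUPS = {"family", "friends"}   # allowed on the property
BLOCKED_GROUPS = {"unwanted"}            # not allowed on the premises

def decide_access(recognition_result):
    """Map a recognition result to an access decision.

    recognition_result is None when no registered person matched,
    otherwise a dict like {"person_id": ..., "group": ...}.
    """
    if recognition_result is None:
        return "ask_user"          # unknown person: escalate to an authorized user
    group = recognition_result.get("group")
    if group in ALLOWED_GROUPS:
        return "grant"
    if group in BLOCKED_GROUPS:
        return "alarm"
    return "ask_user"              # known person, uncategorized group

print(decide_access({"person_id": 7, "group": "family"}))  # grant
print(decide_access(None))                                  # ask_user
```

In this sketch the unknown-person case falls through to the same escalation path as an uncategorized group, matching the idea that an authorized user makes the final call.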
[0047] The local alarm unit 201 can interact with detected persons through loudspeakers 205 and microphones 203, as well as over other user interfaces 206. The way the local alarm unit 201 addresses a detected person may depend on the state of the system as well as the categorization of the person or event that has been detected, as will be described in further detail below. Spoken replies made by a person will be registered by microphones 203 and the local alarm unit 201 will forward the speech signal to a speech recognition module 302. Again, the speech recognition module 302, or service, may be implemented locally or remotely, and as a dedicated module or as a commercially available cloud based service. An example of a commercially available speech recognition service is provided by the Google Speech API, which may be used to translate the speech to text. The text returned from the speech recognition module 302 may be returned to the local alarm unit 201 (or in some embodiments to a centralized unit) where the response is compared with valid responses and commands.
[0048] The interpretation of the validity and/or content of the response will be used to determine any change in state for the system as well as any other action that should be taken by the system, as described below.
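The comparison of the returned text against valid responses and commands might, under assumed command names and a simple normalization scheme, look like this sketch:

```python
# Illustrative sketch: comparing text returned by the speech recognition
# service against a set of valid responses. The command names and the
# normalization are assumptions for this example, not part of the system
# as described.

VALID_COMMANDS = {
    "open the gate": "open_gate",
    "disarm": "disarm",
    "call the owner": "notify_owner",
}

def interpret_response(text):
    """Return the command a spoken response maps to, or None if invalid."""
    normalized = text.strip().lower().rstrip(".!?")
    return VALID_COMMANDS.get(normalized)

print(interpret_response("Open the gate!"))  # open_gate
print(interpret_response("hello"))           # None
```

An invalid response (None) would feed into the state and action determination described below, for example triggering a repeated prompt or an escalation.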
[0049] A security system web service 401 is a centrally located part of the security system in many embodiments. The web service 401 may operate as a main communications channel and storage unit for the local alarm unit 201 as well as for web or app based communication with the system by users.
[0050] The web service 401 may include a web API server, and a job scheduler for maintenance tasks. The web API server may be configured to respond to https requests from the local alarm unit 201 as well as from web browsers or apps 405. These requests may include functionality for reading and writing to storage units, sending commands by an operator, and sending events and notifications from the local alarm unit 201.
[0051] In addition to the web API the web service 401 may include some scheduled jobs running at different time intervals. Examples include cleaning up image captures of unknown visitors older than a certain time limit, for example 15 minutes.
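A scheduled cleanup job of this kind could be sketched as follows. Capture records are assumed here to be (id, timestamp) pairs; the actual storage layout is not specified by the description above.

```python
from datetime import datetime, timedelta

# Sketch of the scheduled cleanup job described above: find captures of
# unknown visitors older than a configurable time limit (15 minutes in
# the example). The (id, timestamp) record shape is an assumption.

def expired_captures(captures, now, limit=timedelta(minutes=15)):
    """Return the ids of captures older than the time limit."""
    return [cid for cid, taken in captures if now - taken > limit]

now = datetime(2018, 1, 1, 12, 0)
captures = [
    (1, now - timedelta(minutes=20)),  # expired
    (2, now - timedelta(minutes=5)),   # still fresh
]
print(expired_captures(captures, now))  # [1]
```

The returned ids would then be deleted from storage by the job scheduler mentioned above.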
[0052] The web service 401 may be implemented on a cloud based platform such as Microsoft Azure, for example using an ASP.NET Web API server.
[0053] A streaming server 402 is included in various embodiments of the invention. Some embodiments may include a streaming server locally, i.e. as part of the local alarm unit 201. However, the embodiment illustrated in FIG. 4 includes a streaming server 402 implemented as a remote server accessible to the local alarm unit 201 over the wide area network 405. The local alarm unit may be in communication with the web service 401 such that the web service 401 operates as a front end to the streaming server. Thus the web service 401 may receive requests for initiation of video streams between the local alarm unit 201 and an app or a browser 405. The video stream may be captured by cameras 202 to allow users to remotely view the parts of the property 100 covered by cameras 202. The web service 401 may also receive requests to set up two way video and/or audio streams between peripherals (i.e. cameras 202 and/or microphones 203) and an app or browser 405. When such two way streaming has been initiated communication may take place between a device with an app or a browser 405 and a person in the vicinity of the appropriate peripherals at the premises.
[0054] In many embodiments, only a user with an app or browser connection to the web service 401 and the appropriate access rights will be allowed to request initiation of a video or audio stream. However, certain embodiments may allow a person who is at the front door or gate of a property 100 to request a communication link to be initiated between the camera 202 and microphone 203 at his or her location and a specified user, for example the front desk of one of several companies located at the premises or one of several persons inhabiting the property 100.
[0055] The various protocols and set up procedures used to initiate a multimedia stream are well known in the art and will not be described in further detail herein.
[0056] A notification service 403 is yet another module that may be implemented as part of a dedicated server or as a commercially available cloud based service. The notification service 403 may be used to send push messages to an app or some other user agent connected to the wide area network 405. Notifications will, in embodiments such as the one illustrated in FIG. 4, typically be received by the web service 401 as events from the local alarm unit 201 and sent via the notification service 403 to the correct user agent. The system may also be configured to allow transmission of global notifications to all registered phones, for instance in the case of alarms that should go out to all users associated with a given property 100, or when important security updates are released that are relevant to all users associated with multiple properties served by the same or multiple web services 401. Microsoft Azure Notification Hubs is one example of a service that provides the functionality required of the notification service 403.
[0057] The currently most widespread platforms for implementing app based user agents are Android (Android is a trademark owned by Google Inc.) and iOS (IOS is a trademark owned by Cisco Systems Inc. but licensed by Apple Inc.) mobile operating systems. The following discussion relates to exemplary embodiments using a Microsoft Azure notification service to send notifications to Android and iOS apps. This is not intended as a limiting example.
[0058] Notifications sent to Android and iOS have slightly different formatting. In both cases the message is sent as a JSON object. The notification service 403 will convert the message into the correct format based on data received from the app 405. For Android and iOS the message formats may be as follows:
Android: {"data":{"message":"Notification message text"}}
iOS: {"aps":{"alert":"Notification message text"}}
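The conversion into the correct platform format could be sketched as a small helper. The function name and platform identifiers are assumptions introduced for this example; only the two payload shapes come from the text above.

```python
import json

# Produce the platform-specific notification payload, as the
# notification service is described as doing. Function name and
# platform identifiers are illustrative assumptions.

def format_notification(platform, text):
    if platform == "android":
        payload = {"data": {"message": text}}   # FCM format
    elif platform == "ios":
        payload = {"aps": {"alert": text}}      # APNS format
    else:
        raise ValueError("unsupported platform: " + platform)
    return json.dumps(payload)

print(format_notification("android", "Notification message text"))
# {"data": {"message": "Notification message text"}}
```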
[0059] Notification messages to Android and iOS use different protocols. Android uses Firebase Cloud Messaging (FCM, earlier GCM) while iOS uses the Apple Push Notification Service (APNS). Azure Notification Hubs is able to handle both protocols, so the system only has to send messages in the correct format to the notification service 403. Azure Notification Hubs is accessed through a REST interface from the web service 401 using a prebuilt library called Microsoft.Azure.NotificationHubs. It exposes a NotificationHubClient which handles the communication mainly through two functions:
// Send a notification to an Android device. The tagExpression identifies a unique app
public Task<NotificationOutcome> SendGcmNativeNotificationAsync(string jsonPayload, string tagExpression);
// Send a notification to an iOS device. The tagExpression identifies a unique app
public Task<NotificationOutcome> SendAppleNativeNotificationAsync(string jsonPayload, string tagExpression);
[0060] REST is an acronym for Representational State Transfer. It is a term that is well known in the art and will not be explained here. The invention is not limited to embodiments using REST interfaces.
[0061] A database 404 may be provided, for example in association with the web service 401. The database 404 may be used to hold any information that is not stored in the local storage unit 301, and may of course also store a backup of information stored locally. Examples of information held by the database 404 include an event log, file storage (e.g. configuration files, multimedia files and other files such as photos captured by the peripherals 202-207), user data, preference settings etc. The database 404 may be implemented as an SQL database. The database may be provided on a cloud service platform, for example on Microsoft Azure, which provides Azure Table Storage and Azure Blob Storage. In some embodiments logs, for example event logs and error logs, are stored in the former, while binary files such as captured images, videos and audio files are stored in the latter.
[0062] FIG. 4 shows a smartphone 405 as representative for an app, a browser or any other user agent that a user may utilize in order to communicate with the system. For the purposes of this disclosure these terms will be used interchangeably in order to exemplify various types of devices and software that can be used in various embodiments. There is, however, no intention to imply that these alternatives represent different technical features or possibilities. In principle, any user agent (which is the most general term and representative of any software acting on behalf of a user and running on a computing device) may be implemented with any functionality that the hardware of the device and the software platform allows. For simplicity the client device and user agent combination will primarily be referred to as the app 405.
[0063] The app 405 may be configured to interact with the security system using web pages and http or https requests as well as notifications. The requests and responses may primarily go directly between the web service 401 and the app 405, while notifications may be provided by the notification service 403. Other communication protocols and interaction between other modules of the system are possible in alternative embodiments.
[0064] The app 405 may store certain user information such as user credentials and local preferences for how the app should be displayed on the device, but in most embodiments the majority of data that is retained by the system will be stored in the database 404. Some data may also be stored at the premises in the local storage unit 301 as already described.
[0065] If the app 405 sends a request to the local alarm unit 201 using RestClient, the web service 401 receives the request first and the app 405 gets a status message in return from the web service 401. The web service 401 will then push the request further on to the local alarm unit 201. A typical request may be "Set Alarm" or "Set System State", which may adjust the alert level or otherwise change variables the system uses to determine the outcome of a rule for how to respond to sensor input.
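How such a forwarded request might be applied by the local alarm unit can be sketched as follows. The state names and the request shape are assumptions introduced for this example; the text above only names the request types.

```python
# Sketch of a "Set System State" request, as pushed on from the web
# service to the local alarm unit. State names and the request shape
# are illustrative assumptions.

class LocalAlarmUnit:
    VALID_STATES = {"disarmed", "armed_home", "armed_away"}

    def __init__(self):
        self.state = "disarmed"

    def handle_request(self, request):
        """Apply a request forwarded by the web service; return a status."""
        if request.get("command") == "set_system_state":
            state = request.get("state")
            if state in self.VALID_STATES:
                self.state = state
                return {"status": "ok", "state": self.state}
        return {"status": "rejected"}

unit = LocalAlarmUnit()
print(unit.handle_request({"command": "set_system_state", "state": "armed_away"}))
# {'status': 'ok', 'state': 'armed_away'}
```

The stored state would then feed into rule evaluation, since the system state is one of the variables determining how sensor input is handled.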
[0066] The app 405 may send data to or request data from the web service 401 using RestClient. In this case the web service 401 may return Acknowledge messages with status, and data if data has been requested. Typical requests may be for all data belonging to a specific visitor, or for a list of all known visitors.
[0067] Conversely, when the local alarm unit 201 sends a message to the app 405, the local alarm unit 201 sends the message to the web service 401 using RestClient. The web service 401 then sends a notification to the app 405 using the notification service 403 containing the data the app 405 needs. A typical scenario may be when an unknown visitor has entered the property and the acceptance or rejection of that person must be given by an authorized user because there are insufficient rules in the system for making a determination, or because the determination is to reject the person but an authorized user should be given the opportunity to override this decision.
[0068] The term web service as used above is a term of art which is intended to refer to any piece of software executed by a networked computer, e.g. a server, that makes itself available over the Internet. A web service will typically use a standardized XML messaging system, it will not be tied to any one operating system or programming language, it will be self-describing via a common XML grammar, and it should be discoverable via a simple find mechanism. Web services are usually built on top of open standards such as TCP/IP, HTTP, Java, HTML, and XML, and may use SOAP to transfer messages and WSDL to describe the availability of the service. However, the present invention is not intended to be limited to these specific platforms and protocols, and the term is therefore intended to also cover other implementations where software modules are accessible remotely.
[0069] Reference is now made to FIG. 5, which illustrates an exemplary database 404 consistent with various embodiments of the invention. The drawing is a crow's foot database diagram showing a number of tables, or entities that may be present in the database. The drawing is not intended to be a complete illustration of all information that may be present in a database in typical embodiments of the invention. For example, junction tables are omitted and only represented as many-to-many relations between entity tables. The drawing should, however, give those with skill in the art the required understanding of how a database can be designed in embodiments of the invention.
[0070] A first table is the Persons table 501, which includes all known data about any person that is registered in the system, whether that person is a resident, a regular visitor, an occasional visitor, or a person that has only been observed by the system once. However, the system may be configured to discard rather than register visitors in accordance with various rules, based for example on the quality of available data such as images, the zone in which the person was observed, the time when the person was observed, the length of time the person remained on the premises, as well as user input by an authorized user. These and other rules may be determined and configured in accordance with the needs specified by an operator or a designer on a case by case basis.
[0071] In this illustrative embodiment all persons registered in the database are registered with the following attributes, to the extent that the information is actually known and available. The PersonID may be a unique identifier that is internal to the system and ensures that persons may be uniquely identified. Image is a snapshot showing the face of the person. In some embodiments images may be stored in external files and this field holds a reference to that file. In embodiments where several images may be stored for each person, there may instead be a one-to-many relation between the person and the image files, for example in the form of a separate table with one entry for each image with a reference to the image file and a foreign key reference to the PersonID. The image or images should have sufficient quality to allow facial recognition. FirstName and LastName are self-explanatory text strings. It may be desirable to allow registration of persons for whom the name is not known, in which case some default such as "John" and "Doe", "Unknown" or "NN" may be entered as the person's name.
[0072] A comment may be a text string that allows users to add a description or some additional fact that is known about the person. EntrancePassword is the person's password for being allowed access to the premises. This password may be required by the system before the person is allowed access to the premises (e.g. before a door is unlocked) in addition to or instead of facial recognition. The password may also be required in order to allow certain user commands such as change of alert level. In some embodiments, unidentified persons may be allowed to identify themselves by speaking their name and password when prompted by the security system to do so. Some embodiments may use the Persons table to also identify a person's user account in the security system. However, it is consistent with the principles of the invention to keep user accounts separate from the Persons table. For example, some user accounts may be purely administrative and not associated with a person that ever requires access to the premises, and in such embodiments it may be decided to implement user accounts separately from the entries in the Persons table.
[0073] The Persons table may, of course, include additional information not shown here, such as date of birth, nationality, etc.
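The Persons table described above can be sketched as a relational schema, here using an in-memory SQLite database. The column types are assumptions; as noted, the Image field may instead hold a reference to an external file, which is the convention used here.

```python
import sqlite3

# Minimal sketch of the Persons table described above. Column types are
# illustrative assumptions; Image holds a reference to an external file.

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Persons (
        PersonID         INTEGER PRIMARY KEY,
        Image            TEXT,      -- reference to an image file
        FirstName        TEXT,
        LastName         TEXT,
        Comment          TEXT,
        EntrancePassword TEXT
    )
""")
# Register a person whose name is not known, using a default name.
conn.execute(
    "INSERT INTO Persons (Image, FirstName, LastName, Comment, EntrancePassword) "
    "VALUES (?, ?, ?, ?, ?)",
    ("faces/unknown_001.jpg", "Unknown", "NN", "seen at the gate", None),
)
row = conn.execute("SELECT FirstName, LastName FROM Persons").fetchone()
print(row)  # ('Unknown', 'NN')
```

Additional attributes such as date of birth or nationality would simply be further columns, and a one-to-many image relation would move Image into a separate table with a foreign key on PersonID.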
[0074] The Roles table 502 lists the different roles a person may have in the system, i.e. the person's relationship with the system. Examples include resident, family member, neighbor, friend, service personnel, gardener, mailman, etc. In addition to a RoleID the relation includes a RoleName attribute and a Role attribute. The latter may include a description of the role. In embodiments where a person can only have one role, the role may be defined directly in the Persons table 501 or by a foreign key reference from the Persons table 501 to the Roles table 502. However, the illustrated embodiment allows a person to have several roles, and a role may be common to several people. For example, a person may be both a janitor and a gardener, and the role gardener may be shared by several people. Allowing a many-to-many relationship between persons and roles enables more detailed definition of access rights. For example, a janitor who is also a gardener may have access to additional zones that a person who is only a janitor (or only a gardener) does not have access to. The many-to-many relationship may be implemented by a junction table not shown in the drawing.
[0075] Each role may be subject to several access rules, and in this exemplary embodiment the access rules are listed in the RoleAccessRules table 503. In addition to the primary key RoleAccessRuleID, the table may include the attributes RoleID, which is a foreign key reference to the role a specific access rule applies to, and a RoleAccessRuleName, which is a text string. This table may include additional attributes holding, for example, a description of the rule and foreign key references to additional information in other tables. However, in this example it is assumed that there are many-to-many relationships with the tables that hold additional information, and these relationships may be implemented by junction tables that are not shown in the drawing.
[0076] An access rule may be limited to one or more time intervals. These can be defined in the AccessTimePermissions table 504. An entry in this table includes an AccessTimePermissionID, a unique ID for the given interval, an AccessTimePermissionName, which is a text string describing the access time permission in terms understandable to humans, an AccessPreDefCategory, which may be a predefined categorization of the access time permission, and an AccessTimePermission. The latter is the actual definition of the time interval. This interval may actually be defined as two entries, for example a start time and an end time, or a start time and a duration. The interval, or several intervals, may also be defined in one or more additional tables. The interval may also be associated with additional rules, for example such that the access time permission expires as soon as it has been used once in that time interval.
[0077] In the illustrated embodiment it is assumed that one access rule may be associated with several access time permissions, and that an access time permission may be associated with several access rules. It is also assumed that a rule will have to be associated with at least one time permission, which of course may be to always give permission. Those with skill in the art will realize that this is not the only possibility and that the relationships between rules and the definition of the time intervals when they apply may be structured differently from this example.
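Checking whether an observation falls inside an access time permission, assuming the start-time-plus-duration representation mentioned above, might look like this sketch; the record shape is an assumption for illustration.

```python
from datetime import datetime, timedelta

# Sketch of an access time permission defined as a start time plus a
# duration, one of the representations suggested above. The dict shape
# is an illustrative assumption.

def within_permission(permission, now):
    """True if `now` falls inside the permitted interval."""
    start = permission["start"]
    end = start + permission["duration"]
    return start <= now <= end

perm = {"start": datetime(2018, 6, 1, 8, 0), "duration": timedelta(hours=4)}
print(within_permission(perm, datetime(2018, 6, 1, 10, 30)))  # True
print(within_permission(perm, datetime(2018, 6, 1, 13, 0)))   # False
```

A rule associated with several time permissions would grant access if any one of its permission intervals contains the current time.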
[0078] An access rule in the RoleAccessRules table 503 may apply to one or more zones, and the zones are listed in the Zones table 505. A zone has a unique ZoneID, a ZoneName, a definition of the zone, and a current AlertLevel. The definition of the zone may be done in a number of different ways and will not be discussed in detail. One possibility is to simply assign the different sensors to respective zones, and any input that is received from a sensor is considered as originating from that zone. However, some embodiments will include sensors that may at least partly cover more than one zone, for example video cameras that have a view of more than one zone. For this purpose, the zones may be defined as polygons that enclose specific parts of the property 100, as rooms inside the house 101, as sections of one or more video images, or in any other way that allows sufficiently accurate definition of the zones to allow consistent interpretation of sensor input and application of access rules. These definitions may be held in additional tables in the database that are not shown in this example.
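If zones are defined as polygons, a detected position can be assigned to a zone with a standard ray-casting point-in-polygon test, sketched below. The coordinates and the zone name are illustrative only.

```python
# One of the zone definitions suggested above is a polygon enclosing a
# part of the property. A standard ray-casting test can then assign a
# detected position to a zone. Coordinates are illustrative only.

def point_in_polygon(x, y, polygon):
    """Ray-casting point-in-polygon test; polygon is a list of (x, y)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edges crossed by a ray going right from (x, y).
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

driveway = [(0, 0), (10, 0), (10, 5), (0, 5)]  # a rectangular zone
print(point_in_polygon(3, 2, driveway))   # True
print(point_in_polygon(12, 2, driveway))  # False
```

Running the test against each zone polygon in turn yields the zone (or zones) an observation belongs to, which can then be matched against the access rules.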
[0079] It will be realized that the Zones table 505 will be relevant when peripherals are registered in the system and assigned to the various zones. In order to preserve space and for the sake of brevity the example does not show tables holding information about peripherals and how they are related to zones.
[0080] A role access rule will typically be associated with one or more actions to be performed by the system, and in this exemplary embodiment the actions are listed in the Actions table 506. For each defined action this table may hold a unique ActionID, an ActionName, a definition of one or more alarms (sirens 208) that should be triggered, floodlights 208 that should be turned on, actuators 210 that should be activated (in order to lock or unlock doors for example), and messages that should be played over loudspeakers 205. This may require additional tables in order to identify the appropriate peripherals, messages, etc., but such additional tables are not shown in order not to clutter the drawing and extend this description more than what is required in order to convey the necessary understanding to those with skill in the art.
[0081] It will, of course, be understood that the various actions that may be taken may depend on whether the rule is found to be fulfilled or not (e.g. the identified person is inside or outside his access time permission interval, in the right zone etc.), what the alert level is, and other factors that may apply based on additional configuration of the system. According to the illustrated example, a rule may be associated with a plurality of actions, and an action may be associated with (or triggered by) a plurality of rules. This may of course be structured differently in other embodiments. It will also be understood that since a rule may be associated with several actions, each action may relate to only one peripheral. It may then not be necessary to have several attributes in the Actions table that refer to different types of peripherals. Some of this information may also be entered in additional tables, for example in tables listing the various peripherals. Such tables are not shown in the drawing for the sake of brevity.
[0082] In addition to actions taken on the premises it may be desirable to send notifications to users of the system. These notifications are listed in the Notifications table 507. An entry in the Notifications table 507 may include a unique NotificationID, a notification name, an identification of whom to notify, when to notify, and which action to perform in order to notify.
[0083] The definition of when to notify may be a condition that needs to be fulfilled in addition to the conditions that are associated with the access rule itself, or it may, for example, determine whether the notification should be sent immediately, only as a delayed report at a predefined time, or only upon request from an authorized user.
[0084] The notification action may define the content of the notification and how the notification should be sent. Typically it will be sent as a notification by the notification service 403 as described above. However, it may also be possible, at least in some embodiments, to send notifications as email, text messages, or as alarm signals to an alarm central or for example to the police.
[0085] In the embodiment illustrated in FIG. 5 potential notification recipients are listed in a separate table called the NotificationRecipients table 508. In some embodiments this table may be omitted and the potential recipients can be included in the Persons table 501.
However, some notification recipients may not be associated directly with identifiable persons, such as for example alarm centrals, police departments, fire departments, etc., and they may be associated with information that is irrelevant for many persons, such as phone numbers, user names, and email addresses. It will be understood by those with skill in the art that this information may be structured in a number of different ways and that the illustrated embodiment is only an example.
[0086] In this example the NotificationRecipients table 508 holds entries that include a unique NotificationRecipientID, which may be a primary key that is referenced by the identification of whom to notify in the Notifications table 507. An entry may further include a recipient name and a recipient address. The address may have different formats, e.g. a user name, a phone number, an email address, a URL, etc., and it may be desirable to include additional fields for each address type, or to include some of this information in additional tables.
[0087] Whenever a person is observed by the system, this observation is entered in a log. In the exemplary embodiment in FIG. 5 that log is in the form of ActivityTimes table 509. An entry in that table includes an ActivityTimeID, which is a unique identification of an entry in the table. A VisitorID field holds the identity of the person, provided that the system has been able to identify the person using the facial recognition service 303 described above. If the person was not identified, the system may discard the observation, or the person may be registered as unknown. For this purpose the Persons table 501 may hold a default entry for unknown persons. However, if the person is registered with a photo of sufficient quality to allow registration of a new person entry in the Persons table 501, such an entry may be created automatically. An authorized user may later add to the information the system is capable of entering automatically, or choose to delete the entry if it is believed to be irrelevant.
[0088] The log entry may further include time stamps for when the person arrived and when the person left. Finally, the entry may include an identification of the zone in which the person was observed. If a person is observed moving from zone to zone, this may be registered as separate entries in the log. Alternatives include keeping one entry for the entire duration of the person's presence on the property 100 and a separate list of zones entered by the person.
[0089] In the example a table called StandardRules table 510 is also included. This table lists rules that are not associated with roles, but either with individuals or with all visitors. This table is shown as not being associated with any time permissions, notifications or zones (in other embodiments it could be) and represents actions that always apply, for example to always allow residents to enter, to always turn on floodlights when people are walking towards the house, to always activate sirens 208 if a fire is detected, etc.
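As a purely illustrative sketch, two of the tables described above might be realized as follows in a relational database. The column names and types are assumptions derived from the description, not the actual schema of FIG. 5, and the `log_observation` helper is hypothetical:

```python
import sqlite3

# Hypothetical sketch of the Persons and ActivityTimes tables; column
# names are assumptions based on the description, not the FIG. 5 schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Persons (
    PersonID INTEGER PRIMARY KEY,
    Name TEXT
);
CREATE TABLE ActivityTimes (
    ActivityTimeID INTEGER PRIMARY KEY,
    VisitorID INTEGER REFERENCES Persons(PersonID),
    ArrivedAt TEXT,
    LeftAt TEXT,
    Zone TEXT
);
""")

# A default entry for unknown persons, as suggested in paragraph [0087].
conn.execute("INSERT INTO Persons (PersonID, Name) VALUES (0, 'unknown')")

def log_observation(visitor_id, arrived, left, zone):
    """Enter an observation of a person in the ActivityTimes log."""
    cur = conn.execute(
        "INSERT INTO ActivityTimes (VisitorID, ArrivedAt, LeftAt, Zone) "
        "VALUES (?, ?, ?, ?)",
        (visitor_id, arrived, left, zone),
    )
    return cur.lastrowid

entry_id = log_observation(0, "2018-11-01T10:00", "2018-11-01T10:05", "driveway")
```

As the description notes, the same information could equally well be kept in a non-relational store; this sketch only shows one possible structuring.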
[0090] As already mentioned, the database structure illustrated in FIG. 5 is only an example. In other embodiments information may be structured differently, additional information may be included, and in some embodiments some of the information discussed above may be excluded. Also, the database 404 does not have to be realized as a relational database. Those with skill in the art will realize that other ways of collecting and storing data are known in the art and may be utilized in conjunction with the present invention.
[0091] Turning now to FIG. 6 an example of how a system according to the invention may process sensor input and determine how to respond will be given. The description of this and the following example will not mention all steps of e.g. transferring data between modules, all details of the many rules a system may include, and all the details of the learning algorithms a system may use to update its own rules. Rather, emphasis is placed on the conceptual progress of methods that are in accordance with the invention and not on the details of the various features that in and of themselves are known in the art.
[0092] The process starts in a first step 601 when the system receives sensor input from one or more of the sensor devices among the several peripherals 202-207. This sensor input may in principle be anything the system is configured to act on, for example movement in a video image, interruption of a signal from a photoelectric sensor, a sound signal from a microphone or a microphone array, a signal from an IR or ultrasound detector, from a smoke detector, a magnetic switch attached to a door or a gate, etc.
[0093] In step 602 the sensor input is processed in order to classify the event that was detected by the sensor. Which events a system is able to detect and classify is dependent on the particular requirements of a given embodiment and it is not possible to give an exhaustive list of the types of signals that a system may receive and the many ways they may be classified. Some embodiments may even forego the classification and treat all events equally, in which case some of the following steps will be omitted and the method will proceed as if the event had been classified as the one type of event the system is configured to handle. A typical example would be embodiments where all sensor input is assumed to be the result of a person present on the property 100.
[0094] In the next step 603 it is determined whether to proceed based on successful classification of the event or not. If the event has not been successfully classified, the method proceeds to step 604 where additional sensor input is aggregated and used to improve the classification. This step may continue until it is determined in step 605 that the event has ended, i.e. that no more sensor input is received. When this happens, the event may be logged in step 606 and the system continues to monitor the property 100 until new sensor input is received indicating the beginning of a new event. It is not necessarily a requirement that unclassified events are logged, but as will be described in further detail below, a log of unclassifiable events may be used to update the system, either through manual updates or through machine learning, or both.
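The classify/aggregate loop of steps 601 through 606 can be sketched in simplified form as follows. The toy classifier and the sensor stream are stand-ins for the actual modules and are assumptions for illustration only:

```python
# Minimal sketch of steps 601-606: aggregate sensor input until the event
# can be classified, or until the event ends without classification.
def classify(samples):
    # Toy rule (an assumption): two or more motion samples indicate a person.
    if samples.count("motion") >= 2:
        return "person"
    return None  # not yet classifiable

def process_event(sensor_stream):
    samples = []
    for sample in sensor_stream:       # steps 601/604: receive and aggregate input
        samples.append(sample)
        event = classify(samples)      # step 602: attempt classification
        if event is not None:
            return event               # step 603: successfully classified
    return "unclassified"              # steps 605/606: event ended, log as unclassified
```

As noted in paragraph [0095], a real implementation would more likely interpret a continuous stream than iterate over discrete steps; the loop above merely illustrates the conceptual progression.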
[0095] It should be noted that while the flowchart in the drawing illustrates this process as distinct iterative steps, it does not have to be implemented as such in the various hardware and software modules that establish this functionality. The stream of information received from the sensors may be continuous, and artifacts or features detected in that stream may be continuously interpreted until an event can be classified.
[0096] If the event is successfully classified in step 602 or as a result of additional sensor input received in step 604, the process continues to step 607 where it is determined whether the event is a person detected on the premises. If this is not the case the system will not have to initiate interaction with such a person and the process continues to step 608 where it is determined whether there is a predefined rule for handling the event. Such a rule may, for example, be stored in the StandardRules table 510 in database 404 or in a local database in the local storage unit 301.
[0097] Examples of rules that are not associated with the detection of a person may be rules triggered by the detection of fire, of high winds that require securing of objects outdoors, etc.
[0098] If such rules are determined to exist in step 608 the process proceeds to step 609 where the system responds in accordance with the predefined rule. Such a response may be to send a notification, to open or close doors, gates or screens, etc. After the rule has been fulfilled in step 609, or if it is determined in step 608 that no rule is defined, the process may move to step 606 where the event is logged, including, if applicable, the results of the execution of the actions determined by the predefined rule.
[0099] If it is determined in step 607 that the event is the detection of a person on the property 100, the process may now proceed to step 610 where it attempts to obtain an image of the person's face. This process may continue to track the person in a manner similar to steps 603, 604 and 605, where the system continues to try to obtain an image until this can be done successfully or is overridden by some other rule, for example a time out, or the entry by the person into a zone with a higher alert level. This tracking in step 610 may continue while several images are transmitted to the face recognition module 303 such that step 610 and step 611 may operate in parallel, where step 610 continuously tries to obtain better data and step 611 works with the data already available. Step 610 may also involve interaction with the person, for example a request to look into a camera mounted next to a gate or a door.
[0100] It should be noted that the event may, of course, involve multiple persons, in which case the system will attempt to obtain images of all faces. The description of this example will, however, assume only one person for the sake of convenience. This does not result in any loss of generality and a skilled person will readily understand how the system can process several face images in parallel.
[0101] If and when an image of the person's face can be obtained in step 610, the process proceeds to step 611 where face recognition is performed on the obtained image. This process is performed by the face recognition module 303 which, as described above, may be a local module or a remote web service. In the latter case information is transmitted back and forth over the wide area network 405.
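In the remote web service case, the face recognition module 303 might wrap the round trip over the wide area network roughly as follows. The endpoint path, the JSON response format, and the `transport` abstraction are all assumptions made for this sketch, not a description of any actual service:

```python
import json

# Hedged sketch of a face recognition wrapper; the request/response
# format is an assumption, and the transport is injected so that the
# network round trip over wide area network 405 can be substituted.
def recognize_face(image_bytes, transport):
    """Send image data to a recognition service and return the
    identified person ID, or None if no match was found."""
    response = transport("POST", "/recognize", image_bytes)
    result = json.loads(response)
    return result.get("person_id")  # None when the face is unknown

# A fake transport standing in for the remote service, for illustration.
def fake_transport(method, path, body):
    return json.dumps({"person_id": 42 if body == b"known-face" else None})
```

Injecting the transport also makes clear that the same wrapper shape works whether the module is local or remote, as the description allows.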
[0102] After face recognition has been performed in step 611 the system will attempt to initiate interaction with the person. This part of the process will now be described with reference to FIG. 7.
[0103] The first step of the interaction process will be, in step 701, to determine whether the person was successfully identified. If it was not possible in step 610 to obtain a sufficiently good image of the person's face for identification, or if it was determined in step 611 that the person is a person who is not registered in the database 404, Persons table 501, the system may attempt to prompt the person for credentials in step 702. Credentials may, for example, be a code that has been provided to a person that has not been at the premises before, but that has a legitimate need to enter. This would typically be service personnel or craftsmen that have been requested to perform a job but that cannot be registered with name and photo in advance of their arrival.
[0104] It should be noted that in some embodiments the system may, dependent on alert level or overriding rules, go directly to step 713 and ask the person to vacate the premises if the person cannot be identified. This option is not illustrated in FIG. 7, but embodiments of the invention may include rules that make this decision regardless of which step of the process is currently being performed. It may, for example, be desirable to allow the attempts at classification or identification to continue for a predetermined time, but interrupt them when that time has expired, in order to ensure that the system does not end up in a loop of repeatedly trying to fulfill a rule unsuccessfully, which would prevent it from taking necessary action against an intruder.
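Such a time-out safeguard might be sketched as a deadline wrapper around the repeated identification attempts. The function and its names are illustrative assumptions, not part of the described embodiment:

```python
import time

# Sketch of a time-out override: identification attempts are abandoned
# once a predetermined deadline has passed, so the system never loops
# indefinitely before taking action against an intruder.
def attempt_with_deadline(attempt, deadline_seconds, clock=time.monotonic):
    start = clock()
    while clock() - start < deadline_seconds:
        result = attempt()
        if result is not None:
            return result              # identification succeeded in time
    return "ask_to_vacate"             # fall through to step 713 on expiry
```

The injected `clock` makes the deadline behavior testable without real waiting; a deployed system would simply use the default monotonic clock.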
[0105] If the person was identified by face recognition, as determined in step 701, or if the person is identified by credentials, as determined in step 703, the process will proceed to step 704 where it is determined if that person has access to the property or the zone of the property that he or she attempts to enter. Whether the person has access may depend on how the person is classified, i.e. which role in the Roles table 502 the person is associated with. However, it is consistent with the principles of the invention to assign access rules to individuals without going through a role definition.
[0106] It should be noted that as discussed above with reference to FIG. 5, the database may include definitions of persons that are unknown. Some such definitions, or database entries, may be for "virtual persons" that make it possible to associate access credentials with a role prior to a person's arrival. After such a person has been at the property once his or her face will be registered as an image in the database and an individual record may be established.
[0107] If it is determined in step 704 that the person has access the process proceeds to step 705 where the system may unlock or open a door or a gate by activating an actuator 210, if applicable. In some instances this will not be required, for example if the required access is only to an outdoor area 103-106 which does not require passing through any locked gate or door. Some embodiments may not implement any actuators, in which case step 705 is, of course, not performed.
[0108] In a following step 706 it is determined whether it is necessary to communicate the event, for example by sending a notification to an app 405 from the notification service 403, or by sending a text message, an email or any other type of message. If so, this is performed in step 707. If it is not necessary to send a notification, or after the notification has been sent, some embodiments of the invention will update the system's alert level in step 708. The update of the alert level may be to increase the alert level, for example if the person who was granted access is a person with only limited access rights, such as a relatively unknown service person, or it may be to decrease the alert level, if the person is a resident. Update of alert level is discussed in further detail below.
[0109] The steps 704 through 708 may all be determined based on one or more rules.
Subsequent to the execution of the actions determined by these rules the event, which was first detected by the person's approach as indicated by the sensor input in step 601, has ended and can be logged in step 709. The person may, of course, trigger new events while on the property. Some embodiments of the invention may log the continued presence of the person on the premises as a continuous event that is only logged when the person leaves. Other embodiments may consider the person leaving the property, or moving from zone to zone and then leaving the property, as separate events. These are implementation details that may determine how the relevant rules are written and how the database - in particular the ActivityTimes table 509 - is designed. These design choices all fall under the contemplated scope of the invention.
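The flow through steps 704 to 709 for an identified person can be sketched as a single rule-driven function. The rule table, the action names, and the simplified alert adjustment are all stand-ins chosen for illustration:

```python
# Illustrative sketch of steps 704-709; access_rules maps a person to the
# zones that person may enter, and the alert adjustment is a deliberately
# simplified stand-in for the real rule-based update in step 708.
def handle_identified_person(person, zone, access_rules, alert_level, log):
    actions = []
    if zone in access_rules.get(person, []):   # step 704: does the person have access?
        actions.append("open_gate")            # step 705: activate actuator, if applicable
        actions.append("notify")               # steps 706/707: communicate the event
        # step 708: residents lower the alert level, limited-access
        # visitors raise it (an assumed policy, not the actual rules)
        alert_level += -1 if person == "resident" else 1
    log.append((person, zone, actions))        # step 709: log the event
    return actions, alert_level
```

In a real embodiment each of these steps would itself be governed by one or more stored rules, as the preceding paragraph explains.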
[0110] If it is determined in step 703 that the person cannot be identified (identified here includes the determination of a valid access code that identifies the person as someone who is expected, but not necessarily known by name and face image), or if it is determined in step 704 that the person is identified but does not have access, the process proceeds to step 710 where the person is prompted to state the purpose of the visit. The exception may be if the system maintains a list of persons who are blacklisted, i.e. persons who are explicitly defined as not wanted on the premises, in which case it is determined in step 711 that the system should immediately request the person to leave.
[0111] The stated purpose for the person's presence on the property 100 may in principle be anything, since it may be received as spoken input to a microphone 203. Some embodiments may in addition allow a person to select a purpose, for example a person to visit, from a user interface 206 such as a touch screen.
[0112] If the received input is spoken, the speech signal is transmitted to the speech recognition module 302 and interpreted. The resulting interpretation is processed, in some embodiments locally by the local alarm unit 201 and in other embodiments by the web service 401, in order to determine whether the stated purpose is consistent with a predefined rule in step 712. If the stated purpose is not consistent with a predefined rule, i.e. if the system has no way of processing the received input further, it will ultimately determine that the person should be asked to vacate the premises in step 713. Otherwise, additional processing is performed in step 714 in accordance with one or more rules.
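The determination in step 712 amounts to matching the interpreted utterance against the predefined rules. The keyword lookup below is a deliberately simple stand-in for the output of the speech recognition module 302 and the real rule engine; the rule names are assumptions:

```python
# Sketch of step 712: decide whether the stated purpose is consistent
# with a predefined rule. The keyword table is an illustrative stand-in.
RULES = {
    "visit": "handle_visit_request",   # proceeds to step 714 processing
    "delivery": "handle_delivery",
}

def match_purpose(interpreted_text):
    for keyword, handler in RULES.items():
        if keyword in interpreted_text.lower():
            return handler             # stated purpose matches a predefined rule
    return "ask_to_vacate"             # step 713: no way to process the input further
```

A flexible embodiment, such as the tree-cutting example in paragraph [0115], would replace this lookup with richer natural language interpretation, but the branching structure remains the same.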
[0113] Steps 713 and 714 are sub-processes that may be performed in a number of different ways, depending on the embodiment, the configuration of the system, which rules have been defined, and other circumstances. An example will now be described with reference to FIG. 8. The steps illustrated in this drawing are prefixed with 713 and 714 indicating which of the two sub-processes they are part of. This is not intended to limit the invention to how these steps are implemented in the form of, for example, computer code instructions.
[0114] In a first step of sub-process 714, namely step 7141, the stated purpose of the visitor is interpreted as an intention to visit a named party, for example a person or a company. This purpose is consistent with one or more rules - or in other words, the system is capable of further processing of this request, so the process continues to step 7142.
[0115] It should be noted that this stated purpose is only one of many possibilities. Another alternative could be that the person, who may be unidentified, as determined in step 703, or identified, but determined not to have access, as determined in step 704, states that he has been hired to cut down a tree in the garden 106. A rule may have been defined specifying that such an assignment has been given and that a person stating such a purpose should be given access to the garden 106. Although some embodiments of the invention may not allow rules to be defined with this level of flexibility, other embodiments will.
[0116] Returning to the illustrated example, in step 7142 the system proceeds by examining one or more conditions specified by the rule, in this example a first such condition may be to verify that the named party is indeed a resident at the property 100. If this is the case, the rule may further specify that the resident should receive a notification, so the process may proceed to step 7143 where a notification is sent, for example to the resident's app 405 installed on his or her smartphone by way of web service 401 and notification service 403. The notification may include the identification of the person if the person was identified, or the person may be asked to state their name which may then be included in the notification along with a photo of the person. If it is subsequently determined in step 7144 that the resident has responded by transmitting an approval, the process may return to step 705 illustrated in FIG. 7 and proceed as described above.
[0117] If, however, it is determined in step 7142 that the named party is not a resident at the property 100, the process may proceed to step 7145 where the person is informed that the requested party is not a resident. The rule may also include an instruction to increase alert level at this point. Whether this is the case depends primarily on whether a particular embodiment actually allows alert level to be adjusted by rules that are triggered by events, and also on whether, if it does, a particular rule actually specifies conditions for increasing alert level.
[0118] After the person has been informed in step 7145 it is determined in step 7146 whether the person should be prompted to state another purpose, for example to ask for a different named party. The result of this determination may depend on alert level, which means that, as the alert level increases each time the process passes through step 7145, the person will only get a limited number of chances before being asked to leave. In embodiments where the rule does not increase alert level, there may instead be a counter or a time out function that determines when a person has been given enough chances to state a valid purpose before being asked to leave.
[0119] If the person is prompted to state a different purpose the process returns to step 7141 where the stated purpose is interpreted. Otherwise, if the person is asked to leave, the process proceeds to sub-process 713.
[0120] If it is determined in step 7144 that the resident that the person asked for refuses to give access the process may proceed directly to sub-process 713. This may typically be the case if the property includes only one house with only one residence, e.g. the home of a single family. However, if the property is an apartment building with multiple residences or an office building with several companies, the system (or the rule) may instead be configured to proceed to step 7146 in order to determine whether the person should be allowed to state a different purpose.
[0121] If it is determined, either in step 7144 or in step 7146 that the person should not be given access and the process proceeds to sub-process 713, this sub-process starts with step 7131 where a message is selected for asking the person to vacate the premises. The system may include a number of possible messages, and which message to use may be selected based on alert level, zone, and other factors. Such factors may be determined by rules which again may be associated with roles, as described above.
[0122] In step 7132 the message is given to the person by way of loudspeakers 205, and may also include a message displayed on a display, lights or flashing lights from spotlights or floodlights and sounds from a siren. If it is determined in step 7133 that the person does indeed leave the property 100, the process proceeds to step 7136 where, if applicable, the alert level may be reduced. The system then logs the event in step 7137 and returns to waiting for sensor input indicating the beginning of a new event.
[0123] If the person does not leave the premises, the process may proceed to step 7134 where the alert level is increased. Following this the event may be logged in step 7135 after which the process may return to step 7131 where a new message is selected, this time in accordance with the increased alert level. The new message may be delivered in a different tone of voice and may include stricter warnings, and more light and sound from floodlights and sirens.
[0124] As long as the person does not leave, the cycle is repeated and the alert level is increased until alarms are set off, possibly including transmissions of notifications or messages to users/residents as well as for example an alarm central or the police. The system may remain at a high alert level until the person finally leaves.
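The escalation cycle of steps 7131 through 7137 can be sketched as follows. The messages per alert level and the maximum level are illustrative assumptions:

```python
# Sketch of the escalation cycle in steps 7131-7137; the message table
# and maximum level are assumptions chosen for illustration.
MESSAGES = {1: "Please leave the property.",
            2: "You must leave now.",
            3: "Alarm will be raised."}

def escalate_until_gone(person_leaves, alert_level=1, max_level=3):
    delivered = []
    while True:
        msg = MESSAGES[min(alert_level, max_level)]  # step 7131: select message by alert level
        delivered.append(msg)                        # step 7132: deliver via loudspeakers etc.
        if person_leaves(alert_level):               # step 7133: did the person leave?
            alert_level = max(1, alert_level - 1)    # step 7136: reduce alert level
            return delivered, alert_level            # step 7137: log event and return
        alert_level += 1                             # step 7134: escalate and repeat
```

The `person_leaves` predicate stands in for renewed sensor input; in the real system the cycle would also trigger alarms and notifications as the level rises.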
[0125] It should be noted that the alert level described above can be implemented in many different ways. In some embodiments the alert level is a single value, for example stored as a single numerical variable, which dictates the state of the entire system. In other embodiments there is a separate alert level for each zone. However, it may not be desirable to require users to specify alert levels for each zone, so some embodiments have a first alert level that describes the overall state of the system as a whole, and derived alert levels that apply to individual zones. This may be used to reflect that the system is less alert with respect to persons walking up the driveway 103 than people entering the backyard 105 by climbing over a fence. In these embodiments the system may hold a table that specifies alert levels for each zone based on the global alert level and time of day.
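A table of derived per-zone alert levels might be sketched as below. The zone offsets and the day/night boundary are assumptions for illustration only:

```python
# Sketch of deriving per-zone alert levels from the global alert level
# and time of day; offsets and hours are illustrative assumptions.
ZONE_OFFSETS = {
    "driveway": {"day": -1, "night": 0},   # less alert on the driveway by day
    "backyard": {"day": 0, "night": 1},    # more alert in the backyard at night
}

def zone_alert_level(global_level, zone, hour):
    period = "day" if 6 <= hour < 22 else "night"
    offset = ZONE_OFFSETS.get(zone, {}).get(period, 0)
    return max(0, global_level + offset)
```

This realizes the idea that the same global state can translate into different vigilance for the driveway 103 and the backyard 105.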
[0126] As described above, in some embodiments the application of a rule may change the alert level. In some embodiments this change is a direct change of the global alert level or the alert level of one or more zones. In other embodiments, however, an additional value or state is associated with the rule itself. This may be a variable or a counter, or even a timer, associated with the rule and adjusted in accordance with the application of the rule, but it may also be implicitly defined simply by states or stages that the processing of the rule progresses through. In other words, if the rule requires branching to a different set of instructions based on certain input, this may be thought of as a change in alert level.
[0127] Consequently, when this description refers to alert level, this alert level may be anything from a single value, to a combination of a global value and values associated with individual zones, to a combination of a global value, a zone value and a rule-associated value or state.
[0128] Increasing (or decreasing) the alert level may then refer to increasing (or decreasing) any one of these values as well as changing to a state or to a stage in a sub-process where the system is more (or less) likely to reject a visitor or initiate an alarm.
[0129] Reference is now made to FIG. 9 which illustrates steps that may be performed regularly, after the logging of an event, or after a certain amount of information has been aggregated in a log.
[0130] In a first step 901 of this process existing rules are enhanced based on aggregated data. The aggregated data may be sensor data collected by the sensors during the progression of one or more events, for example images, as well as the various outcomes resulting from processing. In some embodiments machine learning is used to improve rules. A number of different approaches to machine learning are known in the art, and specific algorithms will not be discussed here as they will be well known to those with skill in the art. In embodiments where the system uses APIs to access online recognition services for face recognition and voice recognition, machine learning may be part of these services. In other words, machine learning may be used to improve one or more of the rules stored in the local storage unit 301 and/or the database 404, the speech recognition module 302 and the face recognition module 303. The learning algorithms used may be independent of each other.
[0131] While step 901 only enhances existing rules, a next step 902 generates new rules based on aggregated data. This may, for example, be the registration of a new person in the database 404, the association of that person with a role, and the creation of any new rules that can be derived from the aggregated data. Again, machine learning may be used to generate new rules, for example by classifying a sequence of events to a rule representing a class of events with a pre-defined outcome.
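The idea of deriving a candidate rule from aggregated log data can be illustrated with a simple frequency count. A real embodiment would use machine learning as described above; this sketch, with assumed field names and threshold, merely shows a rule being proposed from observations and left for user confirmation in step 903:

```python
from collections import Counter

# Hedged sketch of step 902: propose an "allow" rule for (person, zone)
# pairs that were repeatedly approved. The log entry fields and the
# support threshold are assumptions for illustration.
def suggest_rules(log_entries, min_support=3):
    approvals = Counter((e["person"], e["zone"])
                        for e in log_entries if e["outcome"] == "approved")
    return [{"person": p, "zone": z, "action": "allow"}
            for (p, z), count in approvals.items() if count >= min_support]
```

Any rule generated this way would remain a suggestion until refined or confirmed by an authorized user, consistent with step 903.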
[0132] In a third step 903 of this process the system receives user input to refine the results of the two preceding steps, to add additional information such as name and other data associated with a newly registered person, and to resolve conflicts that cannot be resolved based only on the aggregated data.
[0133] Those with skill in the art will realize that a number of modifications and alternatives to the details disclosed herein would be consistent with the principles of the invention. For example, in embodiments with alert levels that vary between zones and over different periods of the day, the alert level for a specific zone and time period combination could be defined explicitly, as application of defined minimum alert levels for all zones and/or time periods, or as a function of a global alert level for the premises and rules that apply to zones and time periods. As already mentioned, the alert level can be defined as one value or as a combination of values and/or states that relate to different parameters, for example to time period, zone, progress through a sequence of sensor data processing routines etc.

Claims (31)

1. A security system, comprising:
a plurality of sensors including at least one video camera;
at least one user interface;
a control unit configured to receive sensor input from said plurality of sensors, said control unit comprising:
a face recognition module;
local or distributed memory holding
a representation of an alert level,
a set of rules for how to respond to sensor input, and
a collection of data identifying known persons;
wherein said alert level can be adjusted by user input; and
said control unit is configured to:
receive input from said plurality of sensors, said input being indicative of the presence of a person;
use said face recognition module to identify a person based on input from said at least one video camera and image data stored as part of said collection of data identifying known persons; and
use output from the face recognition module and a current representation of said alert level as input to at least one rule from said set of rules to select a response to said sensor input, said response including at least one of:
an update of said representation of an alert level,
a message to be presented by said at least one user interface,
an acceptance of the presence of an identified person, and
an activation of an alarm.
2. A security system according to claim 1, wherein said message to be presented is a request for user input, the request including at least one of a request for a user identity, a user credential, and a purpose for presence to be entered via said user interface.
3. A security system according to claim 1 or 2, wherein said user interface includes a microphone, and said control unit further comprises a speech recognition module.
4. A security system according to one of claims 2 or 3, wherein said control unit is further configured to use input from said user interface, output from the face recognition module and a current representation of said alert level as input to at least one rule from said set of rules to select a response to said input from said user interface.
5. A security system according to one of the previous claims, wherein said representation of an alert level includes a variable adjustable by user input.
6. A security system according to one of the previous claims, wherein said representation of an alert level includes a variable adjustable by output from a rule selected from said set of rules.
7. A security system according to one of the previous claims, wherein said representation of an alert level includes an internal state of said control unit, said state being a result of a progress through a sequence of repeated application of one or more rules selected from said set of rules.
8. A security system according to one of the previous claims, wherein said face recognition module includes an application programming interface (API) enabling said control unit to transmit image data as part of a request to a remote face recognition service and receive a face recognition processing result as a response to said request.
9. A security system according to one of the claims 3 to 8, wherein said speech recognition module includes an application programming interface (API) allowing said control unit to transmit audio data as part of a request to a remote speech recognition service and receive a speech recognition processing result as a response to said request.
10. A security system according to one of the previous claims, further comprising a notification module configured to transmit a notification of a result of the outcome of the application of a rule to a recipient.
11. A security system according to one of the previous claims, wherein said control unit includes at least one module installed on a local device and at least one module installed on a remote server, wherein said local device and said remote server are in communication over a communication network.
12. A security system according to one of the previous claims, wherein said control unit is configured to forward at least one of a first video stream and a first audio stream received from said plurality of sensors to a second user interface and to forward at least one of a second audio stream and a second video stream from said second user interface to said at least one user interface.
13. A security system according to claim 12, wherein said second user interface is implemented as an app on a remote device.
14. A security system according to one of the previous claims, wherein sensor input from said plurality of sensors can be associated with one of a plurality of zones; and
said control unit is further configured to use an association with one of said plurality of zones as input to said at least one rule from said set of rules to select a response to said sensor input.
15. A security system according to one of the previous claims, further comprising:
one or more devices that are in communication with and configured to be activated by said control unit as a selected response to said sensor input, said one or more devices being selected from the group consisting of: loudspeakers, sirens, spotlights, floodlights, stroboscopic lights, and actuators.
16. A method in a security system, comprising:
receiving sensor input from at least one of a plurality of sensors including at least one video camera, said input being indicative of the presence of a person;
using a face recognition module to identify a person based on input from said at least one video camera and image data stored as part of a collection of data identifying known persons; and
using output from the face recognition module and a current representation of an alert level as input to at least one rule from a set of rules to select a response to said sensor input, said response including at least one of:
updating said representation of an alert level,
presenting a message by at least one user interface,
accepting the presence of an identified person, and
activating an alarm.
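The selection step in claim 16 can be sketched as a rule that combines the face recognition output with the current alert level to pick one of the listed responses. All names, thresholds, and the rule itself are illustrative assumptions, not the patent's rule set.

```python
# Sketch of the rule application in claim 16: face recognition output and
# the current alert level jointly select a response. Names and thresholds
# are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Recognition:
    known: bool       # matched the collection of known persons
    name: str = ""

def select_response(recognition: Recognition, alert_level: int) -> str:
    """Apply one minimal rule; return the selected response."""
    if recognition.known and alert_level == 0:
        return "accept"              # accept the identified person's presence
    if recognition.known:
        return "present_message"     # request identity/credential/purpose
    if alert_level >= 2:
        return "activate_alarm"
    return "update_alert_level"      # escalate the alert representation
```

A deployed rule set would typically be data-driven and configurable rather than hard-coded as above.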
17. A method according to claim 16, wherein said presenting of a message includes requesting user input representative of a user identity, a user credential, and a purpose for a person's presence to be entered via said user interface.
18. A method according to claim 16 or 17, wherein said at least one user interface includes a microphone, the method further comprising receiving a speech signal from said microphone and processing said speech signal by a speech recognition module.
19. A method according to claim 17 or 18, further comprising:
using input from said user interface, output from said face recognition module and a current representation of said alert level as input to at least one rule from said set of rules to select a response to said input from said user interface.
20. A method according to one of the claims 16 to 19, further comprising receiving user input to adjust said representation of an alert level.
21. A method according to one of the claims 16 to 20, wherein said update of said alert level includes adjusting the value of a variable based on output from said rule selected from said set of rules.
22. A method according to one of the claims 16 to 21, wherein said update of said alert level includes progressing through a sequence of repeated applications of one or more rules selected from said set of rules.
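The progression in claim 22 can be sketched as a bounded counter that each repeated rule application moves up or down. The range [0, 3] and step size of 1 are assumptions.

```python
# Sketch of claim 22: the alert level progresses through a sequence as a
# rule is applied repeatedly. Bounds and step size are assumptions.
def step_alert(level: int, suspicious: bool) -> int:
    """One rule application: escalate on suspicious input, else decay."""
    return min(level + 1, 3) if suspicious else max(level - 1, 0)

level = 0
for suspicious in [True, True, True, False]:
    level = step_alert(level, suspicious)
# level escalates 0 -> 1 -> 2 -> 3, then decays to 2
```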
23. A method according to one of the claims 16 to 22, wherein said using a face recognition module includes transmitting image data over an application programming interface (API) as part of a request to a remote face recognition service and receiving a face recognition processing result as a response to said request.
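The request/response exchange in claim 23 can be sketched with the Python standard library. The endpoint URL, content type, and JSON fields are assumptions for illustration, not the API of any real recognition service.

```python
# Sketch of claim 23: image data is sent as the body of an API request to a
# remote face recognition service, and the recognition result comes back as
# the response. Endpoint, headers, and response fields are assumptions.
import json
import urllib.request

def build_recognition_request(image_bytes: bytes,
                              endpoint: str) -> urllib.request.Request:
    """Build the HTTP request that carries the image data."""
    return urllib.request.Request(
        endpoint,
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )

def parse_recognition_result(body: bytes) -> dict:
    """Decode the JSON recognition result returned by the service."""
    return json.loads(body)

# Sending would be: urllib.request.urlopen(build_recognition_request(...))
```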
24. A method according to claim 18, wherein said processing said speech signal by a speech recognition module includes transmitting speech data over an application programming interface (API) as part of a request to a remote speech recognition service and receiving a speech recognition processing result as a response to said request.
25. A method according to one of the claims 16 to 24, further comprising transmitting a notification of the outcome of the application of a rule to a recipient.
26. A method according to one of the claims 16 to 25, wherein at least one step of said method is performed by a device installed locally and at least one step is performed by a remote server, said device installed locally and said remote server being in communication over a communication network.
27. A method according to one of the claims 16 to 26, further comprising forwarding at least one of a first video stream and a first audio stream received from said plurality of sensors to a second user interface and forwarding at least one of a second audio stream and a second video stream received from said second user interface to said at least one user interface.
28. A method according to claim 27, wherein said second user interface is implemented as an app on a remote device.
29. A method according to one of the claims 16 to 28, further comprising associating said received sensor input with one of a plurality of zones; and
using an association with one of said plurality of zones as input to said at least one rule from said set of rules to select a response to said sensor input.
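The zone handling in claim 29 (and claim 14) can be sketched as a lookup that feeds a zone's sensitivity into the rule selecting the response. Zone names, sensitivities, and the rule are assumptions.

```python
# Sketch of claim 29: the zone associated with the sensor input is used as
# rule input when selecting a response. Zones and thresholds are assumptions.
ZONE_SENSITIVITY = {"perimeter": 1, "entrance": 2, "interior": 3}

def zone_response(zone: str, person_known: bool) -> str:
    """Select a response using the zone association as rule input."""
    sensitivity = ZONE_SENSITIVITY.get(zone, 1)   # default: low sensitivity
    if person_known:
        return "accept"
    return "activate_alarm" if sensitivity >= 3 else "present_message"
```

An unknown person at the perimeter merely triggers a message, while the same detection in an interior zone raises the alarm directly.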
30. A method according to one of the claims 16 to 29, further comprising activating one or more devices as a selected response to said sensor input, said one or more devices being selected from the group consisting of: loudspeakers, sirens, spotlights, floodlights, stroboscopic lights, and actuators.
31. A computer program product installed on computer readable media and including instructions enabling a computing device to perform the steps of any one of the claims 16 to 30.
NO20171729A 2017-10-30 2017-10-30 A security system NO343993B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
NO20171729A NO343993B1 (en) 2017-10-30 2017-10-30 A security system
PCT/NO2018/050260 WO2019088845A1 (en) 2017-10-30 2018-10-30 Face recognition based access control system with user definable alert level

Publications (2)

Publication Number Publication Date
NO20171729A1 true NO20171729A1 (en) 2019-05-01
NO343993B1 NO343993B1 (en) 2019-08-12

Family

ID=64477246

Family Applications (1)

Application Number Title Priority Date Filing Date
NO20171729A NO343993B1 (en) 2017-10-30 2017-10-30 A security system

Country Status (2)

Country Link
NO (1) NO343993B1 (en)
WO (1) WO2019088845A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110517381A (zh) * 2019-08-20 2019-11-29 一石数字技术成都有限公司 Visitor management system and method based on a face registration mechanism
CN111160785A (en) * 2019-12-31 2020-05-15 北京明略软件***有限公司 Early warning deployment and control method and device, computer equipment and readable storage medium

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN110675585A (en) * 2019-09-23 2020-01-10 北京华毅东方展览有限公司 Exhibition safety control system
CN111126167B (en) * 2019-12-02 2023-08-04 武汉虹信技术服务有限责任公司 Method and system for quickly identifying serial activities of multiple specific personnel

Citations (4)

Publication number Priority date Publication date Assignee Title
US20150109112A1 (en) * 2012-09-21 2015-04-23 Google Inc. Occupant notification of visitor interaction with a doorbell at a smart-home
US9064394B1 (en) * 2011-06-22 2015-06-23 Alarm.Com Incorporated Virtual sensors
US20160189532A1 (en) * 2014-12-30 2016-06-30 Google Inc. Automatic security system mode selection
US20160196728A1 (en) * 2015-01-06 2016-07-07 Wipro Limited Method and system for detecting a security breach in an organization

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CA2359269A1 (en) * 2001-10-17 2003-04-17 Biodentity Systems Corporation Face imaging system for recordal and automated identity confirmation
EP2204783A1 (en) * 2008-12-31 2010-07-07 Thales Security system comprising sensors in a corridor for uncovering hazardous items
US9652915B2 (en) * 2014-02-28 2017-05-16 Honeywell International Inc. System and method having biometric identification intrusion and access control
US10305895B2 (en) * 2015-04-14 2019-05-28 Blubox Security, Inc. Multi-factor and multi-mode biometric physical access control device


Also Published As

Publication number Publication date
WO2019088845A1 (en) 2019-05-09
NO343993B1 (en) 2019-08-12

Similar Documents

Publication Publication Date Title
US10991236B2 (en) Detecting of patterns of activity based on identified presence detection
US10917618B2 (en) Providing status information for secondary devices with video footage from audio/video recording and communication devices
US10147308B2 (en) Method and system for consolidating events across sensors
US10991218B2 (en) Sharing video stream during an alarm event
WO2019088845A1 (en) Face recognition based access control system with user definable alert level
US11893795B2 (en) Interacting with visitors of a connected home environment
US20130329047A1 (en) Escort security surveillance system
JP2016524209A (en) Security and / or monitoring device and system
US10621838B2 (en) External video clip distribution with metadata from a smart-home environment
US10217350B2 (en) Adaptive exception handling in security system
US9799182B1 (en) Systems and methods for a smart door chime system
US9613514B2 (en) Systems and methods for providing a smart notifications system
US20180197399A1 (en) Adaptive exit arm times based on real time events and historical data in a home security system
JP7271799B2 (en) Determining User Presence and Absence Using WIFI Connections
US11393306B2 (en) Intruder detection method and apparatus
US20230316764A1 (en) Actuating plumbing systems based on surveillance data

Legal Events

Date Code Title Description
MM1K Lapsed by not paying the annual fees