US20170032182A1 - System for adaptive real-time facial recognition using fixed video and still cameras - Google Patents


Info

Publication number
US20170032182A1
US20170032182A1 (application US15/294,786)
Authority
US
United States
Prior art keywords
person
rules
facial characteristics
camera
database
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/294,786
Inventor
Krishna V Motukuri
Motilal Agrawal
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vcognition Technologies Inc
Original Assignee
Vcognition Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vcognition Technologies Inc filed Critical Vcognition Technologies Inc
Priority to US15/294,786 priority Critical patent/US20170032182A1/en
Publication of US20170032182A1 publication Critical patent/US20170032182A1/en
Assigned to VCOGNITION TECHNOLOGIES, INC reassignment VCOGNITION TECHNOLOGIES, INC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AGRAWAL, MOTILAL, MOTUKURI, KRISHNA
Abandoned legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G06K9/00288
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F25 REFRIGERATION OR COOLING; COMBINED HEATING AND REFRIGERATION SYSTEMS; HEAT PUMP SYSTEMS; MANUFACTURE OR STORAGE OF ICE; LIQUEFACTION SOLIDIFICATION OF GASES
    • F25D REFRIGERATORS; COLD ROOMS; ICE-BOXES; COOLING OR FREEZING APPARATUS NOT OTHERWISE PROVIDED FOR
    • F25D27/00 Lighting arrangements
    • F25D27/005 Lighting arrangements combined with control means
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F25 REFRIGERATION OR COOLING; COMBINED HEATING AND REFRIGERATION SYSTEMS; HEAT PUMP SYSTEMS; MANUFACTURE OR STORAGE OF ICE; LIQUEFACTION SOLIDIFICATION OF GASES
    • F25D REFRIGERATORS; COLD ROOMS; ICE-BOXES; COOLING OR FREEZING APPARATUS NOT OTHERWISE PROVIDED FOR
    • F25D29/00 Arrangement or mounting of control or safety devices
    • G06K9/00248
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00127 Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
    • H04N1/00204 Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a digital computer or a digital computer system, e.g. an internet server
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F25 REFRIGERATION OR COOLING; COMBINED HEATING AND REFRIGERATION SYSTEMS; HEAT PUMP SYSTEMS; MANUFACTURE OR STORAGE OF ICE; LIQUEFACTION SOLIDIFICATION OF GASES
    • F25D REFRIGERATORS; COLD ROOMS; ICE-BOXES; COOLING OR FREEZING APPARATUS NOT OTHERWISE PROVIDED FOR
    • F25D2500/00 Problems to be solved
    • F25D2500/06 Stock management
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/65 Control of camera operation in relation to power supply


Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Mechanical Engineering (AREA)
  • Thermal Sciences (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Cold Air Circulating Systems And Constructional Details In Refrigerators (AREA)
  • Collating Specific Patterns (AREA)

Abstract

A system for facial recognition, consisting of a global database of facial characteristics of known users, a server which accepts registration pictures taken on a user's camera device and accepts and processes training pictures and videos of other users to enable algorithm improvement, and a static camera which processes user pictures to determine whether the user is a known user, based on the global database and a local database associated with each static camera.

Description

    FIELD OF THE INVENTION
  • The present invention generally relates to the recognition of persons based on their facial characteristics.
  • Systems for processing persons based on their identity have several common characteristics. They include a centralized database of information about the person or persons who can be processed, and one or more cameras. For entry systems, these cameras will be in fixed locations, with different lighting between cameras and depending on the time of day, weather conditions, etc.
  • There are also ways to fool these systems. One example of that is to hold a picture of the person up to the camera. Another is to have a picture of the person on an article of clothing, such as a t-shirt.
  • What is needed is a system for determining the identity of persons which can rapidly identify persons in varying lighting conditions, while overcoming some possible ways of getting around the system.
  • SUMMARY
  • A system for facial recognition, consisting of a global database of facial characteristics of known users, a server which accepts registration pictures taken on a user's camera device and accepts and processes training pictures and videos of other users to enable algorithm improvement, and a static camera which processes user pictures to determine whether the user is a known user, based on the global database and a local database associated with each static camera.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows an embodiment of the physical system including that necessary to enroll, train, and detect a known person.
  • FIG. 2 shows the details of one embodiment of the components of the system necessary to enroll, train and detect a known person.
  • FIG. 3 shows one or more embodiments of the details of the process needed to detect a known person.
  • FIG. 4 shows one or more embodiments of the details needed to train the system.
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • FIG. 1 shows one embodiment of the system. A computer 104 is configured to accept pictures from one or more enrollment camera devices 102, such as cell phones, tablets or computers. The computer will determine facial characteristics of a known person in the picture, and determine the quality of the information. In one or more embodiments, the computer 104 will interact with the user via the enrollment camera device 102, prompting the user for identification information and verifying that the pictures contain adequate information by making sure that only one person is visible in the picture, and that there are adequate views of the person. In one or more embodiments, an action module 114 is called by the computer 104 based on the identification of the known person and location of the remote camera 110. For instance, if the remote camera 110 is over a door and the person identified is known by the system to have access to that door, one embodiment of the action module 114 will be to trigger a request to unlock that door. In one or more embodiments, the action module 114 is associated with a database which defines the attributes of the action. For instance, a person could only have access to certain doors, and then only during certain time periods.
  • A known person is a person of interest to the system, such as a person who needs access to a facility or is a member of a group of interest, for whom the system will notify one or more action modules to react. An other person is a person not of interest but known to the system, whose facial characteristics can be compared to those of different persons (other or known) to improve the identification rules. In one such embodiment, this could be implemented using a defined set of rules. For example, a rule might state that if a person appears at a door during a specific time period, the door is unlocked; otherwise, the appearance is logged and ignored.
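To make the door-unlock rule concrete, here is a minimal sketch of how an action module might consult per-person, per-door time windows. The `ACCESS_RULES` table and `on_person_identified` function are invented for illustration; the patent does not prescribe a rule schema.

```python
from datetime import datetime, time
import logging

# Hypothetical rule table: person_id -> list of (door_id, start, end).
ACCESS_RULES = {
    "alice": [("front_door", time(8, 0), time(18, 0))],
}

def on_person_identified(person_id, door_id, now=None):
    """Return the action to take for an identified person at a given door."""
    now = now or datetime.now()
    for rule_door, start, end in ACCESS_RULES.get(person_id, []):
        if rule_door == door_id and start <= now.time() <= end:
            return "unlock"                 # trigger the unlock request
    logging.info("appearance logged: %s at %s", person_id, door_id)
    return "ignore"                         # logged and ignored, per the rule
```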
  • FIG. 2 shows one embodiment of the system with its software modules. The enrollment manager 202 accepts picture data and known person information from the enrollment camera 204. In one or more embodiments, the enrollment manager 202 evaluates the quality of the picture data on several levels. First, is there a single identifiable face in the pictures, or is it difficult to tell whether there is more than one person in the picture? Second, is the picture quality such that facial characteristics are difficult to recognize? Third, are there multiple useful views of the face (direct, left profile, right profile) such that adequately detailed facial characteristic data can be extracted? In one or more embodiments, the features detected in different views are weighted by importance, so that values associated with face-forward features get more weight than values associated with profile views, but the profile views still help to improve the calculations.
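As a rough illustration of the first two quality checks, the sketch below uses OpenCV to enforce the single-face and picture-detail conditions. The Haar-cascade detector and the `MIN_BLUR` threshold are assumptions for the sketch, not values from the patent.

```python
import cv2

MIN_BLUR = 100.0   # Laplacian variance below this counts as too blurry (illustrative)

_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def enrollment_checks(image_bgr):
    """Return (ok, reason) for a single enrollment picture."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = _detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) != 1:                         # check 1: exactly one face
        return False, f"expected one face, found {len(faces)}"
    x, y, w, h = faces[0]
    blur = cv2.Laplacian(gray[y:y+h, x:x+w], cv2.CV_64F).var()
    if blur < MIN_BLUR:                         # check 2: enough facial detail
        return False, f"face region too blurry ({blur:.1f})"
    return True, "ok"
```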
  • In one or more embodiments, the software used to define the rules for finding known persons from the facial features uses an algorithm based on a convolutional neural network to discover the relationships between facial features and known persons. A convolutional neural network is a type of neural network inspired by how the animal visual cortex works; such networks have a wide range of applications in image and video processing (see Wikipedia, https://en.wikipedia.org/wiki/Convolutional_neural_network).
  • If adequate information can be extracted by the enrollment manager 202, then the facial characteristics and known person information are saved in the universal database 208. The universal database 208 contains location-independent information about a known person. In one or more embodiments, the universal rules module 216 compares the information of multiple known persons to create a set of rules to differentiate them. For instance, if there were only two known persons, and one had a ratio of distance between the eyes to distance between the ears of 0.7 and the other of 0.6, one rule might be that if the ratio is less than 0.65 it is one known person, and if greater than 0.65 it is the other. In one or more embodiments, the ratios and values, which vary based on the angle of view, have a mean and variance, so that one has a probability of whether it is one known person or another.
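A minimal sketch of the mean-and-variance version of this rule follows, assuming a Gaussian model per person and a uniform prior; the distribution choice, variances, and function names are illustrative, with only the 0.7 and 0.6 means taken from the example above.

```python
import math

def gaussian_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Hypothetical per-person ratio statistics (eye distance / ear distance).
known_persons = {
    "person_a": {"mean": 0.7, "var": 0.001},
    "person_b": {"mean": 0.6, "var": 0.001},
}

def classify_by_ratio(ratio):
    """Return (best_person, probability) for an observed eye/ear ratio."""
    scores = {name: gaussian_pdf(ratio, p["mean"], p["var"])
              for name, p in known_persons.items()}
    total = sum(scores.values())
    best = max(scores, key=scores.get)
    return best, scores[best] / total   # normalized posterior, uniform prior

# e.g. classify_by_ratio(0.63) -> ("person_b", ~0.88)
```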
  • At various fixed locations, the system interacts with one or more deployment cameras 214. Each deployment camera 214 is pointed at a background that will vary during the day in terms of light and objects. For instance, one location may be the entrance of a building that gets natural light during the day and one or more floodlights at night. Another may be in a hallway where different carts, pictures or other objects are stored against the opposite wall. In one or more embodiments, the placement of the deployment camera 214 is such that the expected size and distance of a person can be regulated. For instance, by placing the deployment camera above the height of an average person, then directing the person to stand some specified distance from the camera, the size of the face will not vary much between a tall and a short person.
  • Associated with each deployment camera 214 is a deployment camera manager module 210. The deployment camera manager module 210 accepts picture data from the deployment camera 214 and removes the background information. This is done by recording the background on a periodic basis so that it can be subtracted from the image, leaving the object that has entered the foreground. In one or more embodiments, the deployment camera manager module 210 adjusts the identification algorithm for changes in lighting. For instance, features may appear thicker in darker lighting and thinner in lighter conditions. The deployment camera manager module 210 identifies the face from the facial characteristics, then aligns the face to get a standard size face to work with.
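A sketch of this background-removal and normalization step is below, substituting OpenCV's MOG2 running background model for the patent's periodic background recording; the subtractor choice and the 160x160 "standard size face" are assumptions.

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
STANDARD_SIZE = (160, 160)   # illustrative standard face size

def extract_foreground_face(frame_bgr, face_detector):
    """Remove the learned background, then crop and rescale the largest face."""
    mask = subtractor.apply(frame_bgr)            # update/subtract background
    fg = cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)
    gray = cv2.cvtColor(fg, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detection
    return cv2.resize(frame_bgr[y:y+h, x:x+w], STANDARD_SIZE)
```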
  • Using the universal rules module 216, identification of a known person is attempted. In one or more embodiments, a fixed probability threshold is used to determine whether or not this is a known person. If that threshold is not exceeded, more pictures are accepted until either some lower threshold is reached or some time limit is exceeded. In one or more embodiments, this lower threshold will be a function of the number of acceptable-quality pictures. For instance, there could be a fixed threshold of 95%, a threshold of 90% after 5 pictures or fewer, 90% after 10 pictures, and a timeout of 5 seconds.
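A sketch of this acceptance schedule, using the example figures from the text (95% fixed threshold, 90% once several pictures have accumulated, five-second timeout); the loop structure and function names are assumptions.

```python
import time

def identify_with_schedule(get_next_score, timeout_s=5.0):
    """get_next_score() returns the current best match probability in [0, 1]."""
    start = time.monotonic()
    n_pictures = 0
    while time.monotonic() - start < timeout_s:
        score = get_next_score()
        n_pictures += 1
        if score >= 0.95:                        # fixed threshold
            return True, score
        if n_pictures >= 5 and score >= 0.90:    # relaxed threshold, more pictures
            return True, score
    return False, 0.0                            # timed out: not a known person
```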
  • In one or more embodiments, a per camera adapted database 212 is maintained for each deployment camera 214. In one or more embodiments, the per camera adapted database 212 is maintained over some period of time or some number of known users. Based on the picture quality and the facial data, this can be used to update rules stored in both the universal database 208 and the per camera adapted database 212.
  • In one or more embodiments, there are two ways to update the universal database from the per camera adapted database. First, calculate an image quality value for each picture based on contrast and angle of view (profile vs. face forward), then add high-quality images to the universal database and associate them with the known person, effectively re-enrolling him.
  • Second, one can examine a group of persons from a specific deployment camera and, using convolutional neural networks or similar techniques, learn a mapping from the facial features detected by a specific deployment camera to the known person.
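A hypothetical version of the quality scoring behind the first update path is sketched below. Measuring contrast as grayscale standard deviation, taking frontalness from an assumed yaw estimate, and the `add_image` database interface are all illustrative choices, not the patent's method.

```python
import cv2
import numpy as np

def image_quality(face_bgr, yaw_degrees):
    """Score a face crop on contrast and how face-forward it is (0..1)."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    contrast = float(np.std(gray)) / 128.0               # higher is better
    frontalness = max(0.0, 1.0 - abs(yaw_degrees) / 90.0)  # 1.0 = face forward
    return 0.5 * contrast + 0.5 * frontalness            # illustrative weighting

def maybe_reenroll(face_bgr, yaw_degrees, universal_db, person_id, threshold=0.7):
    """Promote high-quality deployment pictures into the universal database."""
    if image_quality(face_bgr, yaw_degrees) >= threshold:
        universal_db.add_image(person_id, face_bgr)      # assumed DB interface
```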
  • To prevent a user from being accepted by displaying a picture in front of a camera, there are several processing steps that can be used. In one or more embodiments, the deployment camera manager module 210 can look for multiple views of the user, either by examining multiple input frames or requesting that the user turn his head. In other embodiments, the head size can be assumed to be within a certain range such that if it is smaller or larger, it is assumed to be invalid (e.g., a picture being held up to the camera).
  • In one or more embodiments, the deployment camera manager module interacts with a display that shows a short phrase for the user to read, and examines the movement of the lips using a lip-reading module associated with the deployment camera module. In one or more embodiments, the lip-reading module tracks the location of landmarks on the lower and upper lip using standard motion-based tracking. If the lip motion appears to be in line with utterances of that phrase within a determined probability score, then it is assumed to be a real person.
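A very rough sketch of such a liveness check follows, reducing lip reading to counting mouth-opening events against the phrase's syllable count. Real lip reading is far more involved, and the per-frame landmark distances are assumed to come from a separate tracker (e.g. dlib or MediaPipe).

```python
import numpy as np

def looks_alive(lip_gaps, n_syllables, tolerance=2):
    """lip_gaps: per-frame distance between upper- and lower-lip landmarks."""
    centered = np.asarray(lip_gaps, dtype=float)
    centered -= centered.mean()
    # Upward zero crossings approximate mouth-opening events.
    openings = int(np.sum((centered[:-1] <= 0) & (centered[1:] > 0)))
    return abs(openings - n_syllables) <= tolerance
```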
  • FIG. 3 shows one or more embodiments of the process of accepting new picture data. When the picture data is received, a background filter 302 is used to remove the background information from the person image. The person image is scanned to attempt feature recognition 304. In one or more embodiments, the features are detected and an alignment process is performed using the universal database 106 and the per camera adapted database 108 to detect which known person(s) it might be. In one or more embodiments, the alignment is done using convolutional neural network techniques to align and center a face. The rules associated with known persons can then be applied to calculate a match score. Based on the calculated match score 306, the result is either rejected or an action is executed 308. In one or more embodiments, the match score is calculated by computing the distance between aligned features of an image from the new picture data and those in the universal database or per camera adapted database. The distance can be computed using either the dot product or the Euclidean distance between features. Finally, the database is updated through a process called re-enroll 310, where the new information is compared against a random subset of other known persons to improve the matching algorithms.
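A sketch of the match-score step using the two distance options the text names (dot product on normalized vectors, or Euclidean distance mapped to a score); the 0.8 threshold and the database layout are assumptions.

```python
import numpy as np

def match_score(query, enrolled, metric="cosine"):
    """Compare two feature vectors; 1.0 means a perfect match."""
    q = query / np.linalg.norm(query)
    e = enrolled / np.linalg.norm(enrolled)
    if metric == "cosine":
        return float(np.dot(q, e))                       # dot-product option
    return 1.0 / (1.0 + float(np.linalg.norm(q - e)))    # Euclidean option

def best_match(query, db, threshold=0.8):
    """db: mapping person_id -> list of enrolled feature vectors."""
    scored = ((pid, max(match_score(query, v) for v in vecs))
              for pid, vecs in db.items())
    pid, score = max(scored, key=lambda t: t[1])
    return (pid, score) if score >= threshold else (None, score)
```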
  • In one or more embodiments, the rules are further improved using training data from various sources, such as a set of photos of a known person, TED™ talks or YouTube™ videos. The training manager module 206 accepts a video or picture input. A user interacts with the training manager to identify the person of interest and provide some identifying information for him. In one or more embodiments, the training manager module will track that person through the video or set of pictures, extracting in some cases hundreds of usable pictures of that person and identifying facial features. The data so extracted can be used against the universal database to further refine rules for finding known persons. FIG. 4 shows one embodiment of the process of using training data. A video is selected by a user and presented to the training manager 402. The training manager module 206 interacts with the user to select a face of interest 404, after which the training manager module 206 can track the person as he or she moves through the video and acquire as many usable images of the face as possible. These images are then processed to recognize features 406, and the resulting data is compared to random sets of users to improve the rules 408.

Claims (11)

What is claimed is:
1) A system for facial recognition, the system comprising:
a universal database, consisting of the facial characteristics of known persons,
a computer,
an enrollment manager module, coupled to the computer, configured to accept registration pictures and known person information from an enrollment camera, detect facial characteristics, align the facial characteristics for a standard size face, associate the facial characteristics with the known person information and save those facial characteristics in the universal database,
a universal rules database, coupled to the computer,
a universal rules module, coupled to the computer, the universal rules database, and the universal database, the universal rules module configured to accept data from the facial characteristics of known persons and calculate rules to enable the differentiation of known persons from one another, and
a deployment camera, coupled to the computer, configured to receive picture data.
2) The system in claim 1, further comprising:
a per camera adapted database, coupled to the computer and deployment camera manager, configured to save facial recognition data associated with the deployment camera, and
a camera rules module, coupled to the per camera adapted database, computer and universal rules database, configured to accept picture data, determine facial characteristics from the picture data, and send updates to the universal rules database.
3) The system in claim 1, further comprising:
a database of facial characteristics of other persons, and
a training module, coupled to the computer and other person database, configured to accept picture and video data and selection of an other person in the picture and video data, determine the facial characteristics of the other person and save that information in the other person database,
where the training module compares one or more other person data to known person data, updates rules and stores the updated rules in the global rules database.
4) The system in claim 1, further comprising:
an action module, coupled to the computer, which will perform a function on an external module based on the location of the remote camera and the identity of the known person.
5) A process for recognizing a person from his facial characteristics based on a set of pre-defined rules, the process comprising:
accepting picture data from a camera,
removing the static background from the picture data,
determining the facial characteristics from the picture data,
calculating a match score from the pre-defined rules,
identifying the person based on the match score, and
executing an action based on the location of the camera and the identity of the person.
6) The process in claim 5, calculating a match score further comprising:
determining if the match score exceeds a maximum threshold,
accepting more picture data to process.
7) The process in claim 5, executing an action further comprising:
updating the pre-defined rules using the picture data.
8) The process in claim 5, executing an action further comprising:
associating the picture data with the camera and the person, and
defining a set of rules associated with the camera and the person.
9) A process for generating rules for recognizing a person from facial characteristics, the process comprising:
accepting first picture data from a first user,
accepting second picture data from a second user,
detecting first facial characteristics from the first picture data,
detecting second facial characteristics from the second picture data,
comparing the first facial characteristics and the second facial characteristics, and
calculating a set of rules to differentiate the first user from the second user.
10) The process in claim 9, further comprising:
accepting third picture data from a first outside user,
detecting third facial characteristics from the third picture data,
comparing the third facial characteristics to the first facial characteristics and second facial characteristics, and
updating the rules for the first user to differentiate the first user from the second user and third user.
11) The process in claim 9, where the first picture data comprises multiple views of the face, including a head on, right profile, and left profile.
US15/294,786 2015-03-26 2016-10-17 System for adaptive real-time facial recognition using fixed video and still cameras Abandoned US20170032182A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/294,786 US20170032182A1 (en) 2015-03-26 2016-10-17 System for adaptive real-time facial recognition using fixed video and still cameras

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201562138391P 2015-03-26 2015-03-26
US15/294,786 US20170032182A1 (en) 2015-03-26 2016-10-17 System for adaptive real-time facial recognition using fixed video and still cameras

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US62138391 Continuation 2015-03-26

Publications (1)

Publication Number Publication Date
US20170032182A1 true US20170032182A1 (en) 2017-02-02

Family

ID=56975076

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/075,186 Active 2036-02-23 US10089520B2 (en) 2015-03-26 2016-03-20 System for displaying the contents of a refrigerator
US15/294,786 Abandoned US20170032182A1 (en) 2015-03-26 2016-10-17 System for adaptive real-time facial recognition using fixed video and still cameras

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/075,186 Active 2036-02-23 US10089520B2 (en) 2015-03-26 2016-03-20 System for displaying the contents of a refrigerator

Country Status (1)

Country Link
US (2) US10089520B2 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190041197A1 (en) * 2017-08-01 2019-02-07 Apple Inc. Determining sparse versus dense pattern illumination
CN109344841A (en) * 2018-08-10 2019-02-15 北京华捷艾米科技有限公司 A kind of clothes recognition methods and device
CN110309768A (en) * 2019-06-28 2019-10-08 上海眼控科技股份有限公司 The staff's detection method and equipment of car test station
US10615994B2 (en) 2016-07-09 2020-04-07 Grabango Co. Visually automated interface integration
US10614514B2 (en) 2016-05-09 2020-04-07 Grabango Co. Computer vision system and method for automatic checkout
US10721418B2 (en) 2017-05-10 2020-07-21 Grabango Co. Tilt-shift correction for camera arrays
US10740742B2 (en) 2017-06-21 2020-08-11 Grabango Co. Linked observed human activity on video to a user account
US10963704B2 (en) 2017-10-16 2021-03-30 Grabango Co. Multiple-factor verification for vision-based systems
US11132737B2 (en) 2017-02-10 2021-09-28 Grabango Co. Dynamic customer checkout experience within an automated shopping environment
US11226688B1 (en) 2017-09-14 2022-01-18 Grabango Co. System and method for human gesture processing from video input
US11288648B2 (en) 2018-10-29 2022-03-29 Grabango Co. Commerce automation for a fueling station
US11481805B2 (en) 2018-01-03 2022-10-25 Grabango Co. Marketing and couponing in a retail environment using computer vision
US11507933B2 (en) 2019-03-01 2022-11-22 Grabango Co. Cashier interface for linking customers to virtual data

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10074224B2 (en) 2015-04-20 2018-09-11 Gate Labs Inc. Access management system
US10027866B2 (en) * 2015-08-05 2018-07-17 Whirlpool Corporation Refrigerators having internal content cameras, and methods of operating the same
US9822553B1 (en) 2016-11-23 2017-11-21 Gate Labs Inc. Door tracking system and method
DE102017213425A1 (en) * 2017-08-02 2019-02-07 BSH Hausgeräte GmbH Sensor device for a household refrigerator
US11763252B2 (en) 2017-08-10 2023-09-19 Cooler Screens Inc. Intelligent marketing and advertising platform
US11698219B2 (en) 2017-08-10 2023-07-11 Cooler Screens Inc. Smart movable closure system for cooling cabinet
US11768030B2 (en) 2017-08-10 2023-09-26 Cooler Screens Inc. Smart movable closure system for cooling cabinet
US11482215B2 (en) * 2019-03-27 2022-10-25 Samsung Electronics Co., Ltd. Multi-modal interaction with intelligent assistants in voice command devices
DE102019128366A1 (en) * 2019-08-16 2021-02-18 Liebherr-Hausgeräte Ochsenhausen GmbH Fridge and / or freezer
US10785456B1 (en) * 2019-09-25 2020-09-22 Haier Us Appliance Solutions, Inc. Methods for viewing and tracking stored items
US20240068742A1 (en) * 2022-08-24 2024-02-29 Haier Us Appliance Solutions, Inc. Gasket leak detection in a refrigerator appliance

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5412738A (en) * 1992-08-11 1995-05-02 Istituto Trentino Di Cultura Recognition system, particularly for recognising people
US20030103652A1 (en) * 2001-12-05 2003-06-05 Kyunghee Lee System for registering and authenticating human face using support vector machines and method thereof
US20060082439A1 (en) * 2003-09-05 2006-04-20 Bazakos Michael E Distributed stand-off ID verification compatible with multiple face recognition systems (FRS)
US20090185723A1 (en) * 2008-01-21 2009-07-23 Andrew Frederick Kurtz Enabling persistent recognition of individuals in images
US20110182482A1 (en) * 2010-01-27 2011-07-28 Winters Dustin L Method of person identification using social connections
US20130182918A1 (en) * 2011-12-09 2013-07-18 Viewdle Inc. 3d image estimation for 2d image recognition
US8542879B1 (en) * 2012-06-26 2013-09-24 Google Inc. Facial recognition
US20140152836A1 (en) * 2012-11-30 2014-06-05 Stephen Jeffrey Morris Tracking people and objects using multiple live and recorded surveillance camera video feeds
US20150248798A1 (en) * 2014-02-28 2015-09-03 Honeywell International Inc. System and method having biometric identification intrusion and access control
US20160132720A1 (en) * 2014-11-07 2016-05-12 Noblis, Inc. Vector-based face recognition algorithm and image search system

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6459919B1 (en) * 1997-08-26 2002-10-01 Color Kinetics, Incorporated Precision illumination methods and systems
SE522000C2 (en) * 2000-08-18 2004-01-07 Rutger Roseen Method and apparatus for keeping track of the shelf life of goods stored in a room
JP2002156181A (en) * 2000-11-16 2002-05-31 Yozan Inc Refrigerator
US7903838B2 (en) * 2004-01-30 2011-03-08 Evolution Robotics, Inc. Vision-enabled household appliances
WO2006001237A1 (en) * 2004-06-25 2006-01-05 Nec Corporation Article position management system, article position management method, terminal device, server, and article position management program
US8441534B2 (en) * 2005-04-29 2013-05-14 Nxp B.V. Electronic article surveillance system
CN101688912A (en) * 2007-06-14 2010-03-31 皇家飞利浦电子股份有限公司 Object localization method, system, label and user interface facilities
US20100170289A1 (en) * 2009-01-06 2010-07-08 Sony Corporation System and Method for Seamless Imaging of Appliance Interiors
KR101517083B1 (en) * 2009-05-11 2015-05-15 엘지전자 주식회사 A Portable terminal controlling refrigerator and operation method for the same
US8690273B2 (en) * 2010-03-26 2014-04-08 Whirlpool Corporation Method and apparatus for routing utilities in a refrigerator
EP2386229A1 (en) * 2010-05-10 2011-11-16 Jura Elektroapparate AG Milk cooler, drink preparation machine, combination of a milk cooler and a drink preparation device and method for obtaining an amount of milk
US8756942B2 (en) * 2010-07-29 2014-06-24 Lg Electronics Inc. Refrigerator and method for controlling the same
US9033006B2 (en) * 2010-09-17 2015-05-19 Nicholas J. Perazzo Oral syringe packaging system for hospital pharmacies
US8912905B2 (en) * 2011-02-28 2014-12-16 Chon Meng Wong LED lighting system
KR20120116207A (en) * 2011-04-12 2012-10-22 엘지전자 주식회사 A display device and a refrigerator comprising the display device
US8935938B2 (en) * 2011-05-25 2015-01-20 General Electric Company Water filter with monitoring device and refrigeration appliance including same
KR101189209B1 (en) * 2011-10-06 2012-10-09 늘솜주식회사 Position recognizing apparatus and methed therefor
US20140309863A1 (en) * 2013-04-15 2014-10-16 Flextronics Ap, Llc Parental control over vehicle features and child alert system
JP6167566B2 (en) * 2012-05-30 2017-07-26 株式会社リコー COMMUNICATION DEVICE, POSITION INFORMATION MANAGEMENT SYSTEM, AND POSITION INFORMATION MANAGEMENT METHOD
US9092896B2 (en) * 2012-08-07 2015-07-28 Microsoft Technology Licensing, Llc Augmented reality display of scene behind surface
US9412086B2 (en) * 2013-03-07 2016-08-09 Bradd A. Morse Apparatus and method for customized product data management
JP6498866B2 (en) * 2013-03-12 2019-04-10 東芝ライフスタイル株式会社 Refrigerator, camera device
WO2014168265A1 (en) * 2013-04-10 2014-10-16 엘지전자 주식회사 Method for managing storage product in refrigerator using image recognition, and refrigerator for same
KR102024594B1 (en) * 2013-04-18 2019-09-24 엘지전자 주식회사 Refrigerator and low temperature storage apparatus
KR102024595B1 (en) * 2013-04-25 2019-09-24 엘지전자 주식회사 Refrigerator and control method of the same

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5412738A (en) * 1992-08-11 1995-05-02 Istituto Trentino Di Cultura Recognition system, particularly for recognising people
US20030103652A1 (en) * 2001-12-05 2003-06-05 Kyunghee Lee System for registering and authenticating human face using support vector machines and method thereof
US20060082439A1 (en) * 2003-09-05 2006-04-20 Bazakos Michael E Distributed stand-off ID verification compatible with multiple face recognition systems (FRS)
US20090185723A1 (en) * 2008-01-21 2009-07-23 Andrew Frederick Kurtz Enabling persistent recognition of individuals in images
US20110182482A1 (en) * 2010-01-27 2011-07-28 Winters Dustin L Method of person identification using social connections
US20130182918A1 (en) * 2011-12-09 2013-07-18 Viewdle Inc. 3d image estimation for 2d image recognition
US8542879B1 (en) * 2012-06-26 2013-09-24 Google Inc. Facial recognition
US20140152836A1 (en) * 2012-11-30 2014-06-05 Stephen Jeffrey Morris Tracking people and objects using multiple live and recorded surveillance camera video feeds
US20150248798A1 (en) * 2014-02-28 2015-09-03 Honeywell International Inc. System and method having biometric identification intrusion and access control
US20160132720A1 (en) * 2014-11-07 2016-05-12 Noblis, Inc. Vector-based face recognition algorithm and image search system

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10861086B2 (en) 2016-05-09 2020-12-08 Grabango Co. Computer vision system and method for automatic checkout
US11727479B2 (en) 2016-05-09 2023-08-15 Grabango Co. Computer vision system and method for automatic checkout
US10614514B2 (en) 2016-05-09 2020-04-07 Grabango Co. Computer vision system and method for automatic checkout
US11216868B2 (en) 2016-05-09 2022-01-04 Grabango Co. Computer vision system and method for automatic checkout
US11302116B2 (en) 2016-07-09 2022-04-12 Grabango Co. Device interface extraction
US11295552B2 (en) 2016-07-09 2022-04-05 Grabango Co. Mobile user interface extraction
US10615994B2 (en) 2016-07-09 2020-04-07 Grabango Co. Visually automated interface integration
US10659247B2 (en) 2016-07-09 2020-05-19 Grabango Co. Computer vision for ambient data acquisition
US11095470B2 (en) 2016-07-09 2021-08-17 Grabango Co. Remote state following devices
US11132737B2 (en) 2017-02-10 2021-09-28 Grabango Co. Dynamic customer checkout experience within an automated shopping environment
US11847689B2 (en) 2017-02-10 2023-12-19 Grabango Co. Dynamic customer checkout experience within an automated shopping environment
US10721418B2 (en) 2017-05-10 2020-07-21 Grabango Co. Tilt-shift correction for camera arrays
US10778906B2 (en) 2017-05-10 2020-09-15 Grabango Co. Series-configured camera array for efficient deployment
US11805327B2 (en) 2017-05-10 2023-10-31 Grabango Co. Serially connected camera rail
US11288650B2 (en) 2017-06-21 2022-03-29 Grabango Co. Linking computer vision interactions with a computer kiosk
US10740742B2 (en) 2017-06-21 2020-08-11 Grabango Co. Linked observed human activity on video to a user account
US11748465B2 (en) 2017-06-21 2023-09-05 Grabango Co. Synchronizing computer vision interactions with a computer kiosk
US20190041197A1 (en) * 2017-08-01 2019-02-07 Apple Inc. Determining sparse versus dense pattern illumination
US10401158B2 (en) * 2017-08-01 2019-09-03 Apple Inc. Determining sparse versus dense pattern illumination
US10650540B2 (en) * 2017-08-01 2020-05-12 Apple Inc. Determining sparse versus dense pattern illumination
US11226688B1 (en) 2017-09-14 2022-01-18 Grabango Co. System and method for human gesture processing from video input
US11501537B2 (en) 2017-10-16 2022-11-15 Grabango Co. Multiple-factor verification for vision-based systems
US10963704B2 (en) 2017-10-16 2021-03-30 Grabango Co. Multiple-factor verification for vision-based systems
US11481805B2 (en) 2018-01-03 2022-10-25 Grabango Co. Marketing and couponing in a retail environment using computer vision
CN109344841A (en) * 2018-08-10 2019-02-15 北京华捷艾米科技有限公司 A kind of clothes recognition methods and device
US11288648B2 (en) 2018-10-29 2022-03-29 Grabango Co. Commerce automation for a fueling station
US11922390B2 (en) 2018-10-29 2024-03-05 Grabango Co Commerce automation for a fueling station
US11507933B2 (en) 2019-03-01 2022-11-22 Grabango Co. Cashier interface for linking customers to virtual data
CN110309768A (en) * 2019-06-28 2019-10-08 上海眼控科技股份有限公司 The staff's detection method and equipment of car test station

Also Published As

Publication number Publication date
US10089520B2 (en) 2018-10-02
US20160282039A1 (en) 2016-09-29

Similar Documents

Publication Publication Date Title
US20170032182A1 (en) System for adaptive real-time facial recognition using fixed video and still cameras
US11354901B2 (en) Activity recognition method and system
CN109934176B (en) Pedestrian recognition system, recognition method, and computer-readable storage medium
US20180342067A1 (en) Moving object tracking system and moving object tracking method
US10515199B2 (en) Systems and methods for facial authentication
JP5010905B2 (en) Face recognition device
US9639747B2 (en) Online learning method for people detection and counting for retail stores
JP2007317062A (en) Person recognition apparatus and method
JP2006293644A (en) Information processing device and information processing method
KR20120135469A (en) Facial image search system and facial image search method
US10303927B2 (en) People search system and people search method
US20190114470A1 (en) Method and System for Face Recognition Based on Online Learning
Kim et al. A non-cooperative user authentication system in robot environments
JP2019057815A (en) Monitoring system
KR20190093799A (en) Real-time missing person recognition system using cctv and method thereof
US20210256244A1 (en) Method for authentication or identification of an individual
US20070253598A1 (en) Image monitoring apparatus
CN108334870A (en) The remote monitoring system of AR device data server states
Chua et al. Vision-based hand grasping posture recognition in drinking activity
JP2013101551A (en) Face image authentication device
Saraswat et al. Anti-spoofing-enabled contactless attendance monitoring system in the COVID-19 pandemic
WO2012071677A1 (en) Method and system for face recognition
CN113536849A (en) Crowd gathering identification method and device based on image identification
Akhdan et al. Face recognition with anti spoofing eye blink detection
Fernandes et al. IoT based smart security for the blind

Legal Events

Date Code Title Description
AS Assignment

Owner name: VCOGNITION TECHNOLOGIES, INC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MOTUKURI, KRISHNA;AGRAWAL, MOTILAL;REEL/FRAME:047559/0163

Effective date: 20181120

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION