CN106650656B - User identity recognition device and robot - Google Patents

User identity recognition device and robot

Info

Publication number
CN106650656B
Authority
CN
China
Prior art keywords
user
image information
video image
face
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611168564.1A
Other languages
Chinese (zh)
Other versions
CN106650656A (en)
Inventor
黄巍伟
周禄兵
苗振伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Intelligent Machines Co ltd
Original Assignee
International Intelligent Machines Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Intelligent Machines Co ltd filed Critical International Intelligent Machines Co ltd
Priority to CN201611168564.1A priority Critical patent/CN106650656B/en
Publication of CN106650656A publication Critical patent/CN106650656A/en
Application granted granted Critical
Publication of CN106650656B publication Critical patent/CN106650656B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/008Manipulators for service tasks
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/04Viewing devices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/245Classification techniques relating to the decision surface
    • G06F18/2451Classification techniques relating to the decision surface linear, e.g. hyperplane
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Robotics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Mechanical Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a user identity recognition device and a robot. The device comprises at least one binocular camera, at least one high-definition camera, and a processor connected to the binocular camera and the high-definition camera, respectively. The at least one binocular camera and the at least one high-definition camera work synchronously, each acquiring video image information in real time. The processor performs depth processing on the first video image information acquired by the at least one binocular camera to obtain a depth map and a color map, and determines the identity information of a user according to the depth map, the color map, and the second video image information acquired by the at least one high-definition camera. Compared with the prior art, in which the user identity is recognized only from the user image information captured by a high-definition camera, the device and robot provided by the application improve both the accuracy and the efficiency of user identity recognition.

Description

User identity recognition device and robot
Technical Field
The present application relates to the field of robot manufacturing technologies, and in particular, to a user identity recognition device and a robot.
Background
With the rapid development of science and technology, robot manufacturing technology has advanced quickly, and robots have gradually entered the home service industry.
A property service robot is a robot that can dynamically recognize user identities without being affected by environment or temperature; such a robot therefore needs excellent recognition capability in the area of user identity recognition.
At present, however, a conventional property service robot captures user image information with a high-definition camera and recognizes the user identity by analyzing the captured image information. Recognizing the user identity solely from the image information captured by a high-definition camera yields low accuracy and low recognition efficiency.
Disclosure of Invention
In view of the above, the present application provides a user identity recognition device and a robot, so as to improve the accuracy and efficiency of user identity recognition. The technical solution is as follows:
In one aspect of the present application, a user identity recognition apparatus is provided, comprising: at least one binocular camera, at least one high-definition camera, and a processor connected to the binocular camera and the high-definition camera, respectively; wherein
the at least one binocular camera and the at least one high-definition camera work synchronously and are respectively used for acquiring video image information in real time;
the processor performs depth processing on the first video image information acquired by the at least one binocular camera to obtain a depth map and a color map, and determines the identity information of a user according to the depth map, the color map, and the second video image information acquired by the at least one high-definition camera.
Preferably, the two cameras included in the binocular camera are both 1080P high-definition cameras, and the distance between them is 15 cm.
Preferably, the processor comprises:
the depth information processing module is used for performing depth processing on the first video image information acquired by the at least one binocular camera to acquire a depth map and a color map;
the picture information processing module is used for processing the depth map and the color map and determining at least one user in the first video image information;
the face positioning module is used for calculating the face positions of all users in the first video image information by adopting a face positioning method;
the face area determining module is used for determining face areas of all users in the second video image information acquired by the high-definition camera based on face positions of all users in the first video image information;
the face feature information extraction module is used for extracting features of face areas of each user in the second video image information by adopting a face feature extraction method, and obtaining face feature information corresponding to each face area respectively;
the comparison module is used for comparing the obtained face feature information with stored preset face feature information, where the preset face feature information corresponds one-to-one with preset user identity information;
and the user identity determining module is used for determining, when the comparison module finds that the obtained face feature information is consistent with the stored preset face feature information, the preset user identity information corresponding to that preset face feature information as the identity of the user.
Preferably, the picture information processing module includes:
the first processing sub-module is used for performing Deep CNN human body detection on the color map to obtain a preliminary human body detection result for the first video image information;
the second processing sub-module is used for determining a final human body detection result of the first video image information based on the preliminary human body detection result, in combination with the image information of the depth map;
and the user determination submodule is used for determining at least one user in the first video image information according to the final human body detection result.
Preferably, the processor is further configured to determine the relative position information of the user according to the depth map.
In another aspect of the present application, a robot is provided, comprising the user identity recognition device described above.
The user identity recognition device provided by the application comprises at least one binocular camera, at least one high-definition camera, and a processor connected to each of them. The at least one binocular camera and the at least one high-definition camera work synchronously, each acquiring video image information in real time. The processor performs depth processing on the first video image information acquired by the at least one binocular camera to obtain a depth map and a color map, and determines the identity information of a user according to the depth map, the color map, and the second video image information acquired by the at least one high-definition camera. Compared with the prior art, in which the user identity is recognized only from image information captured by a high-definition camera, the device and robot provided by the application improve both the accuracy and the efficiency of user identity recognition.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for the description of the embodiments or the prior art are briefly introduced below. The drawings described below show only some embodiments of the present application; other drawings can be derived from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of a user identity recognition device according to the present application;
FIG. 2 is a schematic diagram of a processor according to the present application;
fig. 3 is a schematic structural diagram of a picture information processing module according to the present application.
Detailed Description
The following describes the embodiments of the present application clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on these embodiments without inventive effort fall within the scope of the application.
Referring to fig. 1, a schematic structural diagram of the user identity recognition device provided by the present application is shown. The device includes: at least one binocular camera 100, at least one high-definition camera 200, and a processor 300 connected to the binocular camera 100 and the high-definition camera 200, respectively. The number of binocular cameras 100 and high-definition cameras 200 can be set flexibly according to actual requirements, provided that at least one binocular camera 100 and at least one high-definition camera 200 work synchronously in real time, so that the video image information captured by each camera is acquired in real time. In particular, in the embodiment of the present application, the two cameras included in the binocular camera 100 may both be 1080P high-definition cameras, and the distance between them may be 15 cm. In practical application, the binocular camera 100 and the high-definition camera 200 preferably capture video image information of the same direction and the same position at the same time.
Specifically, in the present application, the at least one binocular camera 100 and the at least one high-definition camera 200 operate synchronously in real time, so that the video image information captured by each camera is acquired in real time. The video image information captured by the binocular camera 100 is referred to as first video image information, and the video image information captured by the high-definition camera 200 is referred to as second video image information.
The processor 300 is connected to each binocular camera 100 and each high-definition camera 200, and receives the video image information transmitted by them. The processor 300 then performs depth processing on the first video image information acquired by the at least one binocular camera 100 to obtain a depth map and a color map, and determines the identity information of the user according to the depth map, the color map, and the second video image information acquired by the at least one high-definition camera 200.
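As an illustration, the depth-processing step on a stereo pair from the binocular camera 100 might be sketched as follows. This is a minimal sketch assuming an OpenCV pipeline with rectified images; the focal length and matcher parameters are illustrative values, not taken from the patent.

```python
import cv2
import numpy as np

def depth_and_color_from_stereo(left_frame: np.ndarray, right_frame: np.ndarray):
    """Return (depth_map, color_map) from one synchronized stereo pair."""
    left_gray = cv2.cvtColor(left_frame, cv2.COLOR_BGR2GRAY)
    right_gray = cv2.cvtColor(right_frame, cv2.COLOR_BGR2GRAY)

    # Semi-global block matching; numDisparities and blockSize are tuning guesses.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # Convert disparity to metric depth: depth = focal_length * baseline / disparity.
    focal_px, baseline_m = 1000.0, 0.15  # 15 cm baseline as stated in the patent
    depth_map = np.where(disparity > 0, focal_px * baseline_m / disparity, 0.0)

    color_map = left_frame  # the left image serves as "color map one"
    return depth_map, color_map
```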
The following describes the specific implementation of the processor 300 of the present application in detail, and the structure thereof is shown in fig. 2, including:
the depth information processing module 301 is configured to perform depth processing on the first video image information acquired by the at least one binocular camera 100, and acquire a depth map and a color map.
For convenience of explanation, the present application takes a configuration with one binocular camera 100 and one high-definition camera 200 as an example. After receiving the first video image information sent by the binocular camera 100, the processor 300 performs depth processing on it to obtain a depth map and a color map (referred to here as color map one for ease of distinction); after receiving the second video image information sent by the high-definition camera 200, it obtains another color map (color map two).
In practice, the binocular camera 100 has a larger field of view, while the high-definition camera 200 has higher resolution and a relatively smaller field of view (a face imaged from the same distance appears larger), so the contents of color map one and color map two are not exactly the same. It is easy to see, however, that where the two color maps show the same content, color map two shows an enlarged version of the content in color map one.
And the picture information processing module 302 is configured to process the depth map and the color map, and determine at least one user in the first video image information.
In the present application, the picture information processing module 302 may specifically include, as shown in fig. 3:
a first processing submodule 3021, configured to perform human body detection on the color map by using Deep CNN (Deep Convolutional Neural Networks, deep convolutional neural network) to obtain a preliminary human body detection result of the first video image information.
In the embodiment of the application, the Deep CNN Deep learning network for human body detection consists of a convolution layer network, a region extraction network and a region classification network. And inputting the obtained first video image information into the Deep CNN for human body detection, and outputting a preliminary human body detection result in the first video image information. For example, if the first video image information includes three users, the preliminary human body detection result includes human body detection results of the three users. For the overlapping user, only one human body detection result is output.
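As an illustration of this detection step: the patent only states that the network contains a convolutional network, a region-extraction network, and a region-classification network, which matches the structure of a two-stage detector. The sketch below therefore uses a torchvision Faster R-CNN as a stand-in; the specific model, the person class index, and the 0.7 score threshold are assumptions.

```python
import torch
import torchvision

# Pretrained two-stage detector used as a stand-in for the patent's Deep CNN.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect_people(color_map_bgr):
    """Return preliminary human-body boxes [(x1, y1, x2, y2), ...] in the color map."""
    rgb = color_map_bgr[:, :, ::-1].copy()                    # BGR -> RGB
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = detector([tensor])[0]
    boxes = []
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        if label.item() == 1 and score.item() > 0.7:          # COCO class 1 = person
            boxes.append(tuple(box.tolist()))
    return boxes
```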
A second processing sub-module 3022, configured to determine a final human body detection result of the first video image information based on the preliminary human body detection result and in combination with the image information of the depth map.
After human body detection with the Deep CNN, overlapping users produce only a single human body detection result, so they cannot be identified separately. The application therefore further distinguishes overlapping users by combining the image information of the acquired depth map, separating out individual users. This makes it possible to recognize overlapping users and ensures the accuracy of user identification.
A user determining sub-module 3023, configured to determine at least one user in the first video image information according to the final human body detection result.
For example, assume the current color map includes 5 users: user A, user B, user C, user D, and user E, where user C, user D, and user E overlap. After Deep CNN human body detection is performed on the color map, the preliminary human body detection result includes a detection result for user A, a detection result for user B, and a detection result for user C', where user C' denotes the single detection result jointly corresponding to the overlapping users C, D, and E. The application then uses the depth map: by combining the image information of the depth map, the overlap within user C' can be detected and user C' can be separated into user C, user D, and user E, yielding individual human body detection results for each of them. In this way, the human body detection results for all 5 users included in the current color map, namely user A, user B, user C, user D, and user E, are determined; that is, the 5 users in the first video image information are determined.
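As an illustration of how a merged detection covering several overlapping users might be split with the depth map: people standing at different distances form distinct depth layers inside the merged box. The 1-D greedy clustering of depth values below is an assumed heuristic, not a method spelled out in the patent.

```python
import numpy as np

def split_overlapping_box(box, depth_map, min_gap_m=0.4):
    """Split one merged body box into per-person depth layers (list of boolean masks)."""
    x1, y1, x2, y2 = [int(v) for v in box]
    region = depth_map[y1:y2, x1:x2]
    valid = region[region > 0]
    if valid.size == 0:
        return []
    # Cluster depths greedily: start a new person whenever the depth gap exceeds min_gap_m.
    depths = np.unique(np.round(valid, 2))        # sorted unique depths (metres)
    clusters, current = [], [depths[0]]
    for d in depths[1:]:
        if d - current[-1] > min_gap_m:
            clusters.append(current)
            current = []
        current.append(d)
    clusters.append(current)
    masks = []
    for c in clusters:
        mask = np.zeros_like(depth_map, dtype=bool)
        mask[y1:y2, x1:x2] = (region >= c[0]) & (region <= c[-1])
        masks.append(mask)
    return masks
```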
The face positioning module 303 is configured to calculate a face position of each user in the first video image information by using a face positioning method.
Specifically, the embodiment of the application calculates the face position of each user in the first video image information using a frame-based face positioning method.
In the application, after the picture information processing module 302 determines the human body detection results of the 5 users in the first video image information, the frame-based face positioning method first estimates the approximate position of each user's face from basic human body proportions, and then finds the exact face position using Haar features and an AdaBoost classifier, thereby determining the user's face position.
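As an illustration, this two-stage positioning (body proportion first, then a Haar-feature AdaBoost cascade) might be sketched as follows with OpenCV's bundled cascade classifier. The 1/6 head-height proportion and the search window of two head heights are assumptions.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def locate_face(color_map_gray, body_box):
    """Return the face rectangle (x, y, w, h) inside one body detection, or None."""
    x1, y1, x2, y2 = [int(v) for v in body_box]
    head_h = max(1, (y2 - y1) // 6)              # rough head height from body proportion
    roi = color_map_gray[y1:y1 + 2 * head_h, x1:x2]
    faces = face_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    fx, fy, fw, fh = max(faces, key=lambda f: f[2] * f[3])   # keep the largest candidate
    return (x1 + fx, y1 + fy, fw, fh)                        # back to full-image coordinates
```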
In addition, in the embodiment of the application, in order to ensure the accuracy of user identification, the depth map can be consulted again and wrong face positions can be excluded according to the distribution of depth information at each face position in the depth map.
The face region determining module 304 is configured to determine a face region of each user in the second video image information obtained by the high-definition camera 200 based on the face position of each user in the first video image information.
The binocular camera 100 and the high-definition camera 200 of the application work synchronously in real time and acquire video image information of the same direction and the same position at the same moment. The first video image information acquired by the binocular camera 100 is therefore compared and calibrated against the second video image information acquired by the high-definition camera 200, so that the face position of each user in the second video image information can be determined from the already determined face position of that user in the first video image information, and the face region corresponding to that face position can then be obtained.
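As an illustration, one way to realize this cross-camera calibration and mapping is a planar homography estimated offline from matched calibration points between the two views; the patent itself only says the two views are compared and calibrated, so the homography approach below is an assumption.

```python
import cv2
import numpy as np

def calibrate_mapping(points_binocular, points_hd):
    """Estimate a 3x3 homography from corresponding calibration points (N >= 4)."""
    H, _ = cv2.findHomography(np.float32(points_binocular), np.float32(points_hd),
                              cv2.RANSAC)
    return H

def map_face_box(face_box, H):
    """Project a face rectangle (x, y, w, h) into the HD image via homography H."""
    x, y, w, h = face_box
    corners = np.float32([[x, y], [x + w, y], [x, y + h], [x + w, y + h]]).reshape(-1, 1, 2)
    mapped = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
    mx, my = mapped.min(axis=0)
    Mx, My = mapped.max(axis=0)
    return (float(mx), float(my), float(Mx - mx), float(My - my))
```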
And the face feature information extraction module 305 is configured to perform feature extraction on face regions of each user in the second video image information by using a face feature extraction method, so as to obtain face feature information corresponding to each face region.
According to the application, after the face regions of the users are determined, the face feature extraction method is applied to each face region in turn. Specifically, the face region of each user in the second video image information is fed into a face Deep CNN, and the face Deep CNN feature corresponding to that face region is extracted.
In the embodiment of the present application, after the face region determining module 304 obtains the face regions of each user in the second video image information, face pose calibration can be performed by extracting face feature information (such as face feature points). The face Deep CNN deep learning network has 37 layers, including 16 convolutional layers; the calibrated face detection result is normalized to 224 × 224 and input into the face Deep CNN deep learning network to obtain the face Deep CNN feature.
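As an illustration of this feature extraction step: crop the face region from the HD image, normalize it to 224 × 224 as the patent specifies, and pass it through a face CNN. The patent's own 37-layer network is not published, so a generic torchvision backbone stands in for it below, and the resulting embedding is only illustrative.

```python
import cv2
import numpy as np
import torch
import torchvision

backbone = torchvision.models.resnet50(weights="DEFAULT")
backbone.fc = torch.nn.Identity()    # drop the classifier head; keep the feature vector
backbone.eval()

def face_feature(hd_image_bgr, face_box):
    """Return a 1-D feature vector for one face region in the HD image."""
    x, y, w, h = [int(v) for v in face_box]
    crop = cv2.resize(hd_image_bgr[y:y + h, x:x + w], (224, 224))   # 224 x 224 normalization
    rgb = cv2.cvtColor(crop, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    rgb = (rgb - np.float32([0.485, 0.456, 0.406])) / np.float32([0.229, 0.224, 0.225])
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0)
    with torch.no_grad():
        feature = backbone(tensor)
    return feature.squeeze(0).numpy()
```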
The comparison module 306 is configured to compare the obtained face feature information with stored preset face feature information, where the preset face feature information corresponds to preset user identity information one by one.
The application can use the KNN (k-nearest neighbour) nearest-distance algorithm to compare the obtained face feature information with the stored preset face feature information.
The application includes a face Deep CNN feature database, which stores a large amount of picture information collected by researchers together with all of the face Deep CNN features obtained from it by deep learning with the face Deep CNN deep learning network. Different users correspond to different face Deep CNN features, so the identity of a user can be determined simply by comparing face Deep CNN features.
And the user identity determining module 307 is configured to determine the preset user identity information corresponding to the preset face feature information as the identity of the user when the comparison module finds that the obtained face feature information is consistent with the stored preset face feature information.
In the embodiment of the present application, the user identity determining module 307 compares the obtained face feature information with the preset face feature information stored in the face Deep CNN feature database. If the comparison is consistent, the user corresponding to the current face feature information is the user corresponding to the matched preset face feature information, so the identity of the user can be determined directly.
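As an illustration of the comparison and identity-determination steps together: a nearest-neighbour search over the stored face features, with a distance threshold standing in for "the comparison is consistent". The threshold value and the use of scikit-learn are assumptions; the patent only names a KNN nearest-distance comparison against the preset feature database.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

class FaceDatabase:
    """Stores preset face features and their one-to-one preset identities."""
    def __init__(self, features: np.ndarray, identities: list, max_distance: float = 0.8):
        self.index = NearestNeighbors(n_neighbors=1).fit(features)
        self.identities = identities
        self.max_distance = max_distance   # assumed "consistent comparison" threshold

    def identify(self, query_feature: np.ndarray):
        """Return the matching preset identity, or None if no stored feature is close enough."""
        dist, idx = self.index.kneighbors(query_feature.reshape(1, -1))
        if dist[0, 0] <= self.max_distance:
            return self.identities[idx[0, 0]]
        return None
```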
Furthermore, based on the above embodiment of the present application, the processor 300 in the present application may be further configured to determine the relative position information of the user according to the depth map.
In the vision system of a robot platform, human-machine interaction must be supported in addition to user identity recognition. The processor 300 of the application can therefore determine not only the user identity information but also the relative position information of the user, that is, the position of the user relative to the robot. The first video image information obtained by the binocular camera 100 has a larger field of view and carries depth information, which gives it natural advantages for human body detection and positioning. Human body positioning combined with depth information can provide reliable position information for the robot's intelligent navigation and human-machine interaction.
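As an illustration of recovering the user's position relative to the robot from the depth map, the pinhole camera model can be applied to the centre of a body detection; the focal lengths and principal point below are placeholders for calibrated intrinsics of the binocular camera.

```python
import numpy as np

def user_relative_position(depth_map, body_box, fx=1000.0, fy=1000.0, cx=960.0, cy=540.0):
    """Return (X, Y, Z) in metres of the body-box centre in the camera frame, or None."""
    x1, y1, x2, y2 = [int(v) for v in body_box]
    region = depth_map[y1:y2, x1:x2]
    valid = region[region > 0]
    if valid.size == 0:
        return None
    Z = float(np.median(valid))                  # robust depth estimate for the person
    u, v = (x1 + x2) / 2.0, (y1 + y2) / 2.0      # pixel centre of the detection
    X = (u - cx) * Z / fx                        # lateral offset
    Y = (v - cy) * Z / fy                        # vertical offset
    return (X, Y, Z)
```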
Therefore, by applying the technical solution of the application, human body detection and positioning can be realized using the large field of view and depth information of the binocular camera 100, and human body positioning combined with depth information can provide reliable position information for the robot's intelligent navigation and human-machine interaction. In addition, because the binocular camera 100 and the high-definition camera 200 work synchronously, the person and face positions detected by the binocular camera 100 can be mapped into color map two acquired by the high-definition camera 200, which addresses the high requirements that conventional long-distance automatic face recognition places on the quality and size of the face-region picture. The high-definition camera 200 has high resolution and a relatively small field of view, and its fill light can be controlled autonomously by software, so the quality of pictures containing faces can be ensured. Moreover, the high-definition camera 200 is dedicated to face recognition and, with a moderate upward tilt, can detect and recognize faces at close range, for example faces above 0.3 m, which is also very important for the robot's human-machine interaction functions, such as automatically dispensing a card after face scanning.
The user identity recognition device provided by the application can be applied to robots, in particular to property service robots, giving the property service robot excellent recognition capability in user identity recognition and adding intelligent navigation and human-machine interaction functions to it.
The foregoing describes in detail the user identity recognition device and robot provided by the present application. Specific examples are used herein to illustrate the principles and embodiments of the application, and the above description of the embodiments is only intended to help understand the method and core idea of the application. A person skilled in the art may also make changes to the specific embodiments and the scope of application in accordance with the ideas of the application; therefore, the content of this description should not be construed as limiting the application.

Claims (5)

1. A user identity recognition device, comprising: at least one binocular camera, at least one high-definition camera, and a processor connected to the binocular camera and the high-definition camera, respectively; wherein
the at least one binocular camera and the at least one high-definition camera work synchronously and are respectively used for acquiring video image information in real time;
the processor performs depth processing on the first video image information acquired by the at least one binocular camera to obtain a depth map and a color map, and determines identity information of a user according to the depth map, the color map and the second video image information acquired by the at least one high-definition camera;
the processor includes:
the depth information processing module is used for performing depth processing on the first video image information acquired by the at least one binocular camera to acquire a depth map and a color map;
the picture information processing module is used for processing the depth map and the color map and determining at least one user in the first video image information;
the face positioning module is used for calculating the face positions of all users in the first video image information by adopting a face positioning method;
the face area determining module is used for determining face areas of all users in the second video image information acquired by the high-definition camera based on face positions of all users in the first video image information;
the face feature information extraction module is used for extracting features of face areas of each user in the second video image information by adopting a face feature extraction method, and obtaining face feature information corresponding to each face area respectively;
the comparison module is used for comparing the obtained face feature information with stored preset face feature information, where the preset face feature information corresponds one-to-one with preset user identity information;
and the user identity determining module is used for determining, when the comparison module finds that the obtained face feature information is consistent with the stored preset face feature information, the preset user identity information corresponding to that preset face feature information as the identity of the user.
2. The device of claim 1, wherein the two cameras included in the binocular camera are both 1080P high-definition cameras, and the distance between them is 15 cm.
3. The device of claim 1, wherein the picture information processing module comprises:
the first processing sub-module is used for performing Deep CNN human body detection on the color map to obtain a preliminary human body detection result for the first video image information;
the second processing sub-module is used for determining a final human body detection result of the first video image information based on the preliminary human body detection result, in combination with the image information of the depth map;
and the user determination submodule is used for determining at least one user in the first video image information according to the final human body detection result.
4. The device according to claim 1 or 3, wherein the processor is further configured to determine the relative position information of the user according to the depth map.
5. A robot, comprising the user identity recognition device as claimed in any one of claims 1-4.
CN201611168564.1A 2016-12-16 2016-12-16 User identity recognition device and robot Active CN106650656B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611168564.1A CN106650656B (en) 2016-12-16 2016-12-16 User identity recognition device and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611168564.1A CN106650656B (en) 2016-12-16 2016-12-16 User identity recognition device and robot

Publications (2)

Publication Number Publication Date
CN106650656A CN106650656A (en) 2017-05-10
CN106650656B true CN106650656B (en) 2023-10-27

Family

ID=58822663

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611168564.1A Active CN106650656B (en) 2016-12-16 2016-12-16 User identity recognition device and robot

Country Status (1)

Country Link
CN (1) CN106650656B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107844744A (en) * 2017-10-09 2018-03-27 平安科技(深圳)有限公司 With reference to the face identification method, device and storage medium of depth information
CN110653812B (en) * 2018-06-29 2021-06-04 深圳市优必选科技有限公司 Interaction method of robot, robot and device with storage function
CN112784634A (en) * 2019-11-07 2021-05-11 北京沃东天骏信息技术有限公司 Video information processing method, device and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103606093A (en) * 2013-10-28 2014-02-26 燕山大学 Intelligent chain VIP customer service system based on human characteristics
CN204143555U (en) * 2014-08-26 2015-02-04 杭州摩科商用设备有限公司 The Certificate of House Property printing terminal of identification self-aided terminal and correspondence
US9430697B1 (en) * 2015-07-03 2016-08-30 TCL Research America Inc. Method and system for face recognition using deep collaborative representation-based classification
CN106228117A (en) * 2016-07-13 2016-12-14 福州米立科技有限公司 Recognition of face single camera gathers imaging system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8855369B2 (en) * 2012-06-22 2014-10-07 Microsoft Corporation Self learning face recognition using depth based tracking for database generation and update

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103606093A (en) * 2013-10-28 2014-02-26 燕山大学 Intelligent chain VIP customer service system based on human characteristics
CN204143555U (en) * 2014-08-26 2015-02-04 杭州摩科商用设备有限公司 The Certificate of House Property printing terminal of identification self-aided terminal and correspondence
US9430697B1 (en) * 2015-07-03 2016-08-30 TCL Research America Inc. Method and system for face recognition using deep collaborative representation-based classification
CN106228117A (en) * 2016-07-13 2016-12-14 福州米立科技有限公司 Recognition of face single camera gathers imaging system

Also Published As

Publication number Publication date
CN106650656A (en) 2017-05-10

Similar Documents

Publication Publication Date Title
TWI677825B (en) Method of video object tracking and apparatus thereof and non-volatile computer readable storage medium
US9818023B2 (en) Enhanced face detection using depth information
US8983235B2 (en) Pupil detection device and pupil detection method
US9602783B2 (en) Image recognition method and camera system
WO2019042426A1 (en) Augmented reality scene processing method and apparatus, and computer storage medium
KR100834577B1 (en) Home intelligent service robot and method capable of searching and following moving of target using stereo vision processing
JP2020523665A (en) Biological detection method and device, electronic device, and storage medium
US10558844B2 (en) Lightweight 3D vision camera with intelligent segmentation engine for machine vision and auto identification
JP5662670B2 (en) Image processing apparatus, image processing method, and program
CN106650656B (en) User identity recognition device and robot
CN110490171B (en) Dangerous posture recognition method and device, computer equipment and storage medium
US9323989B2 (en) Tracking device
WO2021204267A1 (en) Identity recognition
CN110909561A (en) Eye state detection system and operation method thereof
CN111476894A (en) Three-dimensional semantic map construction method and device, storage medium and electronic equipment
TW201544995A (en) Object recognition method and object recognition apparatus using the same
CN112528902A (en) Video monitoring dynamic face recognition method and device based on 3D face model
CN113378641A (en) Gesture recognition method based on deep neural network and attention mechanism
WO2023279799A1 (en) Object identification method and apparatus, and electronic system
CN109919128B (en) Control instruction acquisition method and device and electronic equipment
CN113420704A (en) Object identification method and device based on visual sensor and robot
Jiang et al. Depth image-based obstacle avoidance for an in-door patrol robot
KR102664123B1 (en) Apparatus and method for generating vehicle data, and vehicle system
CN111062311B (en) Pedestrian gesture recognition and interaction method based on depth-level separable convolution network
Langenberg et al. Automatic traffic light to ego vehicle lane association at complex intersections

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518000 Shekou Torch Pioneering Building, Nanshan District, Shenzhen City, Guangdong Province, 2nd Floor

Applicant after: INTERNATIONAL INTELLIGENT MACHINES Co.,Ltd.

Address before: 518000 Shekou Torch Pioneering Building, Nanshan District, Shenzhen City, Guangdong Province, 2nd Floor

Applicant before: INTERNATIONAL INTELLIGENT MACHINES CO.,LTD.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant