US20110222724A1 - Systems and methods for determining personal characteristics - Google Patents

Systems and methods for determining personal characteristics

Info

Publication number
US20110222724A1
Authority
US
United States
Prior art keywords
faces
face
cnns
tracking
correspondence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US12/790,979
Other versions
US8582807B2 (en)
Inventor
Ming Yang
Shenghuo Zhu
Fengjun Lv
Kai Yu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Corp
Original Assignee
NEC Laboratories America Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Laboratories America Inc filed Critical NEC Laboratories America Inc
Priority to US12/790,979
Publication of US20110222724A1
Application granted
Publication of US8582807B2
Assigned to NEC CORPORATION reassignment NEC CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: NEC LABORATORIES AMERICA, INC.
Assigned to NEC CORPORATION reassignment NEC CORPORATION CORRECTIVE ASSIGNMENT TO CORRECT THE REMOVE 8538896 AND ADD 8583896 PREVIOUSLY RECORDED ON REEL 031998 FRAME 0667. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT. Assignors: NEC LABORATORIES AMERICA, INC.
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/165 - Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449 - Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451 - Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454 - Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178 - Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Geometry (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

Systems and methods are disclosed for determining personal characteristics from images by generating a baseline gender model and an age estimation model using one or more convolutional neural networks (CNNs); capturing correspondences of faces by face tracking; and applying incremental learning to the CNNs and enforcing a correspondence constraint such that the CNN outputs are consistent and stable for one person.

Description

  • This application claims priority from U.S. Provisional Application Ser. No. 61/313,878, filed Mar. 15, 2010, the content of which is incorporated by reference.
  • BACKGROUND
  • The present application relates to video analysis systems.
  • In recent years, intelligent video analysis systems, e.g., automatic gender and age estimation, face verification and recognition, and wide-area surveillance, have been flourishing, nourished by steady advances in computer vision and machine learning technologies both in theory and in practice. In general, certain statistical models are learned offline from a huge amount of training data in the development stage. However, when deployed to real-world scenarios, these systems are often confronted by the model mismatch issue, that is, performance degradation originating from the fact that training data can hardly cover the large variations in reality due to different illumination conditions, image quality, noise, etc. It is extremely hard, if not impossible, to collect a sufficient amount of training data because the relevant factors are unpredictable and differ across scenarios. Thus, it is desirable to allow the statistical models in visual recognition systems to adapt to their specific deployment scenes by incremental learning, so as to enhance the systems' generalization capability.
  • To address this model mismatch issue, various strategies have been developed that work with a training data block 10 and develop a model to apply to a testing data block 20. The most straightforward and ideal way is to obtain the ground truth labels of the testing data block 20 in the deployment scene and use them to perform supervised incremental learning, as shown in FIG. 1(a). Nevertheless, manual labels are costly and sometimes impractical to obtain. Alternatively, a system can trust the predictions of the model and simply employ them for incremental learning in a self-training manner, as illustrated in FIG. 1(b); however, such positive feedback is risky in practice. Another alternative is to exploit the structure and distances of unlabeled data using semi-supervised learning approaches, as in FIG. 1(c), although whether a heuristic distance metric can capture the correct underlying structure of the unlabeled data is questionable.
  • Inferring biological traits such as gender and age from images can greatly help applications such as face verification and recognition, video surveillance, digital signage, and retail customer analysis. Both gender and age estimation from facial images have attracted considerable research interest for decades. Yet they remain challenging problems, especially age estimation, since aging facial patterns are highly variable and influenced by many factors such as gender, race, and lifestyle, not to mention the subtleties of images due to lighting, shading, and view angles. Thus, sophisticated representations and a huge amount of training data have been required to tackle these problems in real-world applications.
  • SUMMARY
  • In one aspect, a computer implemented method determines personal characteristics from images by generating a baseline gender model and an age estimation model using one or more convolutional neural networks (CNNs); capturing correspondences of faces by face tracking; and applying incremental learning to the CNNs and enforcing a correspondence constraint such that the CNN outputs are consistent and stable for one person.
  • The system includes collecting correspondences of faces by face tracking. Incremental learning in the neural network can be done by enforcing a correspondence constraint. The system can implement incremental training with an online stochastic gradient descent process. A correspondence given by visual tracking can be used to update a pre-trained model. The system includes performing face detection and tracking; aligning the detected faces; normalizing the faces to a plurality of patches; and sending the normalized faces to the CNNs to estimate gender and age. CNNs can be used for face detection and face alignment. The system can perform multi-hypothesis visual tracking to obtain correspondence of faces in successive video frames. The system can update the baseline models using data collected online to avoid model drift. The system applies weakly supervised incremental training to face correspondences.
  • Advantages of the preferred embodiments may include one or more of the following. The video analysis system based on statistical learning models maintains its performance when deployed to real-world scenarios, even though the training data can hardly cover the unfamiliar variations encountered in reality. The object correspondences in successive frames are leveraged as weak supervision to conduct incremental learning. The strategy is applied to the CNN-based gender and age estimation system. By using incremental stochastic training, the system outputs consistent and stable results on face images from the same trajectory in videos. The supervision of correspondences can improve the estimation accuracy by a large margin. The strength of the correspondence-driven incremental learning originates from its capability to address the mismatch problem caused by factors such as lighting conditions, view angles, and image quality or noise levels in the deployment environments, which are hardly ever exactly the same as those in the training set. By forcing the models to produce consistent results for the same person, the models become less sensitive to these factors. Therefore, the updated models outperform the original ones noticeably even when only a small amount of additional data is added in the incremental learning. Moreover, the system is not restricted to gender and age estimation and is also applicable to other facial attribute recognition, e.g., race, or to face verification. Thus, the system can be applied to a number of personally identifiable characteristics as well.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows various convolutional neural network (CNN) approaches to incremental learning.
  • FIG. 2 shows an exemplary architecture of the convolutional neural networks (CNNs) where each plane represents a feature map.
  • FIG. 3 shows an exemplary correspondence driven incremental learning capability in a gender and age estimation system.
  • FIG. 4 shows an exemplary gender and age estimation system with correspondence driven incremental learning.
  • FIG. 5 shows an exemplary output of the age/gender analysis system.
  • DESCRIPTION
  • Commonly, pattern recognition consists of two steps: The first step computes hand-crafted features from raw inputs; and the second step learns classifiers based on the obtained features. The overall performance of the system is largely determined by the first step, which is, however, highly problem dependent and requires extensive feature engineering. Convolutional neural networks are a class of deep learning approaches in which multiple stages of learned feature extractors are applied directly to the raw input images and the entire system can be trained end-to-end in a supervised manner.
  • FIG. 2 shows an exemplary architecture of the convolutional neural networks (CNNs), where each plane represents a feature map. As shown in FIG. 2, the convolution and subsampling operations are iteratively applied to the raw input images to generate multiple layers of feature maps. The adjustable parameters of a CNN model include all weights of the layers and connections, which are learned by the back-propagation algorithm in a stochastic gradient descent manner. CNN models are appealing because the design of hand-crafted features is avoided and they are very efficient to evaluate at the testing stage, although in general the training of CNN models requires a large amount of data. The system learns separate CNN models for gender, female age, and male age.
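  • The exact layer configuration is not spelled out here, so the following is only a minimal PyTorch sketch, under assumed layer sizes, of a convolution-plus-subsampling network that maps a 64×64 face patch to a single output; one such model would be trained per task (gender, female age, male age).

```python
import torch
import torch.nn as nn

class FaceAttributeCNN(nn.Module):
    """Hypothetical convolution + subsampling stack for 64x64 face patches.

    The layer sizes are illustrative assumptions, not the patented design.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5), nn.Tanh(),   # 64x64 -> 60x60 feature maps
            nn.AvgPool2d(2),                             # subsampling: 60 -> 30
            nn.Conv2d(8, 16, kernel_size=5), nn.Tanh(),  # 30 -> 26
            nn.AvgPool2d(2),                             # 26 -> 13
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 13 * 13, 64), nn.Tanh(),
            nn.Linear(64, 1),   # scalar output: age, or gender score in [-1, 1]
        )

    def forward(self, x):           # x: (batch, 1, 64, 64)
        return self.regressor(self.features(x))

# Separate models, mirroring the text above.
gender_net, female_age_net, male_age_net = (FaceAttributeCNN() for _ in range(3))
```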
  • The correspondence-driven incremental learning is incorporated into the gender and age estimation system in FIG. 3. Training data 100 is manually labeled and used to generate one or more models during the development stage. The models are used in the deployment stage with testing data 120 to generate predictions of age and gender. The predictions are also provided to a correspondence determination module 130, which provides training data to enhance the models.
  • The system of FIG. 3 applies the correspondence-driven incremental learning strategy to the CNN-based gender and age estimation system in FIG. 4. Input video is provided to a face detection module 200, which then drives a face tracking module 210. The face is aligned by module 220, and the gender is estimated in module 230. Next, an age estimation module 240 is applied to generate the output. The face tracking module 210 also drives a face correspondence module 260, which applies weakly supervised incremental learning to one or more CNN models 250.
  • After collecting the face correspondences by visual tracking, the system derives an online stochastic training method that enforces the updated models to output consistent gender and age estimations for the same person. The system enables a pre-trained CNN model to adapt to the deployment environment and achieves a significant performance improvement on a large dataset containing 884 persons with 68,759 faces. The correspondence-driven approach can be readily integrated into a fully automatic video analysis system.
  • For every frame of the input video, the system performs face detection and tracking; the detected faces are then aligned, normalized to 64×64 patches, and fed to the CNN recognition engine to estimate the gender and age. The face detection and face alignment modules are also based on certain CNN models. The system employs multi-hypothesis visual tracking algorithms to obtain the correspondence of faces in successive frames. The parameters are tuned to increase the precision of the tracker and to tolerate identity-switch errors. Tracks that are too short are discarded.
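  • As a compact illustration of that per-frame pipeline, the sketch below simply wires the steps together; detect, track, align, and estimate are hypothetical callables standing in for the CNN-based detection, multi-hypothesis tracking, alignment, and recognition modules.

```python
def process_frame(frame, detect, track, align, estimate):
    """Per-frame pipeline sketch: detect -> track -> align/normalize -> estimate.

    detect(frame) -> face boxes; track(boxes) -> (track_id, box) pairs;
    align(frame, box) -> aligned 64x64 patch; estimate(patch) -> (gender, age).
    All four are assumed interfaces, not the patent's actual modules.
    """
    results = []
    for track_id, box in track(detect(frame)):
        patch = align(frame, box)        # aligned, normalized 64x64 face patch
        gender, age = estimate(patch)    # CNN recognition engine
        results.append((track_id, gender, age))
    return results
```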
  • The gender and age models trained offline using manually labeled faces are denoted the Baseline CNN models. While processing a new video, the aligned faces and their correspondences given by visual tracking are stored as additional data, which are used to update the Baseline CNN models periodically. The updated CNN models are then applied to new test videos. Note that the Baseline models are always updated using the additional data collected online, rather than the latest updated CNN models, to avoid model drift.
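  • A short sketch of that update scheme, with assumed names (train_fn stands in for the correspondence-driven trainer derived below): faces and their track ids are buffered online, and every periodic update restarts from the stored Baseline parameters rather than from the previously updated model.

```python
import copy

class PeriodicBaselineUpdater:
    """Buffer online face/track data and periodically retrain from the Baseline.

    train_fn(baseline_params, samples) -> updated params is an assumed callable.
    """
    def __init__(self, baseline_params, train_fn, update_every=10000):
        self.baseline = copy.deepcopy(baseline_params)   # fixed, trained offline
        self.current = copy.deepcopy(baseline_params)    # models used for prediction
        self.train_fn = train_fn
        self.buffer = []                                 # (aligned_face, track_id)
        self.update_every = update_every

    def add(self, face_patch, track_id):
        self.buffer.append((face_patch, track_id))
        if len(self.buffer) >= self.update_every:
            # Always restart from the Baseline, never from self.current,
            # so successive updates do not compound (avoids model drift).
            self.current = self.train_fn(copy.deepcopy(self.baseline), self.buffer)
            self.buffer = []
```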
  • Next, the details of the learning process are discussed. The Baseline neural network function can be represented by y=F(X,Ω0), where X is the input, i.e., the raw image patch of a face, Ω0 denotes the parameters of the original pre-trained CNN model, and y is the output, for example the age or the gender (−1 for females and 1 for males). For simplicity of description, this is treated as an L2-loss regression problem; other loss functions are also applicable to the derivation.
  • In supervised incremental learning, a set of additional training data S is given, each element of which is a pair (X,y), where X is the input and y is the supervised label. The incremental training problem can be expressed as
  • $\min_{\Omega} J(\Omega) = \frac{\lambda}{2}\,\lVert\Omega - \Omega_0\rVert^2 + \frac{1}{2|S|}\sum_{(X,y)\in S}\bigl(F(X,\Omega) - y\bigr)^2. \qquad (1)$
  • The first term keeps the parameter, Ω, from deviating from the original parameters Ω0 with the regularization parameter λ. The second term reduces the loss on the additional training data. Thus, the stochastic gradient descent update rule for one training sample (Xt,yt) is
  • $\Omega \leftarrow \Omega - \gamma_t\Bigl[\lambda(\Omega - \Omega_0) + \bigl(F(X_t,\Omega) - y_t\bigr)\,\nabla_{\Omega}F(X_t,\Omega)\Bigr]. \qquad (2)$
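  • As a minimal, framework-free illustration of the update rule in Eq. (2), the sketch below treats the network as a generic differentiable function; forward and grad are assumed callables returning F(X,Ω) and its parameter gradient, and the toy linear example only demonstrates the arithmetic.

```python
import numpy as np

def supervised_incremental_step(params, params0, x_t, y_t, forward, grad, lam, gamma_t):
    """One stochastic step of Eq. (2): weight decay toward the Baseline
    parameters params0 plus the gradient of the squared loss on (x_t, y_t)."""
    residual = forward(params, x_t) - y_t                       # F(X_t, Omega) - y_t
    step = lam * (params - params0) + residual * grad(params, x_t)
    return params - gamma_t * step

# Toy example with a linear "network" F(X, w) = w . x (purely illustrative):
w0 = np.array([0.5, -0.2])
w1 = supervised_incremental_step(
    params=w0.copy(), params0=w0,
    x_t=np.array([1.0, 2.0]), y_t=1.0,
    forward=lambda p, x: float(p @ x),
    grad=lambda p, x: x,
    lam=0.01, gamma_t=0.1,
)
```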
  • The idea of the correspondence driven incremental learning is to reduce the inconsistency of the neural network outputs for faces on the same trajectory, provided the parameters do not deviate from the original parameters of the Baseline CNN model, Ω0. For a given image X ∈ S in the additional training image set, the set of all other faces on the same trajectory is denoted as a correspondence function T(X). Then, the correspondence driven incremental learning is written as this optimization problem,
  • $\min_{\Omega} J(\Omega) = \frac{\lambda}{2}\,\lVert\Omega - \Omega_0\rVert^2 + \frac{1}{4|S|}\sum_{X\in S}\frac{1}{|T(X)|}\sum_{Z\in T(X)}\bigl(F(X,\Omega) - F(Z,\Omega)\bigr)^2. \qquad (3)$
  • The first term keeps the parameters, Ω, from deviating from the original parameters Ω0, where we use regularization parameter λ to control the deviation scale. The second term reduces the inconsistency of the neural network outputs of faces on the same trajectory. We normalize it by the size of S and the size of each trajectory, |T(X)|.
  • The derivative of the objective function of Eq. (3) can be written as,
  • $\frac{\partial J}{\partial \Omega} = \lambda(\Omega - \Omega_0) + \frac{1}{|S|}\sum_{X\in S}\frac{1}{|T(X)|}\sum_{Z\in T(X)}\bigl(F(X,\Omega) - F(Z,\Omega)\bigr)\,\nabla_{\Omega}F(X,\Omega).$
  • Note that each pair of images, X and Z, appears twice in the summation. The stochastic update rule for each face image, Xt, is:
  • $\Omega \leftarrow \Omega - \gamma_t\Bigl[\lambda(\Omega - \Omega_0) + \frac{1}{|T(X_t)|}\sum_{Z\in T(X_t)}\bigl(F(X_t,\Omega) - F(Z,\Omega)\bigr)\,\nabla_{\Omega}F(X_t,\Omega)\Bigr] = \Omega - \gamma_t\Bigl[\lambda(\Omega - \Omega_0) + \bigl(F(X_t,\Omega) - \tilde{y}_t\bigr)\,\nabla_{\Omega}F(X_t,\Omega)\Bigr], \qquad (4)$
  • where
  • $\tilde{y}_t = \frac{1}{|T(X_t)|}\sum_{Z\in T(X_t)} F(Z,\Omega),$
  • γt is the step size and the term λ(Ω−Ω0) acts as weight decay. Intuitively, the average output of the images other than Xt on the same trajectory is used as the pseudo label, ỹt, for the incremental training. Eq. (4) has the same form as Eq. (2) except that the pseudo label ỹt is used in place of the ground-truth label.
  • The updating process is implemented by the back-propagation training for neural networks. Therefore, we can feed the pseudo labels into the existing training framework to perform the correspondence driven incremental learning. Because a validation set is not available, the stochastic training is carried out in only one pass, i.e., each additional sample Xt is used only once, which largely relieves the over-fitting risk. Note that, since the face images are collected sequentially from videos, data shuffling is critical in the stochastic training.
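  • Putting Eq. (4) and the one-pass procedure together, the following is a hedged sketch of the correspondence-driven trainer; forward and grad are assumed callables for the network output and its parameter gradient, and the data layout (a list of (face, track_id) pairs) is an assumption for illustration.

```python
import random
import numpy as np

def correspondence_incremental_train(params0, samples, forward, grad,
                                     lam=1e-3, gamma=1e-4, seed=0):
    """One-pass stochastic training with trajectory pseudo labels, per Eq. (4).

    samples: list of (x, track_id); faces sharing a track_id form a trajectory T(X).
    """
    params = params0.copy()
    tracks = {}
    for x, tid in samples:                      # group faces by trajectory
        tracks.setdefault(tid, []).append(x)

    order = list(range(len(samples)))
    random.Random(seed).shuffle(order)          # shuffle: data arrive sequentially from video
    for i in order:                             # single pass: each sample used once
        x_t, tid = samples[i]
        others = [z for z in tracks[tid] if z is not x_t]
        if not others:
            continue                            # a singleton track gives no constraint
        # Pseudo label: average prediction over the other faces on the trajectory.
        y_tilde = np.mean([forward(params, z) for z in others])
        residual = forward(params, x_t) - y_tilde
        params = params - gamma * (lam * (params - params0) + residual * grad(params, x_t))
    return params
```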
  • For a pair of face images, X and Z, the related items in the second term of Eq. (3) are
  • $\frac{1}{4|S|}\,\frac{1}{|T(X)|}\bigl(F(X,\Omega) - F(Z,\Omega)\bigr)^2 + \frac{1}{4|S|}\,\frac{1}{|T(Z)|}\bigl(F(X,\Omega) - F(Z,\Omega)\bigr)^2 = \frac{1}{2|S|}\,\frac{1}{\sqrt{|T(X)|\,|T(Z)|}}\bigl(F(X,\Omega) - F(Z,\Omega)\bigr)^2,$
  • because |T(X)|=|T(Z)|. This shows that the second term of Eq. (3) is exactly a normalized Laplacian regularization term over a graph whose nodes are the images and whose edges connect pairs of images on the same trajectory. The contribution here is a stochastic training method for the normalized Laplacian regularization that does not explicitly use the Laplacian matrix. This training method allows the system to use the back-propagation training framework to conduct the incremental learning.
  • FIG. 5 shows an exemplary output of the age/gender analysis system. Gender influences the age estimation considerably. Therefore, instead of using a unified age model for both males and females, this embodiment trains separate models for the two gender groups. To study the performance of gender and age estimation independently, the system separates the males and females in the previous age estimation experiments. In the gender and age estimation system, the system predicts the gender first and then selects the corresponding age model for age prediction. Some example screenshots are shown in FIG. 5, where the bounding boxes of faces are drawn in different colors to indicate the person identification. The gender and an age range centered at the estimate are also shown. For instance, "7:F10-15" means that person #7 is female with an estimated age between 10 and 15.
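  • The gender-first routing described above can be summarized in a few lines; the sketch assumes each model object exposes a predict(patch) method returning a scalar, which is an assumed interface rather than the patent's API.

```python
def predict_gender_and_age(patch, gender_net, female_age_net, male_age_net):
    """Two-stage prediction: estimate gender first, then use the matching age model."""
    gender_score = gender_net.predict(patch)    # < 0: female, >= 0: male
    if gender_score < 0:
        return "F", female_age_net.predict(patch)
    return "M", male_age_net.predict(patch)

# A display label such as "7:F10-15" would then combine the track id with the
# predicted gender and an age range centered on the estimate.
```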
  • A video analysis system based on statistical learning models maintains its performance when deployed to a real-world scenario, even though the training data can hardly cover the unfamiliar variations encountered in reality. The object correspondences in successive frames are leveraged as weak supervision to conduct incremental learning. The strategy is applied to the CNN-based gender and age estimation system. By using incremental stochastic training, the system outputs consistent and stable results on face images from the same trajectory in videos. On a video dataset containing 884 persons with 68,759 faces, the supervision of correspondences further improves the estimation accuracy by a large margin. The strength of the correspondence-driven incremental learning originates from its capability to address the mismatch problem caused by factors such as lighting conditions, view angles, and image quality or noise levels in the deployment environments, which are hardly ever exactly the same as those in the training set. By forcing the models to produce consistent results for the same person, the models become less sensitive to these factors. Therefore, the updated models outperform the original ones noticeably even when only a small amount of additional data is added in the incremental learning (e.g., around 6K to 10K faces are added in ExpB for each video clip). The system is not restricted to gender and age estimation and is also applicable to other facial attribute recognition, e.g., race. Thus, the system can be applied to a number of personally identifiable characteristics as well.
  • The invention may be implemented in hardware, firmware or software, or a combination of the three. Preferably the invention is implemented in a computer program executed on a programmable computer having a processor, a data storage system, volatile and non-volatile memory and/or storage elements, at least one input device and at least one output device.
  • By way of example, a computer to support the system is discussed next. The computer preferably includes a processor, random access memory (RAM), a program memory (preferably a writable read-only memory (ROM) such as a flash ROM) and an input/output (I/O) controller coupled by a CPU bus. The computer may optionally include a hard drive controller which is coupled to a hard disk and CPU bus. Hard disk may be used for storing application programs, such as the present invention, and data. Alternatively, application programs may be stored in RAM or ROM. I/O controller is coupled by means of an I/O bus to an I/O interface. I/O interface receives and transmits data in analog or digital form over communication links such as a serial link, local area network, wireless link, and parallel link. Optionally, a display, a keyboard and a pointing device (mouse) may also be connected to I/O bus. Alternatively, separate connections (separate buses) may be used for I/O interface, display, keyboard and pointing device. Programmable processing system may be preprogrammed or it may be programmed (and reprogrammed) by downloading a program from another source (e.g., a floppy disk, CD-ROM, or another computer).
  • Each computer program is tangibly stored in a machine-readable storage media or device (e.g., program memory or magnetic disk) readable by a general or special purpose programmable computer, for configuring and controlling operation of a computer when the storage media or device is read by the computer to perform the procedures described herein. The inventive system may also be considered to be embodied in a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.
  • The invention has been described herein in considerable detail in order to comply with the patent Statutes and to provide those skilled in the art with the information needed to apply the novel principles and to construct and use such specialized components as are required. However, it is to be understood that the invention can be carried out by specifically different equipment and devices, and that various modifications, both as to the equipment details and operating procedures, can be accomplished without departing from the scope of the invention itself.

Claims (20)

1. A computer implemented method to classify camera images, comprising
a. generating a baseline gender model and an age estimation model using one or more convolutional neural networks (CNNs);
b. capturing correspondences of faces by face tracking, and
c. applying incremental learning to the CNNs and enforcing correspondence constraint such that CNN outputs are consistent and stable for one person.
2. The method of claim 1, comprising collecting correspondences of faces by face tracking.
3. The method of claim 1, comprising providing incremental learning in the neural network by enforcing a correspondence constraint.
4. The method of claim 1, comprising implementing incremental training with an online stochastic gradient descent process.
5. The method of claim 1, comprising applying a correspondence given by visual tracking to update a pre-trained model.
6. The method of claim 1, comprising:
a. performing face detection and tracking;
b. aligning the detected faces;
c. normalizing the faces to a plurality of patches; and
d. sending the normalized faces to the CNNs to estimate gender and age.
7. The method of claim 1, comprising applying CNNs to face detection and face alignment.
8. The method of claim 1, comprising performing multi-hypothesis visual tracking to obtain correspondence of faces in successive video frames.
9. The method of claim 1, comprising updating the baseline models using data collected online to avoid model drift.
10. The method of claim 1, comprising applying weakly supervised incremental training to face correspondences.
11. A system to classify camera images, comprising
a. means for generating a baseline gender model and an age estimation model using one or more convolutional neural networks (CNNs);
b. means for capturing correspondences of faces by face tracking, and
c. means for applying incremental learning to the CNNs and enforcing correspondence constraint such that CNN outputs are consistent and stable for one person.
12. The system of claim 11, comprising means for collecting correspondences of faces by face tracking.
13. The system of claim 11, comprising means for providing incremental learning in the neural network by enforcing a correspondence constraint.
14. The system of claim 11, comprising means for implementing incremental training with an online stochastic gradient descent process.
15. The system of claim 11, comprising means for applying a correspondence given by visual tracking to update a pre-trained model.
16. The system of claim 11, comprising:
a. means for performing face detection and tracking;
b. means for aligning the detected faces;
c. means for normalizing the faces to a plurality of patches; and
d. means for sending the normalized faces to the CNNs to estimate gender and age.
17. The system of claim 11, comprising means for applying CNNs to face detection and face alignment.
18. The system of claim 11, comprising means for performing multi-hypothesis visual tracking to obtain correspondence of faces in successive video frames.
19. The system of claim 11, comprising means for updating the baseline models using data collected online to avoid model drift.
20. The system of claim 11, comprising means for applying weakly supervised incremental training to face correspondences.
US12/790,979 2010-03-15 2010-05-31 Systems and methods for determining personal characteristics Active 2032-01-04 US8582807B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/790,979 US8582807B2 (en) 2010-03-15 2010-05-31 Systems and methods for determining personal characteristics

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US31387810P 2010-03-15 2010-03-15
US12/790,979 US8582807B2 (en) 2010-03-15 2010-05-31 Systems and methods for determining personal characteristics

Publications (2)

Publication Number Publication Date
US20110222724A1 true US20110222724A1 (en) 2011-09-15
US8582807B2 US8582807B2 (en) 2013-11-12

Family

ID=44559992

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/790,979 Active 2032-01-04 US8582807B2 (en) 2010-03-15 2010-05-31 Systems and methods for determining personal characteristics

Country Status (1)

Country Link
US (1) US8582807B2 (en)

Cited By (98)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120143797A1 (en) * 2010-12-06 2012-06-07 Microsoft Corporation Metric-Label Co-Learning
CN103294982A (en) * 2012-02-24 2013-09-11 北京明日时尚信息技术有限公司 Method and system for figure detection, body part positioning, age estimation and gender identification in picture of network
CN103544506A (en) * 2013-10-12 2014-01-29 Tcl集团股份有限公司 Method and device for classifying images on basis of convolutional neural network
CN103984959A (en) * 2014-05-26 2014-08-13 中国科学院自动化研究所 Data-driven and task-driven image classification method
CN104021373A (en) * 2014-05-27 2014-09-03 江苏大学 Semi-supervised speech feature variable factor decomposition method
US8837787B2 (en) 2012-04-05 2014-09-16 Ancestry.Com Operations Inc. System and method for associating a photo with a data structure node
CN104077577A (en) * 2014-07-03 2014-10-01 浙江大学 Trademark detection method based on convolutional neural network
CN104537630A (en) * 2015-01-22 2015-04-22 厦门美图之家科技有限公司 Method and device for image beautifying based on age estimation
CN104572965A (en) * 2014-12-31 2015-04-29 南京理工大学 Search-by-image system based on convolutional neural network
US20150139485A1 (en) * 2013-11-15 2015-05-21 Facebook, Inc. Pose-aligned networks for deep attribute modeling
WO2015078185A1 (en) * 2013-11-29 2015-06-04 华为技术有限公司 Convolutional neural network and target object detection method based on same
WO2015078018A1 (en) * 2013-11-30 2015-06-04 Xiaoou Tang Method and system for face image recognition
US20150254532A1 (en) * 2014-03-07 2015-09-10 Qualcomm Incorporated Photo management
CN104966104A (en) * 2015-06-30 2015-10-07 孙建德 Three-dimensional convolutional neural network based video classifying method
CN105117739A (en) * 2015-07-29 2015-12-02 南京信息工程大学 Clothes classifying method based on convolutional neural network
WO2015192263A1 (en) * 2014-06-16 2015-12-23 Xiaoou Tang A method and a system for face verification
CN105224963A (en) * 2014-06-04 2016-01-06 华为技术有限公司 The method of changeable degree of depth learning network structure and terminal
WO2016077027A1 (en) * 2014-11-13 2016-05-19 Nec Laboratories America, Inc. Hyper-class augmented and regularized deep learning for fine-grained image classification
US20160140436A1 (en) * 2014-11-15 2016-05-19 Beijing Kuangshi Technology Co., Ltd. Face Detection Using Machine Learning
CN105654049A (en) * 2015-12-29 2016-06-08 中国科学院深圳先进技术研究院 Facial expression recognition method and device
CN105678381A (en) * 2016-01-08 2016-06-15 浙江宇视科技有限公司 Gender classification network training method, gender classification method and related device
CN105678232A (en) * 2015-12-30 2016-06-15 中通服公众信息产业股份有限公司 Face image feature extraction and comparison method based on deep learning
CN105701468A (en) * 2016-01-12 2016-06-22 华南理工大学 Face attractiveness evaluation method based on deep learning
CN105740823A (en) * 2016-02-01 2016-07-06 北京高科中天技术股份有限公司 Dynamic gesture trace recognition method based on depth convolution neural network
CN105760834A (en) * 2016-02-14 2016-07-13 北京飞搜科技有限公司 Face feature point locating method
US9400922B2 (en) * 2014-05-29 2016-07-26 Beijing Kuangshi Technology Co., Ltd. Facial landmark localization using coarse-to-fine cascaded neural networks
CN105956571A (en) * 2016-05-13 2016-09-21 华侨大学 Age estimation method for face image
CN106033594A (en) * 2015-03-11 2016-10-19 日本电气株式会社 Recovery method and apparatus for spatial information based on feature obtained by convolutional neural network
CN106203284A (en) * 2016-06-30 2016-12-07 华中科技大学 Based on convolutional neural networks and the method for detecting human face of condition random field
CN106295502A (en) * 2016-07-25 2017-01-04 厦门中控生物识别信息技术有限公司 A kind of method for detecting human face and device
CN106503661A (en) * 2016-10-25 2017-03-15 陕西师范大学 Face gender identification method based on fireworks depth belief network
CN106529667A (en) * 2016-09-23 2017-03-22 中国石油大学(华东) Logging facies identification and analysis method based on fuzzy depth learning in big data environment
CN106548145A (en) * 2016-10-31 2017-03-29 北京小米移动软件有限公司 Image-recognizing method and device
US20170124415A1 (en) * 2015-11-04 2017-05-04 Nec Laboratories America, Inc. Subcategory-aware convolutional neural networks for object detection
CN106650573A (en) * 2016-09-13 2017-05-10 华南理工大学 Cross-age face verification method and system
CN106778558A (en) * 2016-12-02 2017-05-31 电子科技大学 A kind of facial age estimation method based on depth sorting network
CN106778854A (en) * 2016-12-07 2017-05-31 西安电子科技大学 Activity recognition method based on track and convolutional neural networks feature extraction
CN106951867A (en) * 2017-03-22 2017-07-14 成都擎天树科技有限公司 Face identification method, device, system and equipment based on convolutional neural networks
CN107145857A (en) * 2017-04-29 2017-09-08 深圳市深网视界科技有限公司 Face character recognition methods, device and method for establishing model
WO2017165363A1 (en) * 2016-03-21 2017-09-28 The Procter & Gamble Company Systems and methods for providing customized product recommendations
CN107239803A (en) * 2017-07-21 2017-10-10 国家***第海洋研究所 Utilize the sediment automatic classification method of deep learning neutral net
CN107251091A (en) * 2015-02-24 2017-10-13 株式会社日立制作所 Image processing method, image processing apparatus
US20170300785A1 (en) * 2016-04-14 2017-10-19 Linkedln Corporation Deep convolutional neural network prediction of image professionalism
CN107330383A (en) * 2017-06-18 2017-11-07 天津大学 A kind of face identification method based on depth convolutional neural networks
CN107358257A (en) * 2017-07-07 2017-11-17 华南理工大学 Under a kind of big data scene can incremental learning image classification training method
CN107430705A (en) * 2015-03-17 2017-12-01 高通股份有限公司 Samples selection for re -training grader
US20170351905A1 (en) * 2016-06-06 2017-12-07 Samsung Electronics Co., Ltd. Learning model for salient facial region detection
CN107480178A (en) * 2017-07-01 2017-12-15 广州深域信息科技有限公司 A kind of pedestrian's recognition methods again compared based on image and video cross-module state
CN107609512A (en) * 2017-09-12 2018-01-19 上海敏识网络科技有限公司 A kind of video human face method for catching based on neutral net
CN107622261A (en) * 2017-11-03 2018-01-23 北方工业大学 Face age estimation method and device based on deep learning
US20180075317A1 (en) * 2016-09-09 2018-03-15 Microsoft Technology Licensing, Llc Person centric trait specific photo match ranking engine
CN107808150A (en) * 2017-11-20 2018-03-16 珠海习悦信息技术有限公司 The recognition methods of human body video actions, device, storage medium and processor
US20180150684A1 (en) * 2016-11-30 2018-05-31 Shenzhen AltumView Technology Co., Ltd. Age and gender estimation using small-scale convolutional neural network (cnn) modules for embedded systems
US10013653B2 (en) * 2016-01-26 2018-07-03 Università della Svizzera italiana System and a method for learning features on geometric domains
US20180211099A1 (en) * 2015-07-20 2018-07-26 University Of Maryland, College Park Deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition
CN108334842A (en) * 2018-02-02 2018-07-27 成都国铁电气设备有限公司 A method of identification pantograph-catenary current collection arcing size
US10043240B2 (en) 2016-04-14 2018-08-07 Microsoft Technology Licensing, Llc Optimal cropping of digital image based on professionalism score of subject
US10043254B2 (en) 2016-04-14 2018-08-07 Microsoft Technology Licensing, Llc Optimal image transformation based on professionalism score of subject
CN108537112A (en) * 2017-03-03 2018-09-14 佳能株式会社 Image processing apparatus, image processing system, image processing method and storage medium
CN108629291A (en) * 2018-04-13 2018-10-09 深圳市未来媒体技术研究院 A kind of face depth prediction approach of anti-grid effect
CN108629284A (en) * 2017-10-28 2018-10-09 深圳奥瞳科技有限责任公司 The method and device of Real- time Face Tracking and human face posture selection based on embedded vision system
CN109116984A (en) * 2018-07-27 2019-01-01 冯仕昌 A kind of tool box for three-dimension interaction scene
CN109154988A (en) * 2016-04-21 2019-01-04 台拉维夫大学拉莫特有限公司 concatenated convolutional neural network
US20190034706A1 (en) * 2010-06-07 2019-01-31 Affectiva, Inc. Facial tracking with classifiers for query evaluation
CN109350051A (en) * 2018-11-28 2019-02-19 华南理工大学 The head wearable device and its working method with adjusting are assessed for the state of mind
US10210430B2 (en) 2016-01-26 2019-02-19 Fabula Ai Limited System and a method for learning features on geometric domains
CN109583583A (en) * 2017-09-29 2019-04-05 腾讯科技(深圳)有限公司 Neural network training method, device, computer equipment and readable medium
CN109655059A (en) * 2019-01-09 2019-04-19 武汉大学 Vision-inertia fusion navigation system and method based on theta-increment learning
US10269120B2 (en) 2016-11-25 2019-04-23 Industrial Technology Research Institute Character recognition systems and character recognition methods thereof using convolutional neural network
WO2019120019A1 (en) * 2017-12-20 2019-06-27 Oppo广东移动通信有限公司 User gender prediction method and apparatus, storage medium and electronic device
CN109993207A (en) * 2019-03-01 2019-07-09 华南理工大学 A kind of image method for secret protection and system based on target detection
CN110012060A (en) * 2019-02-13 2019-07-12 平安科技(深圳)有限公司 Information-pushing method, device, storage medium and the server of mobile terminal
US20190266386A1 (en) * 2018-02-28 2019-08-29 Chanel Parfums Beaute Method for building a computer-implemented tool for assessment of qualitative features from face images
CN110287795A (en) * 2019-05-24 2019-09-27 北京爱诺斯科技有限公司 A kind of eye age detection method based on image analysis
US20190378171A1 (en) * 2018-06-06 2019-12-12 Walmart Apollo, Llc Targeted advertisement system
CN110598638A (en) * 2019-09-12 2019-12-20 Oppo广东移动通信有限公司 Model training method, face gender prediction method, device and storage medium
US10540768B2 (en) 2015-09-30 2020-01-21 Samsung Electronics Co., Ltd. Apparatus and method to segment object from image
CN110826469A (en) * 2019-11-01 2020-02-21 Oppo广东移动通信有限公司 Person detection method and device and computer readable storage medium
US10574883B2 (en) 2017-05-31 2020-02-25 The Procter & Gamble Company System and method for guiding a user to take a selfie
US20200090006A1 (en) * 2017-05-19 2020-03-19 Deepmind Technologies Limited Imagination-based agent neural networks
US20200259347A1 (en) * 2019-02-11 2020-08-13 Alfi, Inc. Methods and apparatus for a tablet computer system incorporating a battery charging station
CN111626303A (en) * 2020-05-29 2020-09-04 南京甄视智能科技有限公司 Sex and age identification method, sex and age identification device, storage medium and server
CN111695415A (en) * 2020-04-28 2020-09-22 平安科技(深圳)有限公司 Construction method and identification method of image identification model and related equipment
US10818007B2 (en) 2017-05-31 2020-10-27 The Procter & Gamble Company Systems and methods for determining apparent skin age
CN111967382A (en) * 2020-08-14 2020-11-20 北京金山云网络技术有限公司 Age estimation method, and training method and device of age estimation model
CN112101083A (en) * 2019-06-17 2020-12-18 辉达公司 Object detection with weak supervision using one or more neural networks
CN112784773A (en) * 2021-01-27 2021-05-11 展讯通信(上海)有限公司 Image processing method and device, storage medium and terminal
CN113344792A (en) * 2021-08-02 2021-09-03 浙江大华技术股份有限公司 Image generation method and device and electronic equipment
WO2021196389A1 (en) * 2020-04-03 2021-10-07 平安科技(深圳)有限公司 Facial action unit recognition method and apparatus, electronic device, and storage medium
US11157726B2 (en) 2017-04-14 2021-10-26 Koninklijike Philips N.V. Person identification systems and methods
US11172168B2 (en) * 2017-12-29 2021-11-09 Bull Sas Movement or topology prediction for a camera network
US11222196B2 (en) 2018-07-11 2022-01-11 Samsung Electronics Co., Ltd. Simultaneous recognition of facial attributes and identity in organizing photo albums
US11386702B2 (en) * 2017-09-30 2022-07-12 Canon Kabushiki Kaisha Recognition apparatus and method
TWI772668B (en) * 2019-01-31 2022-08-01 大陸商北京市商湯科技開發有限公司 Method, device and electronic apparatus for target object processing and storage medium thereof
US11423694B2 (en) * 2019-06-19 2022-08-23 Samsung Electronics Company, Ltd. Methods and systems for dynamic and incremental face recognition
US11461919B2 (en) 2016-04-21 2022-10-04 Ramot At Tel Aviv University Ltd. Cascaded neural network
US11527086B2 (en) 2020-06-24 2022-12-13 Bank Of America Corporation System for character recognition in a digital image processing environment
US11527091B2 (en) * 2019-03-28 2022-12-13 Nec Corporation Analyzing apparatus, control method, and program

Families Citing this family (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015196052A1 (en) 2014-06-19 2015-12-23 Massachusetts Institute Of Technology Lubricant-impregnated surfaces for electrochemical applications, and devices and systems using same
US9652745B2 (en) * 2014-06-20 2017-05-16 Hirevue, Inc. Model-driven evaluator bias detection
US10417525B2 (en) * 2014-09-22 2019-09-17 Samsung Electronics Co., Ltd. Object recognition with reduced neural network weight precision
CN104598871B (en) * 2014-12-06 2017-11-17 电子科技大学 Facial age computation method based on correlation regression
CN105808610B (en) * 2014-12-31 2019-12-20 中国科学院深圳先进技术研究院 Internet picture filtering method and device
CN104537393B (en) * 2015-01-04 2018-01-16 大连理工大学 Traffic sign recognition method based on multi-resolution convolutional neural networks
CN106156846B (en) * 2015-03-30 2018-12-25 日本电气株式会社 Method and apparatus for processing convolutional neural network features
US10860887B2 (en) 2015-11-16 2020-12-08 Samsung Electronics Co., Ltd. Method and apparatus for recognizing object, and method and apparatus for training recognition model
CN105426917A (en) * 2015-11-23 2016-03-23 广州视源电子科技股份有限公司 Component classification method and apparatus
CN105512620B (en) * 2015-11-30 2019-07-26 北京眼神智能科技有限公司 The training method and device of convolutional neural networks for recognition of face
CN108701210B (en) 2016-02-02 2021-08-17 北京市商汤科技开发有限公司 Method and system for CNN network adaptation and object online tracking
US10424072B2 (en) 2016-03-01 2019-09-24 Samsung Electronics Co., Ltd. Leveraging multi cues for fine-grained object classification
CN105825183B (en) * 2016-03-14 2019-02-12 合肥工业大学 Facial expression recognition method based on partially occluded images
JP6727543B2 (en) * 2016-04-01 2020-07-22 富士ゼロックス株式会社 Image pattern recognition device and program
CN105975916B (en) * 2016-04-28 2019-10-11 西安电子科技大学 Age estimation method based on multi-output convolutional neural networks and ordinal regression
CN105975931B (en) * 2016-05-04 2019-06-14 浙江大学 Convolutional neural network face recognition method based on multi-scale pooling
CN107368770B (en) * 2016-05-12 2021-05-11 江苏安纳泰克能源服务有限公司 Method and system for automatically identifying returning passengers
US10032067B2 (en) 2016-05-28 2018-07-24 Samsung Electronics Co., Ltd. System and method for a unified architecture multi-task deep learning machine for object recognition
CN106295521B (en) * 2016-07-29 2019-06-04 厦门美图之家科技有限公司 Gender identification method, device and computing equipment based on multi-output convolutional neural networks
CN106354816B (en) * 2016-08-30 2019-12-13 东软集团股份有限公司 Video image processing method and device
US10198626B2 (en) * 2016-10-19 2019-02-05 Snap Inc. Neural networks for facial modeling
CN107909026B (en) * 2016-11-30 2021-08-13 深圳奥瞳科技有限责任公司 Small-scale convolutional neural network based age and/or gender assessment method and system
CN106778584B (en) * 2016-12-08 2019-07-16 南京邮电大学 Face age estimation method based on the fusion of deep and shallow features
CN106845383B (en) * 2017-01-16 2023-06-06 腾讯科技(上海)有限公司 Human head detection method and device
CN107203740A (en) * 2017-04-24 2017-09-26 华侨大学 Face age estimation method based on deep learning
US10726301B2 (en) 2017-06-29 2020-07-28 The Procter & Gamble Company Method for treating a surface
CN107423701B (en) * 2017-07-17 2020-09-01 智慧眼科技股份有限公司 Unsupervised face feature learning method and device based on generative adversarial networks
CN107358648B (en) * 2017-07-17 2019-08-27 中国科学技术大学 Real-time fully automatic high-quality three-dimensional face reconstruction method based on a single facial image
CN107463888B (en) * 2017-07-21 2020-05-19 竹间智能科技(上海)有限公司 Face emotion analysis method and system based on multi-task learning and deep learning
KR101977174B1 (en) 2017-09-13 2019-05-10 이재준 Apparatus, method and computer program for analyzing image
US10452954B2 (en) * 2017-09-14 2019-10-22 Google Llc Object detection and representation in images
CN107590478A (en) * 2017-09-26 2018-01-16 四川长虹电器股份有限公司 Age estimation method based on deep learning
CN109583277B (en) * 2017-09-29 2021-04-20 大连恒锐科技股份有限公司 Gender determination method of barefoot footprint based on CNN
JP6658822B2 (en) * 2017-10-30 2020-03-04 ダイキン工業株式会社 Concentration estimation device
CN107870321B (en) * 2017-11-03 2020-12-29 电子科技大学 Radar one-dimensional range profile target identification method based on pseudo-label learning
CN108280397B (en) * 2017-12-25 2020-04-07 西安电子科技大学 Human body image hair detection method based on deep convolutional neural network
CN108510466A (en) * 2018-03-27 2018-09-07 百度在线网络技术(北京)有限公司 Method and apparatus for verifying face
WO2019222327A1 (en) 2018-05-17 2019-11-21 The Procter & Gamble Company Systems and methods for hair analysis
EP3793427A1 (en) 2018-05-17 2021-03-24 The Procter & Gamble Company Systems and methods for hair coverage analysis
CN108985256A (en) * 2018-08-01 2018-12-11 曜科智能科技(上海)有限公司 Multi-neural-network people counting method, system, medium and terminal based on scene density distribution
CN109685826A (en) * 2018-11-27 2019-04-26 哈尔滨工业大学(深圳) Target tracking method, system and storage medium with adaptive feature selection
EP3899974A1 (en) 2018-12-21 2021-10-27 The Procter & Gamble Company Apparatus and method for operating a personal grooming appliance or household cleaning appliance
CN109886095A (en) * 2019-01-08 2019-06-14 浙江新再灵科技股份有限公司 Vision-based passenger attribute recognition system and method using a lightweight convolutional neural network
CN111222545B (en) * 2019-12-24 2022-04-19 西安电子科技大学 Image classification method based on linear programming incremental learning
US11562137B2 (en) 2020-04-14 2023-01-24 Bank Of America Corporation System to correct model drift for natural language understanding
US11580456B2 (en) 2020-04-27 2023-02-14 Bank Of America Corporation System to correct model drift in machine learning application

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060078163A1 (en) * 2002-06-07 2006-04-13 Microsoft Corporation Mode- based multi-hypothesis tracking using parametric contours
US20070219801A1 (en) * 2006-03-14 2007-09-20 Prabha Sundaram System, method and computer program product for updating a biometric model based on changes in a biometric feature of a user

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ying-Nong Chen, Chin-Chuan Han, Cheng-Tzu Wang, Bor-Shenn Jeng, Kuo-Chin Fan, "The Application of a Convolution Neural Network on Face and License Plate Detection," 18th International Conference on Pattern Recognition (ICPR 2006), vol. 3, pp. 552-555. *

Cited By (127)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190034706A1 (en) * 2010-06-07 2019-01-31 Affectiva, Inc. Facial tracking with classifiers for query evaluation
US20120143797A1 (en) * 2010-12-06 2012-06-07 Microsoft Corporation Metric-Label Co-Learning
CN103294982A (en) * 2012-02-24 2013-09-11 北京明日时尚信息技术有限公司 Method and system for person detection, body part localization, age estimation and gender identification in network images
US8837787B2 (en) 2012-04-05 2014-09-16 Ancestry.Com Operations Inc. System and method for associating a photo with a data structure node
CN103544506A (en) * 2013-10-12 2014-01-29 Tcl集团股份有限公司 Method and device for classifying images on basis of convolutional neural network
US11380119B2 (en) * 2013-11-15 2022-07-05 Meta Platforms, Inc. Pose-aligned networks for deep attribute modeling
US10402632B2 (en) * 2013-11-15 2019-09-03 Facebook, Inc. Pose-aligned networks for deep attribute modeling
US20160328606A1 (en) * 2013-11-15 2016-11-10 Facebook, Inc. Pose-aligned networks for deep attribute modeling
US9400925B2 (en) * 2013-11-15 2016-07-26 Facebook, Inc. Pose-aligned networks for deep attribute modeling
US20150139485A1 (en) * 2013-11-15 2015-05-21 Facebook, Inc. Pose-aligned networks for deep attribute modeling
WO2015078185A1 (en) * 2013-11-29 2015-06-04 华为技术有限公司 Convolutional neural network and target object detection method based on same
WO2015078018A1 (en) * 2013-11-30 2015-06-04 Xiaoou Tang Method and system for face image recognition
US9530047B1 (en) 2013-11-30 2016-12-27 Beijing Sensetime Technology Development Co., Ltd. Method and system for face image recognition
CN105849747A (en) * 2013-11-30 2016-08-10 北京市商汤科技开发有限公司 Method and system for face image recognition
JP2017515189A (en) * 2014-03-07 2017-06-08 クゥアルコム・インコーポレイテッドQualcomm Incorporated Photo management
US20150254532A1 (en) * 2014-03-07 2015-09-10 Qualcomm Incorporated Photo management
US10043112B2 (en) * 2014-03-07 2018-08-07 Qualcomm Incorporated Photo management
CN103984959A (en) * 2014-05-26 2014-08-13 中国科学院自动化研究所 Data-driven and task-driven image classification method
CN104021373A (en) * 2014-05-27 2014-09-03 江苏大学 Semi-supervised speech feature variable factor decomposition method
US9400922B2 (en) * 2014-05-29 2016-07-26 Beijing Kuangshi Technology Co., Ltd. Facial landmark localization using coarse-to-fine cascaded neural networks
CN105224963A (en) * 2014-06-04 2016-01-06 华为技术有限公司 Method for varying a deep learning network structure, and terminal
CN106415594A (en) * 2014-06-16 2017-02-15 北京市商汤科技开发有限公司 A method and a system for face verification
WO2015192263A1 (en) * 2014-06-16 2015-12-23 Xiaoou Tang A method and a system for face verification
CN104077577A (en) * 2014-07-03 2014-10-01 浙江大学 Trademark detection method based on convolutional neural network
WO2016077027A1 (en) * 2014-11-13 2016-05-19 Nec Laboratories America, Inc. Hyper-class augmented and regularized deep learning for fine-grained image classification
US20160140436A1 (en) * 2014-11-15 2016-05-19 Beijing Kuangshi Technology Co., Ltd. Face Detection Using Machine Learning
US10268950B2 (en) * 2014-11-15 2019-04-23 Beijing Kuangshi Technology Co., Ltd. Face detection using machine learning
CN104572965A (en) * 2014-12-31 2015-04-29 南京理工大学 Search-by-image system based on convolutional neural network
CN104537630A (en) * 2015-01-22 2015-04-22 厦门美图之家科技有限公司 Method and device for image beautifying based on age estimation
CN107251091A (en) * 2015-02-24 2017-10-13 株式会社日立制作所 Image processing method, image processing apparatus
CN106033594A (en) * 2015-03-11 2016-10-19 日本电气株式会社 Recovery method and apparatus for spatial information based on feature obtained by convolutional neural network
US11334789B2 (en) * 2015-03-17 2022-05-17 Qualcomm Incorporated Feature selection for retraining classifiers
CN107430705A (en) * 2015-03-17 2017-12-01 高通股份有限公司 Sample selection for retraining classifiers
CN104966104A (en) * 2015-06-30 2015-10-07 孙建德 Video classification method based on three-dimensional convolutional neural networks
US10860837B2 (en) * 2015-07-20 2020-12-08 University Of Maryland, College Park Deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition
US20180211099A1 (en) * 2015-07-20 2018-07-26 University Of Maryland, College Park Deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition
CN105117739A (en) * 2015-07-29 2015-12-02 南京信息工程大学 Clothes classifying method based on convolutional neural network
US10540768B2 (en) 2015-09-30 2020-01-21 Samsung Electronics Co., Ltd. Apparatus and method to segment object from image
US20170124415A1 (en) * 2015-11-04 2017-05-04 Nec Laboratories America, Inc. Subcategory-aware convolutional neural networks for object detection
US9965719B2 (en) * 2015-11-04 2018-05-08 Nec Corporation Subcategory-aware convolutional neural networks for object detection
CN105654049B (en) * 2015-12-29 2019-08-16 中国科学院深圳先进技术研究院 Facial expression recognition method and device
CN105654049A (en) * 2015-12-29 2016-06-08 中国科学院深圳先进技术研究院 Facial expression recognition method and device
CN105678232A (en) * 2015-12-30 2016-06-15 中通服公众信息产业股份有限公司 Face image feature extraction and comparison method based on deep learning
CN105678381A (en) * 2016-01-08 2016-06-15 浙江宇视科技有限公司 Gender classification network training method, gender classification method and related device
CN105701468A (en) * 2016-01-12 2016-06-22 华南理工大学 Face attractiveness evaluation method based on deep learning
US10013653B2 (en) * 2016-01-26 2018-07-03 Università della Svizzera italiana System and a method for learning features on geometric domains
US10210430B2 (en) 2016-01-26 2019-02-19 Fabula Ai Limited System and a method for learning features on geometric domains
CN105740823A (en) * 2016-02-01 2016-07-06 北京高科中天技术股份有限公司 Dynamic gesture trajectory recognition method based on deep convolutional neural networks
CN105760834A (en) * 2016-02-14 2016-07-13 北京飞搜科技有限公司 Face feature point locating method
CN108701323A (en) * 2016-03-21 2018-10-23 宝洁公司 System and method for providing customized product recommendations
JP2019512797A (en) * 2016-03-21 2019-05-16 ザ プロクター アンド ギャンブル カンパニー System and method for providing customized product recommendations
WO2017165363A1 (en) * 2016-03-21 2017-09-28 The Procter & Gamble Company Systems and methods for providing customized product recommendations
US11055762B2 (en) 2016-03-21 2021-07-06 The Procter & Gamble Company Systems and methods for providing customized product recommendations
US10043240B2 (en) 2016-04-14 2018-08-07 Microsoft Technology Licensing, Llc Optimal cropping of digital image based on professionalism score of subject
US20170300785A1 (en) * 2016-04-14 2017-10-19 Linkedln Corporation Deep convolutional neural network prediction of image professionalism
US10043254B2 (en) 2016-04-14 2018-08-07 Microsoft Technology Licensing, Llc Optimal image transformation based on professionalism score of subject
US9904871B2 (en) * 2016-04-14 2018-02-27 Microsoft Technology Licensing, LLC Deep convolutional neural network prediction of image professionalism
US10621477B2 (en) 2016-04-21 2020-04-14 Ramot At Tel Aviv University Ltd. Cascaded convolutional neural network
CN109154988A (en) * 2016-04-21 2019-01-04 台拉维夫大学拉莫特有限公司 Cascaded convolutional neural network
US11461919B2 (en) 2016-04-21 2022-10-04 Ramot At Tel Aviv University Ltd. Cascaded neural network
CN105956571A (en) * 2016-05-13 2016-09-21 华侨大学 Age estimation method for face image
US20170351905A1 (en) * 2016-06-06 2017-12-07 Samsung Electronics Co., Ltd. Learning model for salient facial region detection
US10579860B2 (en) * 2016-06-06 2020-03-03 Samsung Electronics Co., Ltd. Learning model for salient facial region detection
CN106203284A (en) * 2016-06-30 2016-12-07 华中科技大学 Face detection method based on convolutional neural networks and conditional random fields
CN106295502A (en) * 2016-07-25 2017-01-04 厦门中控生物识别信息技术有限公司 Face detection method and device
US20180075317A1 (en) * 2016-09-09 2018-03-15 Microsoft Technology Licensing, Llc Person centric trait specific photo match ranking engine
CN106650573A (en) * 2016-09-13 2017-05-10 华南理工大学 Cross-age face verification method and system
CN106529667A (en) * 2016-09-23 2017-03-22 中国石油大学(华东) Logging facies identification and analysis method based on fuzzy deep learning in a big data environment
CN106503661A (en) * 2016-10-25 2017-03-15 陕西师范大学 Face gender identification method based on a fireworks deep belief network
CN106548145A (en) * 2016-10-31 2017-03-29 北京小米移动软件有限公司 Image recognition method and device
US10269120B2 (en) 2016-11-25 2019-04-23 Industrial Technology Research Institute Character recognition systems and character recognition methods thereof using convolutional neural network
US20180150684A1 (en) * 2016-11-30 2018-05-31 Shenzhen AltumView Technology Co., Ltd. Age and gender estimation using small-scale convolutional neural network (cnn) modules for embedded systems
US10558908B2 (en) * 2016-11-30 2020-02-11 Altumview Systems Inc. Age and gender estimation using small-scale convolutional neural network (CNN) modules for embedded systems
CN106778558A (en) * 2016-12-02 2017-05-31 电子科技大学 Facial age estimation method based on a deep sorting network
CN106778854A (en) * 2016-12-07 2017-05-31 西安电子科技大学 Activity recognition method based on trajectory and convolutional neural network feature extraction
CN108537112A (en) * 2017-03-03 2018-09-14 佳能株式会社 Image processing apparatus, image processing system, image processing method and storage medium
CN106951867A (en) * 2017-03-22 2017-07-14 成都擎天树科技有限公司 Face identification method, device, system and equipment based on convolutional neural networks
US11157726B2 (en) 2017-04-14 2021-10-26 Koninklijke Philips N.V. Person identification systems and methods
CN107145857A (en) * 2017-04-29 2017-09-08 深圳市深网视界科技有限公司 Face attribute recognition method, device and model building method
US11328183B2 (en) 2017-05-19 2022-05-10 Deepmind Technologies Limited Imagination-based agent neural networks
US20200090006A1 (en) * 2017-05-19 2020-03-19 Deepmind Technologies Limited Imagination-based agent neural networks
US10776670B2 (en) * 2017-05-19 2020-09-15 Deepmind Technologies Limited Imagination-based agent neural networks
US10818007B2 (en) 2017-05-31 2020-10-27 The Procter & Gamble Company Systems and methods for determining apparent skin age
US10574883B2 (en) 2017-05-31 2020-02-25 The Procter & Gamble Company System and method for guiding a user to take a selfie
CN107330383A (en) * 2017-06-18 2017-11-07 天津大学 Face recognition method based on deep convolutional neural networks
CN107480178A (en) * 2017-07-01 2017-12-15 广州深域信息科技有限公司 Pedestrian re-identification method based on cross-modal comparison of images and videos
CN107358257A (en) * 2017-07-07 2017-11-17 华南理工大学 Incremental-learning image classification training method for big data scenarios
CN107239803A (en) * 2017-07-21 2017-10-10 国家***第海洋研究所 Automatic sediment classification method using deep learning neural networks
CN107609512A (en) * 2017-09-12 2018-01-19 上海敏识网络科技有限公司 Video face capture method based on neural networks
CN109583583A (en) * 2017-09-29 2019-04-05 腾讯科技(深圳)有限公司 Neural network training method, device, computer equipment and readable medium
US11386702B2 (en) * 2017-09-30 2022-07-12 Canon Kabushiki Kaisha Recognition apparatus and method
CN108629284A (en) * 2017-10-28 2018-10-09 深圳奥瞳科技有限责任公司 Method and device for real-time face tracking and face pose selection based on an embedded vision system
CN107622261A (en) * 2017-11-03 2018-01-23 北方工业大学 Face age estimation method and device based on deep learning
CN107808150A (en) * 2017-11-20 2018-03-16 珠海习悦信息技术有限公司 Human body video action recognition method, device, storage medium and processor
WO2019120019A1 (en) * 2017-12-20 2019-06-27 Oppo广东移动通信有限公司 User gender prediction method and apparatus, storage medium and electronic device
US11172168B2 (en) * 2017-12-29 2021-11-09 Bull Sas Movement or topology prediction for a camera network
CN108334842A (en) * 2018-02-02 2018-07-27 成都国铁电气设备有限公司 Method for identifying the magnitude of pantograph-catenary current-collection arcing
US20190266386A1 (en) * 2018-02-28 2019-08-29 Chanel Parfums Beaute Method for building a computer-implemented tool for assessment of qualitative features from face images
US10956716B2 (en) * 2018-02-28 2021-03-23 Chanel Parfums Beaute Method for building a computer-implemented tool for assessment of qualitative features from face images
CN108629291A (en) * 2018-04-13 2018-10-09 深圳市未来媒体技术研究院 Face depth prediction method resistant to grid effects
US20190378171A1 (en) * 2018-06-06 2019-12-12 Walmart Apollo, Llc Targeted advertisement system
US11222196B2 (en) 2018-07-11 2022-01-11 Samsung Electronics Co., Ltd. Simultaneous recognition of facial attributes and identity in organizing photo albums
CN109116984A (en) * 2018-07-27 2019-01-01 冯仕昌 Toolbox for three-dimensional interactive scenes
CN109350051A (en) * 2018-11-28 2019-02-19 华南理工大学 Head-mounted wearable device for mental state assessment and adjustment, and working method thereof
CN109655059A (en) * 2019-01-09 2019-04-19 武汉大学 Vision-inertia fusion navigation system and method based on theta-increment learning
US11403489B2 (en) 2019-01-31 2022-08-02 Beijing Sensetime Technology Development Co., Ltd. Target object processing method and apparatus, electronic device, and storage medium
TWI772668B (en) * 2019-01-31 2022-08-01 大陸商北京市商湯科技開發有限公司 Method, device and electronic apparatus for target object processing and storage medium thereof
US10910854B2 (en) * 2019-02-11 2021-02-02 Alfi, Inc. Methods and apparatus for a tablet computer system incorporating a battery charging station
US11824387B2 (en) * 2019-02-11 2023-11-21 Lee Digital, Llc Methods and apparatus for a tablet computer system incorporating a battery charging station
US20210351603A1 (en) * 2019-02-11 2021-11-11 Alfi, Inc. Methods and apparatus for a tablet computer system incorporating a battery charging station
US20200259347A1 (en) * 2019-02-11 2020-08-13 Alfi, Inc. Methods and apparatus for a tablet computer system incorporating a battery charging station
CN110012060A (en) * 2019-02-13 2019-07-12 平安科技(深圳)有限公司 Information-pushing method, device, storage medium and the server of mobile terminal
CN109993207A (en) * 2019-03-01 2019-07-09 华南理工大学 Image privacy protection method and system based on object detection
US11527091B2 (en) * 2019-03-28 2022-12-13 Nec Corporation Analyzing apparatus, control method, and program
CN110287795A (en) * 2019-05-24 2019-09-27 北京爱诺斯科技有限公司 Eye age detection method based on image analysis
CN112101083A (en) * 2019-06-17 2020-12-18 辉达公司 Object detection with weak supervision using one or more neural networks
US11423694B2 (en) * 2019-06-19 2022-08-23 Samsung Electronics Company, Ltd. Methods and systems for dynamic and incremental face recognition
CN110598638A (en) * 2019-09-12 2019-12-20 Oppo广东移动通信有限公司 Model training method, face gender prediction method, device and storage medium
CN110826469A (en) * 2019-11-01 2020-02-21 Oppo广东移动通信有限公司 Person detection method and device and computer readable storage medium
WO2021196389A1 (en) * 2020-04-03 2021-10-07 平安科技(深圳)有限公司 Facial action unit recognition method and apparatus, electronic device, and storage medium
CN111695415A (en) * 2020-04-28 2020-09-22 平安科技(深圳)有限公司 Image recognition model construction method, recognition method and related equipment
CN111626303A (en) * 2020-05-29 2020-09-04 南京甄视智能科技有限公司 Sex and age identification method, sex and age identification device, storage medium and server
US11527086B2 (en) 2020-06-24 2022-12-13 Bank Of America Corporation System for character recognition in a digital image processing environment
CN111967382A (en) * 2020-08-14 2020-11-20 北京金山云网络技术有限公司 Age estimation method, and training method and device of age estimation model
CN112784773A (en) * 2021-01-27 2021-05-11 展讯通信(上海)有限公司 Image processing method and device, storage medium and terminal
WO2023010701A1 (en) * 2021-08-02 2023-02-09 Zhejiang Dahua Technology Co., Ltd. Image generation method, apparatus, and electronic device
CN113344792A (en) * 2021-08-02 2021-09-03 浙江大华技术股份有限公司 Image generation method and device and electronic equipment

Also Published As

Publication number Publication date
US8582807B2 (en) 2013-11-12

Similar Documents

Publication Publication Date Title
US8582807B2 (en) Systems and methods for determining personal characteristics
JP7113093B2 (en) Domain adaptation for instance detection and segmentation
US11853882B2 (en) Methods, apparatus, and storage medium for classifying graph nodes
EP3361423B1 (en) Learning system, learning device, learning method, learning program, teacher data creation device, teacher data creation method, teacher data creation program, terminal device, and threshold value changing device
US9852019B2 (en) System and method for abnormality detection
US10127445B2 (en) Video object classification with object size calibration
US11816183B2 (en) Methods and systems for mining minority-class data samples for training a neural network
Kaess et al. Covariance recovery from a square root information matrix for data association
CN109145759B (en) Vehicle attribute identification method, device, server and storage medium
US8345984B2 (en) 3D convolutional neural networks for automatic human action recognition
Yang et al. Correspondence driven adaptation for human profile recognition
US20130343641A1 (en) System and method for labelling aerial images
US10262214B1 (en) Learning method, learning device for detecting lane by using CNN and testing method, testing device using the same
CN108491766B (en) End-to-end crowd counting method based on depth decision forest
CN111868780B (en) Learning data generation device and method, model generation system, and program
CN111507469A (en) Method and device for optimizing hyper-parameters of automatic labeling device
US11610420B2 (en) Human detection in scenes
CN112052818A (en) Unsupervised domain adaptive pedestrian detection method, unsupervised domain adaptive pedestrian detection system and storage medium
US11625608B2 (en) Methods and systems for operating applications through user interfaces
KR101675692B1 (en) Method and apparatus for crowd behavior recognition based on structure learning
JP6622150B2 (en) Information processing apparatus and information processing method
CN113221667A (en) Face and mask attribute classification method and system based on deep learning
JP5652694B2 (en) Objective variable calculation device, objective variable calculation method, program, and recording medium
JP2023106081A (en) Model generation method, model generation device, inference program, and inference device
JP7365261B2 (en) computer systems and programs

Legal Events

Date Code Title Description
STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: NEC CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NEC LABORATORIES AMERICA, INC.;REEL/FRAME:031998/0667

Effective date: 20140113

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: NEC CORPORATION, JAPAN

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE REMOVE 8538896 AND ADD 8583896 PREVIOUSLY RECORDED ON REEL 031998 FRAME 0667. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNOR:NEC LABORATORIES AMERICA, INC.;REEL/FRAME:042754/0703

Effective date: 20140113

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8