CN110781800A - Image recognition system - Google Patents
- Publication number
- CN110781800A
- Authority
- CN
- China
- Prior art keywords
- face
- image
- pixel point
- feature vector
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Collating Specific Patterns (AREA)
Abstract
The invention provides an image recognition system comprising an image acquisition module, an image preprocessing module, a feature extraction module, and an expression recognition module. The image acquisition module acquires a face image; the image preprocessing module locates the face in the acquired image; the feature extraction module extracts facial features from the located face; and the expression recognition module recognizes the facial expression from the extracted features. The beneficial effect of the invention is an image recognition system that, based on facial feature extraction, achieves accurate recognition of facial expressions.
Description
Technical Field
The invention relates to the technical field of image recognition, in particular to an image recognition system.
Background
With the rapid development of computer technology and artificial intelligence, demand is growing for human-computer interaction that resembles communication between people. If computers and robots could understand and express emotions the way humans do, the relationship between people and computers would change fundamentally. Expression recognition is the basis of emotional understanding: it means identifying a particular expression state in a given still image or dynamic video sequence in order to determine the psychological mood of the recognized subject.
Because a computer cannot directly recognize facial expressions the way a human can, achieving effective recognition of facial expressions has become a difficult problem in the field of image recognition.
Disclosure of Invention
In view of the above problems, the present invention is directed to an image recognition system.
The purpose of the invention is realized by adopting the following technical scheme:
the image recognition system comprises an image acquisition module, an image preprocessing module, a feature extraction module, and an expression recognition module, wherein the image acquisition module acquires face images, the image preprocessing module locates the face based on the acquired face image, the feature extraction module extracts facial features based on the located face, and the expression recognition module recognizes the facial expression according to the extracted facial features.
Optionally, the feature extraction module includes a first feature extraction module, a second feature extraction module, and a face feature generation module, where the first feature extraction module is configured to extract a first feature vector of a face, the second feature extraction module is configured to extract a second feature vector of the face, and the face feature generation module generates a face feature vector based on the first feature vector and the second feature vector of the face.
Optionally, the first feature extraction module is configured to extract a first feature vector of a human face, and specifically includes:
selecting a 3 × 3 pixel window of the image and comparing the value of each of the eight surrounding pixel points with the value of the central pixel point: a point whose value is greater than or equal to the central pixel point is marked 1, and a point whose value is less is marked 0; the eight marks are concatenated to form an eight-bit binary number that represents the central pixel point. A vector L1 is then established from the eight-bit binary value obtained for each pixel point of the face in the image:

L1 = [a1, a2, …, aM], where aj denotes the j-th element of vector L1, j = 1, 2, …, M, bj denotes the eight-bit binary value of the j-th pixel point of the face in the image, and M denotes the number of pixel points contained in the face in the image;
selecting a 5 × 5 pixel window of the image and drawing a circle centered on the central pixel point with a radius of 2 pixels; the value of each of the sixteen pixel points the circle passes through is compared with the value of the central pixel point: a point whose value is greater than or equal to the central pixel point is marked 1, and a point whose value is less is marked 0; the sixteen marks are concatenated to form a sixteen-bit binary number that represents the central pixel point. A vector L2 is then established from the sixteen-bit binary value obtained for each pixel point of the face in the image:

L2 = [d1, d2, …, dM], where dj denotes the j-th element of vector L2, j = 1, 2, …, M, fj denotes the sixteen-bit binary value of the j-th pixel point of the face in the image, and M denotes the number of pixel points contained in the face in the image.
Optionally, the first feature extraction module is configured to extract a first feature vector of a human face, and specifically further includes:
calculating the first feature vector L of the face from L1 and L2, where L represents the first feature vector of the face.
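The eight-bit and sixteen-bit codes above correspond to the classical local binary pattern (LBP) and a 16-point circular variant of it. The sketch below is a minimal Python rendering, assuming a grayscale image held in a NumPy array; the neighbor ordering and the rounding of circle points onto the pixel grid are assumptions, since the text fixes neither.

```python
import numpy as np

def lbp8(img, y, x):
    """Eight-bit code for the 3x3 window centered at (y, x):
    each of the 8 neighbors is marked 1 if >= the center, else 0."""
    c = img[y, x]
    # Clockwise from the top-left corner; any consistent order works.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    bits = ["1" if img[y + dy, x + dx] >= c else "0" for dy, dx in offs]
    return int("".join(bits), 2)

def lbp16(img, y, x):
    """Sixteen-bit code from 16 points on a circle of radius 2 pixels
    inside the 5x5 window; off-grid circle points are rounded to the
    nearest pixel here for simplicity (an assumption)."""
    c = img[y, x]
    bits = []
    for k in range(16):
        ang = 2 * np.pi * k / 16
        dy = int(round(2 * np.sin(ang)))
        dx = int(round(2 * np.cos(ang)))
        bits.append("1" if img[y + dy, x + dx] >= c else "0")
    return int("".join(bits), 2)
```

Collecting these codes over every pixel point of the located face yields the vectors L1 and L2 described above.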
Optionally, the second feature extraction module is configured to extract a second feature vector of the face, and specifically includes:
selecting a 3 × 3 pixel window of the image and calculating the difference value between the central pixel point and its surrounding neighborhood pixel points; taking each pixel point of the face in the image in turn as the central pixel point, a vector R1 is established from these difference values:

R1 = [s1, s2, …, sM], where sj denotes the difference value between the j-th pixel point xj of the face and its eight surrounding neighborhood pixel points, j = 1, 2, …, M, and M denotes the number of pixel points contained in the face in the image;
calculating the gradient value of each pixel point of the face in the image and establishing a vector R2 from these gradient values:

R2 = [t1, t2, …, tM], where tj denotes the gradient value of the j-th element of the vector, j = 1, 2, …, M, and M denotes the number of pixel points contained in the face in the image;
calculating a second feature vector R of the face: R = [s1, s2, …, sM, t1, t2, …, tM], where R represents the second feature vector of the face.
Optionally, the difference value between the central pixel point and its surrounding neighborhood pixel points is obtained as follows: the difference value of pixel point xj from its surrounding neighborhood pixel points is calculated by a formula in which sj denotes the difference value of pixel point xj from its eight surrounding neighborhood pixel points, xi denotes the i-th neighboring pixel point of xj, i = 1, 2, …, p − 1, and p is the total number of neighboring pixel points.
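The exact combination formula for sj appears only as an image in the source, so the sketch below substitutes the mean absolute difference over the eight neighbors, and uses central differences for the gradient value; both choices are assumptions.

```python
import numpy as np

def neighbor_diff(img, y, x):
    """Difference value of the center pixel against its eight 3x3
    neighbors; the mean absolute difference is an assumed stand-in
    for the formula in the source."""
    c = float(img[y, x])
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    return sum(abs(c - float(img[y + dy, x + dx])) for dy, dx in offs) / 8.0

def gradient_mag(img, y, x):
    """Gradient value via central differences (a common choice; the
    source does not name the operator)."""
    gx = (float(img[y, x + 1]) - float(img[y, x - 1])) / 2.0
    gy = (float(img[y + 1, x]) - float(img[y - 1, x])) / 2.0
    return (gx ** 2 + gy ** 2) ** 0.5
```

Evaluating `neighbor_diff` and `gradient_mag` at each face pixel produces the vectors R1 and R2, and their concatenation gives the second feature vector R.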
Optionally, the face feature generation module generates the face feature vector from the first feature vector and the second feature vector of the face, specifically by calculating the feature vector T of the face: T = [L, R], where T represents the feature vector of the face;
the expression recognition module recognizes the facial expression from the extracted facial features, specifically as follows:
a set of training samples is used to establish the correspondence between face feature vectors and facial expressions; the feature vector of the face to be examined is then extracted, and the facial expression is identified according to this correspondence.
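The text does not name the model that realizes this correspondence; a nearest-neighbor lookup over the training samples is one minimal, assumed realization:

```python
import numpy as np

def recognize(feature, train_feats, train_labels):
    """Map a face feature vector T = [L, R] to an expression label by
    1-nearest-neighbor over the training samples (an assumption; the
    source only states that a correspondence is learned)."""
    d = np.linalg.norm(np.asarray(train_feats, dtype=float)
                       - np.asarray(feature, dtype=float), axis=1)
    return train_labels[int(np.argmin(d))]
```

Any other supervised classifier (SVM, neural network) could stand in the same slot without changing the surrounding modules.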
The invention has the beneficial effects that: the image recognition system is provided, and based on facial feature extraction, accurate recognition of facial expressions is achieved.
Drawings
The invention is further illustrated with reference to the accompanying drawings. The embodiments shown in the drawings do not limit the invention in any way; a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic structural view of the present invention;
reference numerals:
the system comprises an image acquisition module 1, an image preprocessing module 2, a feature extraction module 3 and an expression recognition module 4.
Detailed Description
The invention is further described with reference to the following examples.
Referring to FIG. 1, the image recognition system of this embodiment includes an image acquisition module 1, an image preprocessing module 2, a feature extraction module 3, and an expression recognition module 4. The image acquisition module 1 acquires a face image; the image preprocessing module 2 locates the face in the acquired image; the feature extraction module 3 extracts facial features from the located face; and the expression recognition module 4 recognizes the facial expression from the extracted features. The face image may be captured by a camera or video camera, and locating the face includes determining the pixel points that belong to the face in the image.
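The data flow between the four modules of this embodiment can be sketched as follows; the callables standing in for each module are placeholders (assumptions), and only the wiring follows the description.

```python
class ImageRecognitionSystem:
    """Acquisition -> preprocessing (face location) -> feature
    extraction -> expression recognition, as in FIG. 1."""

    def __init__(self, acquire, locate_face, extract_features, classify):
        self.acquire = acquire            # image acquisition module 1
        self.locate_face = locate_face    # image preprocessing module 2
        self.extract = extract_features   # feature extraction module 3
        self.classify = classify          # expression recognition module 4

    def run(self):
        img = self.acquire()
        face_pixels = self.locate_face(img)        # face pixel points
        features = self.extract(img, face_pixels)  # feature vector T = [L, R]
        return self.classify(features)
```

Each module is injected as a callable, so a camera driver, face locator, or classifier can be swapped without touching the pipeline.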
This embodiment provides an image recognition system that, based on facial feature extraction, achieves accurate recognition of facial expressions.
Preferably, the feature extraction module 3 includes a first feature extraction module, a second feature extraction module and a face feature generation module, the first feature extraction module is configured to extract a first feature vector of a face, the second feature extraction module is configured to extract a second feature vector of the face, and the face feature generation module generates a face feature vector based on the first feature vector and the second feature vector of the face;
the first feature extraction module is used for extracting a first feature vector of a human face, and specifically comprises:
selecting a 3 × 3 pixel window of the image and comparing the value of each of the eight surrounding pixel points with the value of the central pixel point: a point whose value is greater than or equal to the central pixel point is marked 1, and a point whose value is less is marked 0; the eight marks are concatenated to form an eight-bit binary number that represents the central pixel point. A vector L1 is then established from the eight-bit binary value obtained for each pixel point of the face in the image:

L1 = [a1, a2, …, aM], where aj denotes the j-th element of vector L1, j = 1, 2, …, M, bj denotes the eight-bit binary value of the j-th pixel point of the face in the image, and M denotes the number of pixel points contained in the face in the image;
selecting a 5 × 5 pixel window of the image and drawing a circle centered on the central pixel point with a radius of 2 pixels; the value of each of the sixteen pixel points the circle passes through is compared with the value of the central pixel point: a point whose value is greater than or equal to the central pixel point is marked 1, and a point whose value is less is marked 0; the sixteen marks are concatenated to form a sixteen-bit binary number that represents the central pixel point. A vector L2 is then established from the sixteen-bit binary value obtained for each pixel point of the face in the image:

L2 = [d1, d2, …, dM], where dj denotes the j-th element of vector L2, j = 1, 2, …, M, fj denotes the sixteen-bit binary value of the j-th pixel point of the face in the image, and M denotes the number of pixel points contained in the face in the image.
Calculating the first feature vector L of the face from L1 and L2, where L represents the first feature vector of the face;
the preferred embodiment passes vector L
1=[a
1,a
2,…,a
M]And L
2=[d
1,d
2,…,d
M]Calculating a first feature vector of a human face
The face image can be efficiently and accurately represented, and a foundation is laid for the generation of a feature vector of a subsequent face;
preferably, the second feature extraction module is configured to extract a second feature vector of the human face, and specifically includes:
selecting a 3 × 3 pixel window of the image and calculating the difference value between the central pixel point and its surrounding neighborhood pixel points; taking each pixel point of the face in the image in turn as the central pixel point, a vector R1 is established from these difference values:

R1 = [s1, s2, …, sM], where sj denotes the difference value between the j-th pixel point xj of the face and its eight surrounding neighborhood pixel points, j = 1, 2, …, M, and M denotes the number of pixel points contained in the face in the image;
calculating the gradient value of each pixel point of the face in the image and establishing a vector R2 from these gradient values:

R2 = [t1, t2, …, tM], where tj denotes the gradient value of the j-th element of the vector, j = 1, 2, …, M, and M denotes the number of pixel points contained in the face in the image;
calculating a second feature vector R of the face: R = [s1, s2, …, sM, t1, t2, …, tM], where R represents the second feature vector of the face;
the difference value between the central pixel point and its surrounding neighborhood pixel points is obtained as follows: the difference value of pixel point xj from its surrounding neighborhood pixel points is calculated by a formula in which sj denotes the difference value of pixel point xj from its eight surrounding neighborhood pixel points, xi denotes the i-th neighboring pixel point of xj, i = 1, 2, …, p − 1, and p is the total number of neighboring pixel points;
the preferred embodiment passes vector L
1=[a
1,a
2,…,a
M]And L
2=[d
1,d
2,…,d
M]Calculating a first feature vector of a human face
The face texture features can be further obtained, and a foundation is laid for the subsequent feature vector generation of the face;
preferably, the face feature generation module generates the face feature vector from the first feature vector and the second feature vector of the face, specifically by calculating the feature vector T of the face: T = [L, R], where T represents the feature vector of the face;
the expression recognition module 4 recognizes the facial expression from the extracted facial features, specifically as follows:
a set of training samples is used to establish the correspondence between face feature vectors and facial expressions; the feature vector of the face to be examined is then extracted, and the facial expression is identified according to this correspondence.
This preferred embodiment calculates the face feature vector T = [L, R] from the first feature vector L and the second feature vector R = [s1, s2, …, sM, t1, t2, …, tM], realizing accurate extraction of the face feature vector and thereby ensuring accurate recognition of the facial expression.
The above image recognition system was used to recognize facial expressions in experiments with five selected users (user 1, user 2, user 3, user 4, and user 5), and the recognition efficiency and recognition accuracy were measured. Compared with the prior art, the system has the following beneficial effects:
finally, it should be noted that the above embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the protection scope of the present invention, although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions can be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.
Claims (7)
1. An image recognition system, characterized by comprising an image acquisition module, an image preprocessing module, a feature extraction module, and an expression recognition module, wherein the image acquisition module is used to acquire a face image, the image preprocessing module locates the face based on the acquired face image, the feature extraction module extracts facial features based on the located face, and the expression recognition module recognizes the facial expression according to the extracted facial features.
2. The image recognition system of claim 1, wherein the feature extraction module comprises a first feature extraction module, a second feature extraction module and a face feature generation module, the first feature extraction module is configured to extract a first feature vector of a face, the second feature extraction module is configured to extract a second feature vector of the face, and the face feature generation module generates a face feature vector based on the first feature vector and the second feature vector of the face.
3. The image recognition system of claim 2, wherein the first feature extraction module is configured to extract a first feature vector of a human face, and specifically includes:
selecting a 3 × 3 pixel window of the image and comparing the value of each of the eight surrounding pixel points with the value of the central pixel point: a point whose value is greater than or equal to the central pixel point is marked 1, and a point whose value is less is marked 0; the eight marks are concatenated to form an eight-bit binary number that represents the central pixel point. A vector L1 is then established from the eight-bit binary value obtained for each pixel point of the face in the image:

L1 = [a1, a2, …, aM], where aj denotes the j-th element of vector L1, j = 1, 2, …, M, bj denotes the eight-bit binary value of the j-th pixel point of the face in the image, and M denotes the number of pixel points contained in the face in the image;
selecting a 5 × 5 pixel window of the image and drawing a circle centered on the central pixel point with a radius of 2 pixels; the value of each of the sixteen pixel points the circle passes through is compared with the value of the central pixel point: a point whose value is greater than or equal to the central pixel point is marked 1, and a point whose value is less is marked 0; the sixteen marks are concatenated to form a sixteen-bit binary number that represents the central pixel point, and a vector L2 is established from the sixteen-bit binary value obtained for each pixel point of the face in the image.
4. The image recognition system of claim 3, wherein the first feature extraction module is configured to extract a first feature vector of a human face, and specifically further comprises:
5. The image recognition system of claim 4, wherein the second feature extraction module is configured to extract a second feature vector of the human face, and specifically includes:
selecting a 3 × 3 pixel window of the image and calculating the difference value between the central pixel point and its surrounding neighborhood pixel points; taking each pixel point of the face in the image in turn as the central pixel point, a vector R1 is established from these difference values:

R1 = [s1, s2, …, sM], where sj denotes the difference value between the j-th pixel point xj of the face and its eight surrounding neighborhood pixel points, j = 1, 2, …, M, and M denotes the number of pixel points contained in the face in the image;
calculating the gradient value of each pixel point of the face in the image and establishing a vector R2 from these gradient values:

R2 = [t1, t2, …, tM], where tj denotes the gradient value of the j-th element of the vector, j = 1, 2, …, M, and M denotes the number of pixel points contained in the face in the image;
calculating a second feature vector R of the face: R = [s1, s2, …, sM, t1, t2, …, tM], where R represents the second feature vector of the face.
6. The image recognition system of claim 5, wherein the difference value between the central pixel point and the surrounding neighborhood pixel points is obtained by:
calculating the difference value of pixel point xj from its surrounding neighborhood pixel points by a formula in which sj denotes the difference value of pixel point xj from its eight surrounding neighborhood pixel points, xi denotes the i-th neighboring pixel point of xj, i = 1, 2, …, p − 1, and p is the total number of neighboring pixel points.
7. The image recognition system of claim 6, wherein the face feature generation module generates the face feature vector based on the first feature vector and the second feature vector of the face, specifically by calculating the feature vector T of the face: T = [L, R], where T represents the feature vector of the face;
the expression recognition module recognizes the facial expression according to the extracted facial features, specifically as follows:
a set of training samples is used to establish the correspondence between face feature vectors and facial expressions; the feature vector of the face to be examined is then extracted, and the facial expression is identified according to this correspondence.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911010091.6A CN110781800B (en) | 2019-10-23 | 2019-10-23 | Image recognition system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911010091.6A CN110781800B (en) | 2019-10-23 | 2019-10-23 | Image recognition system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110781800A true CN110781800A (en) | 2020-02-11 |
CN110781800B CN110781800B (en) | 2022-04-12 |
Family
ID=69386495
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911010091.6A Active CN110781800B (en) | 2019-10-23 | 2019-10-23 | Image recognition system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110781800B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103996018A (en) * | 2014-03-03 | 2014-08-20 | 天津科技大学 | Human-face identification method based on 4DLBP |
CN107273824A (en) * | 2017-05-27 | 2017-10-20 | 西安电子科技大学 | Face identification method based on multiple dimensioned multi-direction local binary patterns |
CN108376256A (en) * | 2018-05-08 | 2018-08-07 | 兰州大学 | One kind is based on ARM processing platform dynamic processing face identification systems and its equipment |
KR20180094453A (en) * | 2017-02-15 | 2018-08-23 | 동명대학교산학협력단 | FACE RECOGNITION Technique using Multi-channel Gabor Filter and Center-symmetry Local Binary Pattern |
CN108960112A (en) * | 2018-06-26 | 2018-12-07 | 肖鑫茹 | A kind of facial expression recognition system |
CN110287823A (en) * | 2019-06-10 | 2019-09-27 | 南京邮电大学 | Based on the face identification method for improving LBP operator and support vector cassification |
- 2019-10-23: CN201911010091.6A, patent CN110781800B, status Active
Non-Patent Citations (1)
Title |
---|
任静媛 (Ren Jingyuan): "Research on Face Detection Algorithm Based on Multi-scale Feature Extraction", China Master's Theses Full-text Database, Information Science and Technology series |
Also Published As
Publication number | Publication date |
---|---|
CN110781800B (en) | 2022-04-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108520216B (en) | Gait image-based identity recognition method | |
CN110045823B (en) | Motion guidance method and device based on motion capture | |
CN105718869B (en) | The method and apparatus of face face value in a kind of assessment picture | |
CN105740780B (en) | Method and device for detecting living human face | |
CN106339702A (en) | Multi-feature fusion based face identification method | |
CN111126240B (en) | Three-channel feature fusion face recognition method | |
CN110796101A (en) | Face recognition method and system of embedded platform | |
CN102567716B (en) | Face synthetic system and implementation method | |
CN107133590B (en) | A kind of identification system based on facial image | |
CN110263768A (en) | A kind of face identification method based on depth residual error network | |
CN108171223A (en) | A kind of face identification method and system based on multi-model multichannel | |
CN105956570B (en) | Smiling face's recognition methods based on lip feature and deep learning | |
CN105069745A (en) | face-changing system based on common image sensor and enhanced augmented reality technology and method | |
CN104376611A (en) | Method and device for attendance of persons descending well on basis of face recognition | |
CN110175511A (en) | It is a kind of to be embedded in positive negative sample and adjust the distance pedestrian's recognition methods again of distribution | |
CN111126143A (en) | Deep learning-based exercise judgment guidance method and system | |
CN110598574A (en) | Intelligent face monitoring and identifying method and system | |
CN109740486B (en) | Method and system for identifying number of human beings contained in image | |
CN110222647A (en) | A kind of human face in-vivo detection method based on convolutional neural networks | |
CN104573628A (en) | Three-dimensional face recognition method | |
CN112131950B (en) | Gait recognition method based on Android mobile phone | |
CN113591692A (en) | Multi-view identity recognition method | |
CN110781800B (en) | Image recognition system | |
CN112084840A (en) | Finger vein identification method based on three-dimensional NMI | |
CN111582195A (en) | Method for constructing Chinese lip language monosyllabic recognition classifier |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||