CN110148468A - The method and device of dynamic human face image reconstruction - Google Patents
- Publication number: CN110148468A
- Application number: CN201910382834.6A
- Authority
- CN
- China
- Prior art keywords
- image
- face
- response data
- neural response
- human face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/50—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
Abstract
The present invention provides a method and device of dynamic human face image reconstruction. Dynamic face images are characterized mainly by high-level visual feature information, and facial features of different attributes are processed by different higher cognitive brain areas. Using three kinds of high-level feature information of different attributes and three different higher cognitive brain areas, the method acquires the first-class, second-class and third-class neural response data corresponding to the three kinds of high-level facial features of different attributes. It simultaneously constructs models from the visual image space to the brain perception space for the different higher cognitive brain areas and the dynamic face, together with the spatial mapping relationships between the models, and obtains a face base image, a facial expression image and a face identity image, thereby realizing multi-dimensional reconstruction of facial features and obtaining a dynamic face image. The dynamic face image perceived by a patient can be reconstructed, giving us a deeper understanding of the cognitive-impairment mechanisms of mental diseases.
Description
Technical field
The present invention relates to image processing technologies, and more particularly to a method and device of dynamic human face image reconstruction.
Background technique
Reproducing the visual objects perceived by the brain from neural signals is a cutting-edge technology field that is currently receiving wide attention. It refers to acquiring the functional magnetic resonance signals of the human brain (functional Magnetic Resonance Imaging, fMRI for short) and, by means of image processing and machine learning algorithms, restoring the visual image seen through vision. The face is the visual perception object we encounter most often, and the most important one, in recognizing the natural environment and carrying out social interaction. Patients with certain cognitive and mental diseases, such as prosopagnosia, autism, senile dementia and Parkinson's disease, show defects in identifying the high-level feature attributes of dynamic faces. Therefore, face reconstruction techniques are needed to reconstruct an image of the face imagined in the brain of a user to be measured.
The prior art uses principal component analysis (Principal Component Analysis, PCA for short) to establish a single linear mapping relationship between eigenfaces and neural response signals, so as to realize facial image reconstruction. However, the prior art can only reconstruct static face pictures, and it is difficult to meet the demand in the image reconstruction field for reconstructing multi-dimensional face information.
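As a rough illustration of the prior-art approach described above (a hedged sketch only; the cited art's exact pipeline, data shapes and dimensionalities are not given in the patent and are assumed here), eigenfaces obtained by PCA can be linearly regressed onto neural responses with a single linear map:

```python
import numpy as np

# Prior-art style sketch: PCA eigenfaces + one linear map from neural responses.
rng = np.random.default_rng(2)
faces = rng.normal(size=(30, 64))       # 30 static face images, 64 pixels each (toy data)
responses = rng.normal(size=(30, 20))   # matching neural response vectors (toy data)

face_mean = faces.mean(axis=0)
# eigenfaces: top-5 right singular vectors of the centered face matrix
eigenfaces = np.linalg.svd(faces - face_mean, full_matrices=False)[2][:5]  # (5, 64)
coeffs = (faces - face_mean) @ eigenfaces.T                                # PCA coordinates

# single linear mapping: neural response -> eigenface coefficients
B = np.linalg.lstsq(responses, coeffs, rcond=None)[0]

recon = face_mean + (responses[0] @ B) @ eigenfaces  # reconstruct one static face
print(recon.shape)
```

This only yields static pictures, which is exactly the limitation the invention addresses.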
Summary of the invention
The embodiments of the present invention provide a method of dynamic human face image reconstruction, which realizes dynamic face reconstruction and simultaneously reconstructs expression features and identity features in the reconstructed dynamic face image, enriching the reconstructed information and improving the accuracy of face reconstruction.
A first aspect of the embodiments of the present invention provides a method of dynamic human face image reconstruction, comprising:

Extracting first-class neural response data, and obtaining a face base image according to the first-class neural response data and a preset facial image reconstruction model.

Extracting second-class neural response data, and obtaining a facial expression image according to the second-class neural response data and a preset facial expression reconstruction model.

Extracting third-class neural response data, and obtaining a face identity image according to the third-class neural response data and a preset face identity reconstruction model.

Obtaining a dynamic face image according to the face base image, the facial expression image and the face identity image.
Optionally, in a possible implementation of the first aspect, obtaining the face base image according to the first-class neural response data and the preset facial image reconstruction model comprises:

Obtaining the face base image according to formula one below and the first-class neural response data,

where X_G_RECON is the face base image; X̄ is the average image of the dynamic face base image samples preset in the facial image reconstruction model; Y_test is the first-class neural response data; Ȳ is the average of the first-class neural response data samples elicited by the preset dynamic face base image samples; s_test is the projection coordinate of Y_test; t_test is the projection coordinate of X_G_RECON; W_train is the s_test-t_test transformation matrix in the facial image reconstruction model; U_train is the feature vector of Y_test in the facial image reconstruction model; and V_train is the feature vector of the preset dynamic face base image samples in the facial image reconstruction model.
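The image of formula one does not survive in this text, but the variable definitions above suggest a linear latent-space decoding: project the centered neural response into a latent coordinate via U_train, map it to the image latent coordinate via W_train, and decode through V_train. A minimal sketch under that assumption (the formula's exact form, and all shapes and names below, are assumptions, not taken from the patent):

```python
import numpy as np

def reconstruct_face_base(y_test, y_mean, x_mean, U_train, W_train, V_train):
    """Hedged sketch of 'formula one': decode a face base image from
    first-class neural response data via learned latent projections."""
    s_test = (y_test - y_mean) @ U_train   # projection coordinate of Y_test
    t_test = s_test @ W_train              # projection coordinate of X_G_RECON
    x_recon = x_mean + t_test @ V_train.T  # back-project into image space
    return x_recon

# toy shapes: 50 voxels -> 5 latent dims -> 100 pixels
rng = np.random.default_rng(0)
y = rng.normal(size=50)
x_hat = reconstruct_face_base(y, np.zeros(50), np.zeros(100),
                              rng.normal(size=(50, 5)),
                              rng.normal(size=(5, 5)),
                              rng.normal(size=(100, 5)))
print(x_hat.shape)
```

The same structure would apply to formulas three and five, with the expression- and identity-specific matrices substituted.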
Optionally, in a possible implementation of the first aspect, before obtaining the face base image according to the first-class neural response data and the preset facial image reconstruction model, the method further comprises:

Obtaining dynamic face base image training samples and the first-class neural response data training samples elicited by the dynamic face base image samples.

Taking the dynamic face base image samples as the output quantity and the first-class neural response data samples as the input quantity, performing parameter learning through formula two below on the s-t transformation matrix, the feature vector of the first-class neural response data samples and the feature vector of the dynamic face base image training samples, to obtain the s_test-t_test transformation matrix in the facial image reconstruction model, the feature vector of the first-class neural response data samples in the facial image reconstruction model, and the feature vector of the face base images in the facial image reconstruction model,

where X is the dynamic face base image sample; X̄ is the average image of X; Y is the first-class neural response data sample; Ȳ is the average of Y; s is the projection coordinate of Y; t is the projection coordinate of X; W is the s-t transformation matrix; U is the feature vector of Y; and V is the feature vector of X.

The facial image reconstruction model is obtained according to the s_test-t_test transformation matrix in the facial image reconstruction model, the feature vector of the first-class neural response data samples in the facial image reconstruction model, and the feature vector of the face base images in the facial image reconstruction model.
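Formula two is likewise not reproduced in this text. One plausible realization consistent with the definitions above (U and V as feature vectors of Y and X, W mapping the projection s onto the projection t) is to take U and V from the SVD of the centered training matrices and fit W by least squares. This is a sketch under those assumptions, not the patent's exact learning procedure:

```python
import numpy as np

def fit_reconstruction_model(Y, X, k=5):
    """Hedged sketch of 'formula two' parameter learning.
    Y: (n_samples, n_voxels) neural responses; X: (n_samples, n_pixels) images."""
    y_mean, x_mean = Y.mean(axis=0), X.mean(axis=0)
    # feature vectors of Y and X: top-k right singular vectors (PCA-style)
    U = np.linalg.svd(Y - y_mean, full_matrices=False)[2][:k].T  # (n_voxels, k)
    V = np.linalg.svd(X - x_mean, full_matrices=False)[2][:k].T  # (n_pixels, k)
    s = (Y - y_mean) @ U   # projection coordinates of Y
    t = (X - x_mean) @ V   # projection coordinates of X
    W = np.linalg.lstsq(s, t, rcond=None)[0]  # s -> t transformation matrix
    return y_mean, x_mean, U, W, V

rng = np.random.default_rng(1)
Y = rng.normal(size=(40, 50))   # toy training data
X = rng.normal(size=(40, 100))
y_mean, x_mean, U, W, V = fit_reconstruction_model(Y, X)
print(U.shape, W.shape, V.shape)
```

The expression and identity models (formulas four and six) would be trained the same way on their respective sample sets.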
Optionally, in a possible implementation of the first aspect, obtaining the facial expression image according to the second-class neural response data and the preset facial expression reconstruction model comprises:

Obtaining the facial expression image according to formula three below and the second-class neural response data,

where X_E_RECON is the facial expression image; X̄_E is the average image of the dynamic facial expression image samples preset in the facial expression reconstruction model; Y_E_test is the second-class neural response data; Ȳ_E is the average of the second-class neural response data samples elicited by the preset dynamic facial expression image samples; s_E_test is the projection coordinate of Y_E_test; t_E_test is the projection coordinate of X_E_RECON; W_E_train is the s_E_test-t_E_test transformation matrix in the facial expression reconstruction model; U_E_train is the feature vector of Y_E_test in the facial expression reconstruction model; and V_E_train is the feature vector of the preset dynamic facial expression image samples in the facial expression reconstruction model.
Optionally, in a possible implementation of the first aspect, before obtaining the facial expression image according to the second-class neural response data and the preset facial expression reconstruction model, the method further comprises:

Obtaining dynamic facial expression image training samples and the second-class neural response data training samples elicited by the dynamic facial expression image samples.

Taking the dynamic facial expression image samples as the output quantity and the second-class neural response data samples as the input quantity, performing parameter learning through formula four below on the s_E-t_E transformation matrix, the feature vector of the second-class neural response data samples and the feature vector of the dynamic facial expression image training samples, to obtain the s_E_test-t_E_test transformation matrix in the facial expression reconstruction model, the feature vector of the second-class neural response data samples in the facial expression reconstruction model, and the feature vector of the facial expression images in the facial expression reconstruction model,

where X_E is the dynamic facial expression image sample; X̄_E is the average image of X_E; Y_E is the second-class neural response data sample; Ȳ_E is the average of Y_E; s_E is the projection coordinate of Y_E; t_E is the projection coordinate of X_E; W_E is the s_E-t_E transformation matrix; U_E is the feature vector of Y_E; and V_E is the feature vector of X_E.

The facial expression reconstruction model is obtained according to the s_E_test-t_E_test transformation matrix in the facial expression reconstruction model, the feature vector of the second-class neural response data samples in the facial expression reconstruction model, and the feature vector of the facial expression images in the facial expression reconstruction model.
Optionally, in a possible implementation of the first aspect, obtaining the face identity image according to the third-class neural response data and the preset face identity reconstruction model comprises:

Obtaining the face identity image according to formula five below and the third-class neural response data,

where X_I_RECON is the face identity image; X̄_I is the average image of the dynamic face identity image samples preset in the face identity reconstruction model; Y_I_test is the third-class neural response data; Ȳ_I is the average of the third-class neural response data samples elicited by the preset dynamic face identity image samples; s_I_test is the projection coordinate of Y_I_test; t_I_test is the projection coordinate of X_I_RECON; W_I_train is the s_I_test-t_I_test transformation matrix in the face identity reconstruction model; U_I_train is the feature vector of Y_I_test in the face identity reconstruction model; and V_I_train is the feature vector of the preset dynamic face identity image samples in the face identity reconstruction model.
Optionally, in a possible implementation of the first aspect, before obtaining the face identity image according to the third-class neural response data and the preset face identity reconstruction model, the method further comprises:

Obtaining dynamic face identity image training samples and the third-class neural response data training samples elicited by the dynamic face identity image training samples.

Taking the dynamic face identity image samples as the output quantity and the third-class neural response data samples as the input quantity, performing parameter learning through formula six below on the s_I-t_I transformation matrix, the feature vector of the third-class neural response data samples and the feature vector of the dynamic face identity image training samples, to obtain the s_I_test-t_I_test transformation matrix in the face identity reconstruction model, the feature vector of the third-class neural response data samples in the face identity reconstruction model, and the feature vector of the face identity image training samples in the face identity reconstruction model,

where X_I is the dynamic face identity image sample; X̄_I is the average image of X_I; Y_I is the third-class neural response data sample; Ȳ_I is the average of Y_I; s_I is the projection coordinate of Y_I; t_I is the projection coordinate of X_I; W_I is the s_I-t_I transformation matrix; U_I is the feature vector of Y_I; and V_I is the feature vector of X_I.

The face identity reconstruction model is obtained according to the s_I_test-t_I_test transformation matrix in the face identity reconstruction model, the feature vector of the third-class neural response data samples in the face identity reconstruction model, and the feature vector of the face identity images in the face identity reconstruction model.
Optionally, in a possible implementation of the first aspect, the first-class neural response data are neural response data obtained from the primary visual cortex brain area of the user to be measured.

The second-class neural response data are neural response data obtained from the posterior superior temporal sulcus and amygdala brain areas of the user to be measured.

The third-class neural response data are neural response data obtained from the fusiform face area and anterior temporal lobe brain areas of the user to be measured.
Optionally, described according to the face base image, the people in a kind of possible implementation of first aspect
Face facial expression image and the face identity image obtain dynamic human face image, comprising:
By the face base image, the average image of the Facial Expression Image and the face identity image, determine
For the dynamic human face image.
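The composition step above is a plain average of the three reconstructed images. Assuming the three images are aligned arrays of equal shape (a detail the text does not spell out), it amounts to:

```python
import numpy as np

def compose_dynamic_face(x_base, x_expr, x_ident):
    """Average the three reconstructed feature images into the dynamic face image."""
    return (x_base + x_expr + x_ident) / 3.0

# toy 4x4 'images' with constant values 3, 6 and 9 average to 6 everywhere
frame = compose_dynamic_face(np.full((4, 4), 3.0),
                             np.full((4, 4), 6.0),
                             np.full((4, 4), 9.0))
print(frame[0, 0])  # 6.0
```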
A second aspect of the embodiments of the present invention provides a device for dynamic human face image reconstruction, comprising:

A first obtaining module, configured to extract first-class neural response data, and obtain a face base image according to the first-class neural response data and a preset facial image reconstruction model.

A second obtaining module, configured to extract second-class neural response data, and obtain a facial expression image according to the second-class neural response data and a preset facial expression reconstruction model.

A third obtaining module, configured to extract third-class neural response data, and obtain a face identity image according to the third-class neural response data and a preset face identity reconstruction model.

A dynamic face image obtaining module, configured to obtain a dynamic face image according to the face base image, the facial expression image and the face identity image.
Optionally, in a possible implementation of the second aspect, the first obtaining module is configured to obtain the face base image according to formula one below and the first-class neural response data,

where X_G_RECON is the face base image; X̄ is the average image of the dynamic face base image samples preset in the facial image reconstruction model; Y_test is the first-class neural response data sample; Ȳ is the average of the first-class neural response data samples elicited by the preset dynamic face base image samples; s_test is the projection coordinate of Y_test; t_test is the projection coordinate of X_G_RECON; W_train is the s_test-t_test transformation matrix in the facial image reconstruction model; U_train is the feature vector of Y_test in the facial image reconstruction model; and V_train is the feature vector of the preset dynamic face base image samples in the facial image reconstruction model.
Optionally, in a possible implementation of the second aspect, the first obtaining module 401 is further configured to obtain dynamic face base image training samples and the first-class neural response data training samples elicited by the dynamic face base image samples.

Taking the dynamic face base image samples as the output quantity and the first-class neural response data samples as the input quantity, parameter learning is performed through formula two below on the s-t transformation matrix, the feature vector of the first-class neural response data samples and the feature vector of the dynamic face base image training samples, to obtain the s_test-t_test transformation matrix in the facial image reconstruction model, the feature vector of the first-class neural response data samples in the facial image reconstruction model, and the feature vector of the face base images in the facial image reconstruction model,

where X is the dynamic face base image sample; X̄ is the average image of X; Y is the first-class neural response data sample; Ȳ is the average of Y; s is the projection coordinate of Y; t is the projection coordinate of X; W is the s-t transformation matrix; U is the feature vector of Y; and V is the feature vector of X.

The facial image reconstruction model is obtained according to the s_test-t_test transformation matrix in the facial image reconstruction model, the feature vector of the first-class neural response data samples in the facial image reconstruction model, and the feature vector of the face base images in the facial image reconstruction model.
Optionally, in a possible implementation of the second aspect, the second obtaining module is configured to obtain the facial expression image according to formula three below and the second-class neural response data,

where X_E_RECON is the facial expression image; X̄_E is the average image of the dynamic facial expression image samples preset in the facial expression reconstruction model; Y_E_test is the second-class neural response data sample; Ȳ_E is the average of the second-class neural response data samples elicited by the preset dynamic facial expression image samples; s_E_test is the projection coordinate of Y_E_test; t_E_test is the projection coordinate of X_E_RECON; W_E_train is the s_E_test-t_E_test transformation matrix in the facial expression reconstruction model; U_E_train is the feature vector of Y_E_test in the facial expression reconstruction model; and V_E_train is the feature vector of the preset dynamic facial expression image samples in the facial expression reconstruction model.
Optionally, in a possible implementation of the second aspect, the second obtaining module is further configured to obtain dynamic facial expression image training samples and the second-class neural response data training samples elicited by the dynamic facial expression image samples.

Taking the dynamic facial expression image samples as the output quantity and the second-class neural response data samples as the input quantity, parameter learning is performed through formula four below on the s_E-t_E transformation matrix, the feature vector of the second-class neural response data samples and the feature vector of the dynamic facial expression image training samples, to obtain the s_E_test-t_E_test transformation matrix in the facial expression reconstruction model, the feature vector of the second-class neural response data samples in the facial expression reconstruction model, and the feature vector of the facial expression images in the facial expression reconstruction model,

where X_E is the dynamic facial expression image sample; X̄_E is the average image of X_E; Y_E is the second-class neural response data sample; Ȳ_E is the average of Y_E; s_E is the projection coordinate of Y_E; t_E is the projection coordinate of X_E; W_E is the s_E-t_E transformation matrix; U_E is the feature vector of Y_E; and V_E is the feature vector of X_E.

The facial expression reconstruction model is obtained according to the s_E_test-t_E_test transformation matrix in the facial expression reconstruction model, the feature vector of the second-class neural response data samples in the facial expression reconstruction model, and the feature vector of the facial expression images in the facial expression reconstruction model.
Optionally, the third obtaining module is configured to obtain the face identity image according to formula five below and the third-class neural response data,

where X_I_RECON is the face identity image; X̄_I is the average image of the dynamic face identity image samples preset in the face identity reconstruction model; Y_I_test is the third-class neural response data sample; Ȳ_I is the average of the third-class neural response data samples elicited by the preset dynamic face identity image samples; s_I_test is the projection coordinate of Y_I_test; t_I_test is the projection coordinate of X_I_RECON; W_I_train is the s_I_test-t_I_test transformation matrix in the face identity reconstruction model; U_I_train is the feature vector of Y_I_test in the face identity reconstruction model; and V_I_train is the feature vector of the preset dynamic face identity image samples in the face identity reconstruction model.
Optionally, in a possible implementation of the second aspect, the third obtaining module is further configured to obtain dynamic face identity image training samples and the third-class neural response data training samples elicited by the dynamic face identity image training samples; taking the dynamic face identity image samples as the output quantity and the third-class neural response data samples as the input quantity, parameter learning is performed through formula six below on the s_I-t_I transformation matrix, the feature vector of the third-class neural response data samples and the feature vector of the dynamic face identity image training samples, to obtain the s_I_test-t_I_test transformation matrix in the face identity reconstruction model, the feature vector of the third-class neural response data samples in the face identity reconstruction model, and the feature vector of the face identity image training samples in the face identity reconstruction model,

where X_I is the dynamic face identity image sample; X̄_I is the average image of X_I; Y_I is the third-class neural response data sample; Ȳ_I is the average of Y_I; s_I is the projection coordinate of Y_I; t_I is the projection coordinate of X_I; W_I is the s_I-t_I transformation matrix; U_I is the feature vector of Y_I; and V_I is the feature vector of X_I.

The face identity reconstruction model is obtained according to the s_I_test-t_I_test transformation matrix in the face identity reconstruction model, the feature vector of the third-class neural response data samples in the face identity reconstruction model, and the feature vector of the face identity images in the face identity reconstruction model.
Optionally, in a possible implementation of the second aspect, the first-class neural response data are neural response data obtained from the primary visual cortex brain area of the user to be measured.

The second-class neural response data are neural response data obtained from the posterior superior temporal sulcus and amygdala brain areas of the user to be measured.

The third-class neural response data are neural response data obtained from the fusiform face area and anterior temporal lobe brain areas of the user to be measured.
Optionally, in a possible implementation of the second aspect, the dynamic face image obtaining module is configured to determine the average image of the face base image, the facial expression image and the face identity image as the dynamic face image.
In the method of dynamic human face image reconstruction provided by the present invention, dynamic face images are characterized mainly by high-level visual feature information, and facial features of different attributes are processed by different higher cognitive brain areas. Using three kinds of high-level feature information of different attributes and three different higher cognitive brain areas, this solution acquires the first-class, second-class and third-class neural response data corresponding to the three kinds of high-level facial features of different attributes, simultaneously constructs models from the visual image space to the brain perception space for the different higher cognitive brain areas and the dynamic face, together with the spatial mapping relationships between the models, and obtains the face base image, the facial expression image and the face identity image. This realizes multi-dimensional reconstruction of facial features and yields the dynamic face image; the dynamic face image perceived by a patient can be reconstructed, giving us a deeper understanding of the cognitive-impairment mechanisms of mental diseases.
Brief description of the drawings
Fig. 1 is a kind of flow diagram of the method for dynamic human face image reconstruction provided by the invention;
Fig. 2 is a signal transmission schematic diagram of a method of dynamic human face image reconstruction provided by the invention;
Fig. 3 is a kind of structural schematic diagram of the device of dynamic human face image reconstruction provided in an embodiment of the present invention;
Fig. 4 is a kind of hardware structural diagram of dynamic human face equipment for reconstructing image provided in an embodiment of the present invention.
Specific embodiment
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work shall fall within the protection scope of the present invention.

The terms "first", "second", "third", "fourth" and the like in the description and claims of this specification and in the above drawings are used to distinguish similar objects, not to describe a particular order or sequence. It should be understood that data used in this way are interchangeable under appropriate circumstances, so that the embodiments of the present invention described herein can be implemented in sequences other than those illustrated or described herein.

It should be understood that, in the various embodiments of the present invention, the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.

It should be understood that, in the present invention, "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device that contains a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such process, method, product or device.
The nouns involved in the invention are explained first:
Functional magnetic resonance imaging (functional Magnetic Resonance Imaging, abbreviated fMRI): an emerging neuroimaging modality whose principle is to use magnetic resonance imaging to measure the hemodynamic changes caused by neuronal activity.
The specific application scenario of the invention is applicable to reconstructing the dynamic human face image perceived by patients with cognitive and mental disorders such as prosopagnosia, autism, senile dementia and Parkinson's disease, who have defects in recognizing the high-level feature attributes of dynamic faces; this can give us a deeper understanding and cognition of the cognitive disorder mechanisms of mental diseases. Current face image reconstruction uses principal component analysis (Principal Component Analysis, abbreviated PCA) to realize the reconstruction of face images. However, the prior art does not treat high-level feature information of different attributes differently when establishing the mapping relationships; it can only reconstruct static face pictures and can hardly meet the need of the image reconstruction field for the reconstruction of multi-dimensional face information.
The invention provides a method of dynamic human face image reconstruction intended to solve the above technical problems of the prior art: it realizes dynamic human face reconstruction, simultaneously reconstructing expression features and identity features in the reconstructed dynamic human face image, which enriches the reconstructed information and improves the accuracy of face reconstruction.
How the technical solution of the invention and of the application solves the above technical problems is described in detail below with specific embodiments. The following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of the invention are described below in conjunction with the drawings.
Fig. 1 is a flow diagram of a method of dynamic human face image reconstruction provided by the invention; the executing subject of the method shown in Fig. 1 can be a software and/or hardware device. The method shown in Fig. 1 includes steps S101 to S104, as follows:
S101: extract first class neural response data, and obtain a face base image according to the first class neural response data and a preset face image reconstruction model.
Specifically, the first class neural response data are neural response data obtained from the primary visual cortex brain area of the user to be measured. Facial features of different attributes are processed cognitively by different brain regions, and the primary visual cortex can perceive the pixel-level low-level visual features of a face. The first class neural response data are obtained by using functional magnetic resonance imaging technology to acquire the functional magnetic resonance signal of the primary visual cortex brain area of the user to be measured; feeding the first class neural response data into the preset face image reconstruction model yields the corresponding face base image.
S102: extract second class neural response data, and obtain a facial expression image according to the second class neural response data and a preset facial expression reconstruction model.
Specifically, the second class neural response data are neural response data obtained from the posterior superior temporal sulcus and amygdala brain areas of the user to be measured; the facial expression features of a face are processed by brain areas such as the posterior superior temporal sulcus and the amygdala. The second class neural response data are obtained by using functional magnetic resonance imaging to acquire the functional magnetic resonance signal of the posterior superior temporal sulcus and amygdala brain areas of the user to be measured; feeding the second class neural response data into the preset facial expression reconstruction model yields the corresponding facial expression image.
S103: extract third class neural response data, and obtain a face identity image according to the third class neural response data and a preset face identity reconstruction model.
Specifically, the third class neural response data are neural response data obtained from the fusiform face area and the anterior temporal lobe brain area of the user to be measured; the facial identity features of a face are processed by brain areas such as the fusiform face area and the anterior temporal lobe. The third class neural response data are obtained by using functional magnetic resonance imaging to acquire the functional magnetic resonance signal of the fusiform face area and/or the anterior temporal lobe brain area of the user to be measured; feeding the third class neural response data into the preset face identity reconstruction model yields the corresponding face identity image.
S104: obtain a dynamic human face image according to the face base image, the facial expression image and the face identity image.
Specifically, the average image of the face base image, the facial expression image and the face identity image is determined as the dynamic human face image.
The above steps S101 to S103 are not limited to the described order; in this embodiment, steps S101 to S103 can be performed in other orders or simultaneously.
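Step S104 simply averages the three reconstructed images pixel-wise. A minimal NumPy sketch of that fusion step (array sizes and variable names are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def fuse_dynamic_face(base_img, expression_img, identity_img):
    """Pixel-wise average of the three reconstructions (step S104)."""
    stack = np.stack([base_img, expression_img, identity_img], axis=0)
    return stack.mean(axis=0)

# Toy 2x2 grayscale images standing in for the three reconstructed images.
base = np.array([[0.0, 0.3], [0.6, 0.9]])
expr = np.array([[0.3, 0.3], [0.6, 0.9]])
iden = np.array([[0.6, 0.3], [0.6, 0.9]])
dynamic_face = fuse_dynamic_face(base, expr, iden)
print(dynamic_face)  # each pixel is the mean of the three inputs
```

Because the three reconstructions live in the same pixel space, averaging is a plain element-wise mean; no alignment step is described in this embodiment.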
The dynamic human face image reconstruction method provided by the above embodiment uses three kinds of high-level feature information of different attributes and three different higher cognitive brain areas to obtain the first class, second class and third class neural response data corresponding to the three kinds of feature information of the face. At the same time, models of the mapping relationship from the visual image space to the brain perception space are constructed for the different higher cognitive brain areas and the dynamic human face, from which the face base image, the facial expression image and the face identity image are obtained, realizing multi-dimensional reconstruction of facial features; the resulting dynamic human face image can reconstruct the dynamic human face image perceived by a tested user.
On the basis of the above embodiment, a specific implementation of step S101 (obtaining the face base image according to the first class neural response data and the preset face image reconstruction model) may be as follows:
Referring to Fig. 2, a signal transmission schematic diagram of a method of dynamic human face image reconstruction provided by the invention: the expression of the dynamic human face in the brain perception space includes a basic dynamic human face perception space, a facial expression perception space and a face identity perception space; the expression of the dynamic human face in the image space includes a basic image pixel space, a face-image expression space and a face-image identity space.
The face base image in step S101 is obtained according to the following formula one (i.e. the face image reconstruction model) and the first class neural response data:

X_G_RECON = X̄ + ((Y_test − Ȳ) U_train) W_train V_trainᵀ   (formula one)

where X_G_RECON is the face base image; X̄ is the average image of the preset dynamic human face base image samples in the face image reconstruction model; Y_test is the first class neural response data sample; Ȳ is the average of the first class neural response data samples produced by the preset dynamic human face base image samples in the face image reconstruction model; s_test = (Y_test − Ȳ)U_train is the projection coordinate of Y_test; t_test = s_test W_train is the projection coordinate of X_G_RECON; W_train is the s_test–t_test transformation matrix in the face image reconstruction model; U_train is the feature-vector matrix of the first class neural response data samples in the face image reconstruction model; and V_train is the feature-vector matrix of the preset dynamic human face base image samples in the face image reconstruction model.
On the basis of the above embodiment, before the face base image is obtained according to the first class neural response data and the preset face image reconstruction model, the method can also include a process of learning each parameter in the face image reconstruction model, as follows:
S201: with the dynamic human face base image samples as output and the first class neural response data samples as input, perform parameter learning through formula two below on the s–t transformation matrix, the feature vectors of the first class neural response data samples and the feature vectors of the dynamic human face base image training samples, to obtain the s_test–t_test transformation matrix, the feature vectors of the first class neural response data samples, and the feature vectors of the face base images in the face image reconstruction model.
In the basic image pixel space, let X_j be a dynamic human face visual image, j = 1, 2, …, N, where N is the number of dynamic human face images and each dynamic human face is represented in the form of a one-dimensional vector. The dynamic human face base image sample set X is then expressed as X = [X₁ X₂ … X_j … X_N].
PCA singular value decomposition is performed on X so as to generate a "basic image pixel space" from these samples. The projection coordinate of each dynamic human face image sample in the basic image pixel space is:

t = (X − X̄) V   (formula 2.1)

where X̄ is the average image of X and V is the feature-vector matrix of X, its columns sorted from high to low by the size of the corresponding eigenvalues. More specifically, V = [V₁, V₂, …, V_N] collects all the (linearly independent) eigenvectors, each column being one eigenvector.
Given the dynamic human face base image samples X, each dynamic image (not limited to the images in the sample set) can be linearly expressed by its projection coordinate in this space. This PCA-based decomposition is reversible, so any visual image can be reconstructed from its projection coordinate in the feature space, formulated as:

X = X̄ + t Vᵀ   (formula 2.2)
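The reversible PCA decomposition of formulas 2.1 and 2.2 can be sketched as follows (a minimal NumPy illustration with assumed toy sizes; the patent's fMRI and image data are far larger):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 8, 20                          # N sample images, D pixels each (toy sizes)
X = rng.normal(size=(N, D))           # rows are flattened face images

X_mean = X.mean(axis=0)               # the average image X-bar
# SVD of the centered sample matrix yields the PCA eigenvectors V,
# already sorted by descending singular value.
_, _, Vt = np.linalg.svd(X - X_mean, full_matrices=False)
V = Vt.T                              # columns are eigenvectors

t = (X - X_mean) @ V                  # projection coordinates (formula 2.1)
X_recon = X_mean + t @ V.T            # inverse transform       (formula 2.2)
print(np.allclose(X, X_recon))        # True: the decomposition is reversible
```

The round trip is exact because V V^T projects onto the row space of the centered samples, which contains every training image.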
In the basic dynamic human face perception space, let Y_j be the neural response distribution of a dynamic human face image in the primary visual cortex brain area, j = 1, 2, …, N, each Y_j being represented in the form of a one-dimensional vector. The neural responses of the dynamic human face image sample set in the primary visual cortex, i.e. the first class neural response data sample set Y, is then expressed as Y = [Y₁ Y₂ … Y_j … Y_N]. Singular value decomposition is performed on Y using PCA, and the projection coordinate of the neural response of each dynamic human face image in this neural response space can be expressed as:

s = (Y − Ȳ) U   (formula 2.3)

where Ȳ is the average of Y and U is the feature-vector matrix of Y, sorted from large to small by the corresponding eigenvalues; concretely, U = [U₁, U₂, …, U_N].
The mapping relationship expresses the projection coordinate t of a dynamic human face image sample under X as a linear transformation of its projection coordinate s under Y, i.e.

t = s W   (formula 2.4)

where W is the s–t transformation matrix. With t and s known and non-singular, one solution of the transformation matrix W is:

W = (sᵀ s + I)⁻¹ sᵀ t   (formula 2.5)
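Formula 2.5 is a regularized least-squares solution of t = sW; the identity matrix I keeps sᵀs invertible. A hedged NumPy sketch with made-up dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 10, 4                        # N training samples, k latent dimensions (toy)
s = rng.normal(size=(N, k))         # neural-response projection coordinates
W_true = rng.normal(size=(k, k))
t = s @ W_true                      # image-space projection coordinates

# Formula 2.5: W = (s^T s + I)^-1 s^T t, solved without forming the inverse.
W = np.linalg.solve(s.T @ s + np.eye(k), s.T @ t)
residual = np.linalg.norm(t - s @ W)
print(residual)                     # small: W approximates the s -> t mapping
```

Using `np.linalg.solve` instead of an explicit matrix inverse is numerically preferable; the residual is small but nonzero because the +I term shrinks the estimate slightly.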
In conclusion formula two is formed by formula 2.1, formula 2.3, public affairs 2.4 and formula 2.5,
Wherein, X is the dynamic human face base image sample,It is the average image of X, Y is that the first kind nerve is rung
Data sample is answered,It is the average data of Y, s is the projection coordinate of Y, and t is the projection coordinate of X, and W is the s-t transformation matrix, U
It is the feature vector of Y, V is the feature vector of X.
S202, according to s in the face image modeltest-ttestTransformation matrix, the face image model
In the feature vector of middle first kind neural response data sample, the face image model feature of face base image to
Amount obtains the face image model such as formula one.
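Putting formulas 2.1 to 2.5 together, test-time reconstruction projects a new neural response into the latent space, maps it with the learned W, and inverts the image-space PCA. A self-contained sketch on synthetic data (all names, sizes and the random data are illustrative assumptions):

```python
import numpy as np

def fit(X, Y):
    """Learn PCA bases and the s->t mapping from training images X and responses Y."""
    X_mean, Y_mean = X.mean(axis=0), Y.mean(axis=0)
    V = np.linalg.svd(X - X_mean, full_matrices=False)[2].T  # image eigenvectors
    U = np.linalg.svd(Y - Y_mean, full_matrices=False)[2].T  # response eigenvectors
    t = (X - X_mean) @ V                                     # formula 2.1
    s = (Y - Y_mean) @ U                                     # formula 2.3
    W = np.linalg.solve(s.T @ s + np.eye(s.shape[1]), s.T @ t)  # formula 2.5
    return X_mean, Y_mean, U, V, W

def reconstruct(Y_test, X_mean, Y_mean, U, V, W):
    """Formula one: X_recon = X_mean + ((Y_test - Y_mean) U) W V^T."""
    s_test = (Y_test - Y_mean) @ U
    return X_mean + (s_test @ W) @ V.T

rng = np.random.default_rng(2)
N, D, R = 12, 30, 25            # samples, pixels per image, voxels per response (toy)
X_train = rng.normal(size=(N, D))
Y_train = rng.normal(size=(N, R))
params = fit(X_train, Y_train)
X_hat = reconstruct(Y_train[:1], *params)
print(X_hat.shape)              # (1, 30): one reconstructed face image
```

The same two functions, applied to expression-grouped or identity-grouped sample matrices, sketch the later formulas three/four and five/six.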
On the basis of the above embodiment, a specific implementation of step S102 (extracting the second class neural response data and obtaining the facial expression image according to the second class neural response data and the preset facial expression reconstruction model) may be as follows:
The facial expression image is obtained according to the following formula three (i.e. the facial expression reconstruction model) and the second class neural response data:

X_E_RECON = X̄_E + ((Y_E_test − Ȳ_E) U_E_train) W_E_train V_E_trainᵀ   (formula three)

where X_E_RECON is the facial expression image; X̄_E is the average image of the preset dynamic human facial expression image samples in the facial expression reconstruction model; Y_E_test is the second class neural response data sample; Ȳ_E is the average of the second class neural response data samples produced by the preset dynamic human facial expression image samples; s_E_test is the projection coordinate of Y_E_test; t_E_test is the projection coordinate of X_E_RECON; W_E_train is the s_E_test–t_E_test transformation matrix in the facial expression reconstruction model; U_E_train is the feature-vector matrix of Y_E_test; and V_E_train is the feature-vector matrix of the preset dynamic human facial expression image samples of the facial expression reconstruction model.
On the basis of the above embodiment, before the facial expression image is obtained according to the second class neural response data and the preset facial expression reconstruction model, the method can also include a process of learning each parameter in the facial expression reconstruction model, as follows:
S301: with the dynamic human facial expression image samples as output and the second class neural response data samples as input, perform parameter learning through formula four below on the s_E–t_E transformation matrix, the feature vectors of the second class neural response data samples and the feature vectors of the dynamic human facial expression image training samples, to obtain the s_E_test–t_E_test transformation matrix, the feature vectors of the second class neural response data samples, and the feature vectors of the facial expression images in the facial expression reconstruction model.
Obtain dynamic human facial expression image training samples and the second class neural response data training samples produced by the dynamic human facial expression image samples. In the face-image expression space, the N sample images of the image sample set are relabeled and the dynamic human face sample set is reintegrated, represented in the form of the facial expression image training samples X_E as follows:
Each column of X_E is the concatenation of P one-dimensional vectors of the same facial expression over different facial identities, representing one facial expression; each row of X_E has Q values, representing the Q expression variations of the same facial identity at one local image position.
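The regrouping of the sample set into X_E (one column per expression, concatenated over P identities) can be sketched as a reshape. Toy sizes and the row ordering are assumptions for illustration:

```python
import numpy as np

P, Q, D = 3, 4, 5        # P identities, Q expressions, D pixels per image (toy)
rng = np.random.default_rng(3)
# faces[p, q] is the flattened image of identity p showing expression q.
faces = rng.normal(size=(P, Q, D))

# Column q of X_E stacks the P identity images of expression q into one vector,
# so X_E has P*D rows and Q columns (one column per facial expression).
X_E = np.stack([faces[:, q, :].reshape(-1) for q in range(Q)], axis=1)
print(X_E.shape)   # (15, 4)
```

The identity-space matrix X_I described later is the symmetric regrouping: one column per identity, concatenated over the Q expressions.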
PCA-based singular value decomposition is performed on X_E, and the projection coordinate of each facial expression under X_E can be expressed as:

t_E = (X_E − X̄_E) V_E   (formula 4.1)

where X̄_E is the average image of X_E and V_E is the feature-vector matrix of X_E, arranged in descending order of eigenvalue. Under X_E, each facial expression (not limited to the expression types in the sample) can be expressed by its projection coordinate in this space; since the PCA decomposition is reversible, any kind of facial expression can be reconstructed from its projection coordinate in the expression feature space:

X_E = X̄_E + t_E V_Eᵀ   (formula 4.2)
In the facial expression perception space, let Y_{i,e} be the neural response distribution of a dynamic human face image in the posterior superior temporal sulcus and amygdala brain areas. Y_{i,e} is rearranged so that each column is the neural response of one kind of expression, giving the second class neural response data sample Y_E. Performing singular value decomposition on Y_E, the projection coordinate of the neural response of each kind of facial expression under Y_E can be expressed as:

s_E = (Y_E − Ȳ_E) U_E   (formula 4.3)

where Ȳ_E is the average of Y_E and U_E is the feature-vector matrix, arranged in descending order of the corresponding eigenvalues.
The projection coordinate t_E of the dynamic human face image samples under X_E is expressed as a linear transformation of the projection coordinate s_E under Y_E; t_E and s_E are defined per identity, with id the label of each facial identity individual, and the established mapping relationship is:

t_E(id) = s_E(id) W_E(id)   (formula 4.4)

One solution of W_E(id) may be expressed as:

W_E(id) = (s_E(id)ᵀ s_E(id) + I)⁻¹ s_E(id)ᵀ t_E(id)   (formula 4.5)
In conclusion by forming formula four by formula 4.1, formula 4.3, formula 4.4 and formula 4.5,
Wherein, XEIt is the dynamic human face facial expression image sample,It is XEThe average image, YEIt is the second class nerve
Response data sample,It is YEAverage data, sEIt is YEProjection coordinate, tEIt is XEProjection coordinate, WEIt is the sE-tEBecome
Change matrix, UEIt is YEFeature vector, VEIt is XEFeature vector.
S302, according to the s in the human face expression reconstruction modelE_test-tE_testTransformation matrix, the human face expression are rebuild
The feature vector of second class neural response data sample in model, in the human face expression reconstruction model Facial Expression Image spy
Vector is levied, human face expression reconstruction model is obtained.
On the basis of the above embodiment, a specific implementation of step S103 (extracting the third class neural response data and obtaining the face identity image according to the third class neural response data and the preset face identity reconstruction model) may be as follows:
The face identity image is obtained according to the following formula five (i.e. the face identity reconstruction model) and the third class neural response data:

X_I_RECON = X̄_I + ((Y_I_test − Ȳ_I) U_I_train) W_I_train V_I_trainᵀ   (formula five)

where X_I_RECON is the face identity image; X̄_I is the average image of the preset dynamic human face identity image samples in the face identity reconstruction model; Y_I_test is the third class neural response data sample; Ȳ_I is the average of the third class neural response data samples produced by the preset dynamic human face identity image samples; s_I_test is the projection coordinate of Y_I_test; t_I_test is the projection coordinate of X_I_RECON; W_I_train is the s_I_test–t_I_test transformation matrix in the face identity reconstruction model; U_I_train is the feature-vector matrix of Y_I_test; and V_I_train is the feature-vector matrix of the preset dynamic human face identity image samples in the face identity reconstruction model.
On the basis of the above embodiment, before the face identity image is obtained according to the third class neural response data and the preset face identity reconstruction model, the method can also include a process of learning each parameter in the face identity reconstruction model, as follows:
S401: with the dynamic human face identity image samples as output and the third class neural response data samples as input, perform parameter learning through formula six below on the s_I–t_I transformation matrix, the feature vectors of the third class neural response data samples and the feature vectors of the dynamic human face identity image training samples, to obtain the s_I_test–t_I_test transformation matrix, the feature vectors of the third class neural response data samples, and the feature vectors of the face identity image training samples in the face identity reconstruction model.
Obtain dynamic human face identity image training samples and the third class neural response data training samples produced by the dynamic human face identity image training samples.
In the face-image identity space, the N labeled sample images are reintegrated and represented as the dynamic human face identity image sample set X_I. Each column of X_I is the concatenation of Q one-dimensional vectors of the same facial identity over different facial expressions, representing one facial identity individual; each row of X_I has P values, representing the P individual identity variations of the same facial expression at one local image position.
PCA-based singular value decomposition is performed on X_I, and the projection coordinate of each facial identity individual in this new identity feature space can be expressed as:

t_I = (X_I − X̄_I) V_I   (formula 6.1)

where X̄_I is the average image of X_I and V_I is the feature-vector matrix of X_I, arranged in descending order of eigenvalue. Under X_I, each facial identity individual (not limited to the identity individuals in the sample) can be expressed by its projection coordinate in this space; since the PCA decomposition is reversible, any identity individual can be reconstructed from its projection coordinate in the identity feature space:

X_I = X̄_I + t_I V_Iᵀ   (formula 6.2)
In the face identity perception space, let Y_{i,e} be the neural response distribution of a dynamic human face image in the fusiform face area brain area and the anterior temporal lobe brain area. Y_{i,e} is rearranged so that each column is the neural response data of one facial identity individual, giving Y_I, and PCA singular value decomposition is performed on Y_I. The projection coordinate of the neural response of each facial identity under Y_I can be expressed as:

s_I = (Y_I − Ȳ_I) U_I   (formula 6.3)

where Ȳ_I is the average of Y_I and U_I is the feature-vector matrix of Y_I, arranged in descending order of the corresponding eigenvalues. The projection coordinate t_I of the dynamic human face image samples under the face-image identity space is expressed as a linear transformation of the projection coordinate s_I under the neural response space. Redefine t_I and s_I per expression:
Here ex is the label of each facial expression. The mapping relationship is:

t_I(ex) = s_I(ex) W_I(ex)   (formula 6.4)

The analytic solution of W_I(ex) may be expressed as:

W_I(ex) = (s_I(ex)ᵀ s_I(ex) + I)⁻¹ s_I(ex)ᵀ t_I(ex)   (formula 6.5)
In conclusion formula six is formed by formula 6.1, formula 6.3, formula 6.4 and formula 6.5,
Wherein, XIIt is the dynamic human face identity image pattern,It is XIThe average image, YIIt is the third class nerve
Response data sample,It is YIAverage data, sIIt is YIProjection coordinate, tIIt is XIProjection coordinate, WIIt is the sI-tIBecome
Change matrix, UIIt is YIFeature vector, VIIt is face identity reconstruction model XIFeature vector.
S402, according to s in the face identity reconstruction modelI_test-tI_testTransformation matrix, the face identity rebuild
Face identity image in the feature vector of third class neural response data sample in model, the face identity reconstruction model
Feature vector obtains face identity reconstruction model.
Referring to Fig. 3, a structural schematic diagram of a device of dynamic human face image reconstruction provided in an embodiment of the invention, the device 40 of dynamic human face image reconstruction shown in Fig. 3 comprises:
a first obtaining module 401, for extracting first class neural response data, and obtaining a face base image according to the first class neural response data and a preset face image reconstruction model;
a second obtaining module 402, for extracting second class neural response data, and obtaining a facial expression image according to the second class neural response data and a preset facial expression reconstruction model;
a third obtaining module 403, for extracting third class neural response data, and obtaining a face identity image according to the third class neural response data and a preset face identity reconstruction model;
a dynamic human face image obtaining module 404, for obtaining a dynamic human face image according to the face base image, the facial expression image and the face identity image.
The device of dynamic human face image reconstruction of the embodiment shown in Fig. 3 can correspondingly be used to execute the steps in the method embodiment shown in Fig. 1; its realization principle and technical effect are similar and are not described here again.
Optionally, the first obtaining module 401 is configured to obtain the face base image according to the following formula one and the first class neural response data:

X_G_RECON = X̄ + ((Y_test − Ȳ) U_train) W_train V_trainᵀ   (formula one)

where X_G_RECON is the face base image; X̄ is the average image of the preset dynamic human face base image samples in the face image reconstruction model; Y_test is the first class neural response data sample; Ȳ is the average of the first class neural response data samples produced by the preset dynamic human face base image samples; s_test is the projection coordinate of Y_test; t_test is the projection coordinate of X_G_RECON; W_train is the s_test–t_test transformation matrix in the face image reconstruction model; U_train is the feature-vector matrix of Y_test; and V_train is the feature-vector matrix of the preset dynamic human face base image samples in the face image reconstruction model.
Optionally, the first obtaining module 401 is further configured to obtain dynamic human face base image training samples and the first class neural response data training samples produced by the dynamic human face base image samples; and, with the dynamic human face base image samples as output and the first class neural response data samples as input, to perform parameter learning through formula two below on the s–t transformation matrix, the feature vectors of the first class neural response data samples and the feature vectors of the dynamic human face base image training samples, obtaining the s_test–t_test transformation matrix, the feature vectors of the first class neural response data samples and the feature vectors of the face base images in the face image reconstruction model, where X is the dynamic human face base image sample set, X̄ is the average image of X, Y is the first class neural response data sample set, Ȳ is the average of Y, s is the projection coordinate of Y, t is the projection coordinate of X, W is the s–t transformation matrix, U is the feature-vector matrix of Y, and V is the feature-vector matrix of X; and to obtain the face image reconstruction model according to the s_test–t_test transformation matrix, the feature vectors of the first class neural response data samples and the feature vectors of the face base images in the face image reconstruction model.
Optionally, the second obtaining module 402 is configured to obtain the facial expression image according to the following formula three and the second class neural response data:

X_E_RECON = X̄_E + ((Y_E_test − Ȳ_E) U_E_train) W_E_train V_E_trainᵀ   (formula three)

where X_E_RECON is the facial expression image; X̄_E is the average image of the preset dynamic human facial expression image samples in the facial expression reconstruction model; Y_E_test is the second class neural response data sample; Ȳ_E is the average of the second class neural response data samples produced by the preset dynamic human facial expression image samples; s_E_test is the projection coordinate of Y_E_test; t_E_test is the projection coordinate of X_E_RECON; W_E_train is the s_E_test–t_E_test transformation matrix in the facial expression reconstruction model; U_E_train is the feature-vector matrix of Y_E_test; and V_E_train is the feature-vector matrix of the preset dynamic human facial expression image samples of the facial expression reconstruction model.
Optionally, the second obtaining module 402 is further configured to obtain dynamic human facial expression image training samples and the second class neural response data training samples produced by the dynamic human facial expression image samples; and, with the dynamic human facial expression image samples as output and the second class neural response data samples as input, to perform parameter learning through formula four below on the s_E–t_E transformation matrix, the feature vectors of the second class neural response data samples and the feature vectors of the dynamic human facial expression image training samples, obtaining the s_E_test–t_E_test transformation matrix, the feature vectors of the second class neural response data samples and the feature vectors of the facial expression images in the facial expression reconstruction model, where X_E is the dynamic human facial expression image sample set, X̄_E is the average image of X_E, Y_E is the second class neural response data sample set, Ȳ_E is the average of Y_E, s_E is the projection coordinate of Y_E, t_E is the projection coordinate of X_E, W_E is the s_E–t_E transformation matrix, U_E is the feature-vector matrix of Y_E, and V_E is the feature-vector matrix of X_E; and to obtain the facial expression reconstruction model according to the s_E_test–t_E_test transformation matrix, the feature vectors of the second class neural response data samples and the feature vectors of the facial expression images in the facial expression reconstruction model.
Optionally, the third obtaining module 403 is configured to obtain the face identity image according to the following Formula 5 and the third-class neural response data;
wherein X_I_RECON is the face identity image, X̄ is the average image of the preset dynamic face identity image samples in the face identity reconstruction model, Y_I_test is the third-class neural response data, Ȳ is the average data of the third-class neural response data samples generated by the preset dynamic face identity image samples in the face identity reconstruction model, s_I_test is the projection coordinate of Y_I_test, t_I_test is the projection coordinate of X_I_RECON, W_train is the s_I_test-t_I_test transformation matrix in the face identity reconstruction model, U_I_train is the feature vector of Y_I_test in the face identity reconstruction model, and V_I_train is the feature vector of the preset dynamic face identity image samples in the face identity reconstruction model.
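Formula 5 is likewise only an image in the source, but the variable definitions describe a linear back-projection: project the test neural response onto its feature vectors to obtain s_I_test, map it through W_train to t_I_test, then back-project through the image feature vectors onto the average image. A hedged sketch (all names illustrative, not the patent's):

```python
import numpy as np

def reconstruct_identity_image(y_test, y_mean, x_mean, U_train, W_train, V_train):
    """Reconstruct a face identity image from third-class neural response data.

    y_test : (d_y,) neural response vector of the user under test
    y_mean : (d_y,) average of the training neural response samples
    x_mean : (d_x,) average of the flattened training identity images
    U_train: (d_y, k) feature vectors of the neural response samples
    W_train: (k, k)  learned s-t transformation matrix
    V_train: (d_x, k) feature vectors of the identity image samples
    """
    s_test = (y_test - y_mean) @ U_train   # projection coordinate of y_test
    t_test = s_test @ W_train              # projection coordinate of the image
    return x_mean + t_test @ V_train.T     # back-project into image space
```

The same projection-map-backproject pattern applies to the face base image (Formula 1) and facial expression image (Formula 3) reconstructions.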
Optionally, the third obtaining module 403 is further configured to obtain dynamic face identity image training samples and third-class neural response data training samples generated by the dynamic face identity image training samples. Using the dynamic face identity image samples as the output and the third-class neural response data samples as the input, parameter learning is performed via the following Formula 6 on the s_I-t_I transformation matrix, the feature vectors of the third-class neural response data samples, and the feature vectors of the dynamic face identity image training samples, so as to obtain the s_I_test-t_I_test transformation matrix in the face identity reconstruction model, the feature vectors of the third-class neural response data samples in the face identity reconstruction model, and the feature vectors of the face identity image training samples,
wherein X_I is the dynamic face identity image sample, X̄_I is the average image of X_I, Y_I is the third-class neural response data sample, Ȳ_I is the average data of Y_I, s_I is the projection coordinate of Y_I, t_I is the projection coordinate of X_I, W_I is the s_I-t_I transformation matrix, U_I is the feature vector of Y_I, and V_I is the feature vector of X_I. The face identity reconstruction model is then obtained according to the s_I_test-t_I_test transformation matrix in the face identity reconstruction model, the feature vectors of the third-class neural response data samples in the model, and the feature vectors of the face identity images in the model.
Optionally, the first-class neural response data are neural response data acquired from the primary visual cortex of the brain of the user under test;
the second-class neural response data are neural response data acquired from the posterior superior temporal sulcus and the amygdala of the user under test;
the third-class neural response data are neural response data acquired from the fusiform face area and the anterior temporal lobe of the user under test.
Optionally, the dynamic face image obtaining module 404 is configured to determine the average image of the face base image, the facial expression image, and the face identity image as the dynamic face image.
Referring to FIG. 4, which is a schematic diagram of the hardware structure of a device provided in an embodiment of the present invention, the device includes a processor 51, a memory 52, and a computer program.
The memory 52 is configured to store the computer program and may also be a flash memory (flash). The computer program is, for example, an application program or functional module that implements the above method.
The processor 51 is configured to execute the computer program stored in the memory, so as to implement the steps performed by the terminal in the above method. For details, refer to the related description in the foregoing method embodiments.
Optionally, the memory 52 may be independent of the processor 51 or integrated with it.
When the memory 52 is a device independent of the processor 51, the device may further include:
a bus 53 for connecting the memory 52 and the processor 51.
The present invention further provides a readable storage medium in which a computer program is stored; when executed by a processor, the computer program implements the methods provided by the various embodiments described above.
The readable storage medium may be a computer storage medium or a communication medium, the latter including any medium that facilitates transfer of a computer program from one place to another. A computer storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. For example, a readable storage medium may be coupled to the processor so that the processor can read information from, and write information to, the readable storage medium; the readable storage medium may also be an integral part of the processor. The processor and the readable storage medium may reside in an application-specific integrated circuit (ASIC), and the ASIC may reside in user equipment; alternatively, the processor and the readable storage medium may exist as discrete components in a communication device. The readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
The present invention further provides a program product comprising execution instructions stored in a readable storage medium. At least one processor of a device may read the execution instructions from the readable storage medium and execute them, causing the device to implement the dynamic face image reconstruction methods provided by the various embodiments described above.
In the above device embodiments, it should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the present invention may be embodied directly as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A method of dynamic face image reconstruction, characterized by comprising:
extracting first-class neural response data, and obtaining a face base image according to the first-class neural response data and a preset face image reconstruction model;
extracting second-class neural response data, and obtaining a facial expression image according to the second-class neural response data and a preset facial expression reconstruction model;
extracting third-class neural response data, and obtaining a face identity image according to the third-class neural response data and a preset face identity reconstruction model; and
obtaining a dynamic face image according to the face base image, the facial expression image, and the face identity image.
2. The method according to claim 1, wherein obtaining the face base image according to the first-class neural response data and the preset face image reconstruction model comprises:
obtaining the face base image according to the following Formula 1 and the first-class neural response data;
wherein X_G_RECON is the face base image, X̄ is the average image of the preset dynamic face base image samples in the face image reconstruction model, Y_test is the first-class neural response data, Ȳ is the average data of the first-class neural response data samples generated by the preset dynamic face base image samples in the face image reconstruction model, s_test is the projection coordinate of Y_test, t_test is the projection coordinate of X_G_RECON, W_train is the s_test-t_test transformation matrix in the face image reconstruction model, U_train is the feature vector of Y_test in the face image reconstruction model, and V_train is the feature vector of the preset dynamic face base image samples in the face image reconstruction model.
3. The method according to claim 2, wherein before obtaining the face base image according to the first-class neural response data and the preset face image reconstruction model, the method further comprises:
obtaining dynamic face base image training samples and first-class neural response data training samples generated by the dynamic face base image samples;
using the dynamic face base image samples as the output and the first-class neural response data samples as the input, performing parameter learning via the following Formula 2 on the s-t transformation matrix, the feature vectors of the first-class neural response data samples, and the feature vectors of the dynamic face base image training samples, to obtain the s_test-t_test transformation matrix in the face image reconstruction model, the feature vectors of the first-class neural response data samples in the face image reconstruction model, and the feature vectors of the face base images in the face image reconstruction model,
wherein X is the dynamic face base image sample, X̄ is the average image of X, Y is the first-class neural response data sample, Ȳ is the average data of Y, s is the projection coordinate of Y, t is the projection coordinate of X, W is the s-t transformation matrix, U is the feature vector of Y, and V is the feature vector of X; and
obtaining the face image reconstruction model according to the s_test-t_test transformation matrix in the face image reconstruction model, the feature vectors of the first-class neural response data samples in the face image reconstruction model, and the feature vectors of the face base images in the face image reconstruction model.
4. The method according to claim 1, wherein obtaining the facial expression image according to the second-class neural response data and the preset facial expression reconstruction model comprises:
obtaining the facial expression image according to the following Formula 3 and the second-class neural response data;
wherein X_E_RECON is the facial expression image, X̄ is the average image of the preset dynamic facial expression image samples in the facial expression reconstruction model, Y_E_test is the second-class neural response data, Ȳ is the average data of the second-class neural response data samples generated by the preset dynamic facial expression image samples in the facial expression reconstruction model, s_E_test is the projection coordinate of Y_E_test, t_E_test is the projection coordinate of X_E_RECON, W_E_train is the s_E_test-t_E_test transformation matrix in the facial expression reconstruction model, U_E_train is the feature vector of Y_E_test in the facial expression reconstruction model, and V_E_train is the feature vector of the preset dynamic facial expression image samples in the facial expression reconstruction model.
5. The method according to claim 4, wherein before obtaining the facial expression image according to the second-class neural response data and the preset facial expression reconstruction model, the method further comprises:
obtaining dynamic facial expression image training samples and second-class neural response data training samples generated by the dynamic facial expression image samples;
using the dynamic facial expression image samples as the output and the second-class neural response data samples as the input, performing parameter learning via the following Formula 4 on the s_E-t_E transformation matrix, the feature vectors of the second-class neural response data samples, and the feature vectors of the dynamic facial expression image training samples, to obtain the s_E_test-t_E_test transformation matrix in the facial expression reconstruction model, the feature vectors of the second-class neural response data samples in the facial expression reconstruction model, and the feature vectors of the facial expression images in the facial expression reconstruction model,
wherein X_E is the dynamic facial expression image sample, X̄_E is the average image of X_E, Y_E is the second-class neural response data sample, Ȳ_E is the average data of Y_E, s_E is the projection coordinate of Y_E, t_E is the projection coordinate of X_E, W_E is the s_E-t_E transformation matrix, U_E is the feature vector of Y_E, and V_E is the feature vector of X_E; and
obtaining the facial expression reconstruction model according to the s_E_test-t_E_test transformation matrix in the facial expression reconstruction model, the feature vectors of the second-class neural response data samples in the facial expression reconstruction model, and the feature vectors of the facial expression images in the facial expression reconstruction model.
6. The method according to claim 1, wherein obtaining the face identity image according to the third-class neural response data and the preset face identity reconstruction model comprises:
obtaining the face identity image according to the following Formula 5 and the third-class neural response data;
wherein X_I_RECON is the face identity image, X̄ is the average image of the preset dynamic face identity image samples in the face identity reconstruction model, Y_I_test is the third-class neural response data, Ȳ is the average data of the third-class neural response data samples generated by the preset dynamic face identity image samples in the face identity reconstruction model, s_I_test is the projection coordinate of Y_I_test, t_I_test is the projection coordinate of X_I_RECON, W_train is the s_I_test-t_I_test transformation matrix in the face identity reconstruction model, U_I_train is the feature vector of Y_I_test in the face identity reconstruction model, and V_I_train is the feature vector of the preset dynamic face identity image samples in the face identity reconstruction model.
7. The method according to claim 6, wherein before obtaining the face identity image according to the third-class neural response data and the preset face identity reconstruction model, the method further comprises:
obtaining dynamic face identity image training samples and third-class neural response data training samples generated by the dynamic face identity image training samples;
using the dynamic face identity image samples as the output and the third-class neural response data samples as the input, performing parameter learning via the following Formula 6 on the s_I-t_I transformation matrix, the feature vectors of the third-class neural response data samples, and the feature vectors of the dynamic face identity image training samples, to obtain the s_I_test-t_I_test transformation matrix in the face identity reconstruction model, the feature vectors of the third-class neural response data samples in the face identity reconstruction model, and the feature vectors of the face identity image training samples,
wherein X_I is the dynamic face identity image sample, X̄_I is the average image of X_I, Y_I is the third-class neural response data sample, Ȳ_I is the average data of Y_I, s_I is the projection coordinate of Y_I, t_I is the projection coordinate of X_I, W_I is the s_I-t_I transformation matrix, U_I is the feature vector of Y_I, and V_I is the feature vector of X_I; and
obtaining the face identity reconstruction model according to the s_I_test-t_I_test transformation matrix in the face identity reconstruction model, the feature vectors of the third-class neural response data samples in the face identity reconstruction model, and the feature vectors of the face identity images in the face identity reconstruction model.
8. the method for dynamic human face image reconstruction according to claim 1, which is characterized in that the first kind neural response
Data are the neural response data obtained from the brain primary visual cortex brain area of user to be measured;
The second class neural response data are the neural response obtained from the rear side sulcus temporalis superior and amygdaloid nucleus brain area of user to be measured
Data;
The third class neural response data are from user to be measured from fusiform gyrus face processing area's brain area and front side temporal lobe brain area
The neural response data of acquisition.
9. the method for dynamic human face image reconstruction according to claim 1 characterized by comprising described according to
Face base image, the Facial Expression Image and the face identity image obtain dynamic human face image, comprising:
By the face base image, the average image of the Facial Expression Image and the face identity image, it is determined as institute
State dynamic human face image.
10. A device for dynamic face image reconstruction, characterized by comprising:
a first obtaining module, configured to obtain first-class neural response data, and obtain a face base image according to the first-class neural response data and a preset face image reconstruction model;
a second obtaining module, configured to obtain second-class neural response data, and obtain a facial expression image according to the second-class neural response data and a preset facial expression reconstruction model;
a third obtaining module, configured to obtain third-class neural response data, and obtain a face identity image according to the third-class neural response data and a preset face identity reconstruction model; and
a dynamic face image obtaining module, configured to obtain a dynamic face image according to the face base image, the facial expression image, and the face identity image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910382834.6A CN110148468B (en) | 2019-05-09 | 2019-05-09 | Method and device for reconstructing dynamic face image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910382834.6A CN110148468B (en) | 2019-05-09 | 2019-05-09 | Method and device for reconstructing dynamic face image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110148468A true CN110148468A (en) | 2019-08-20 |
CN110148468B CN110148468B (en) | 2021-06-29 |
Family
ID=67594881
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910382834.6A Expired - Fee Related CN110148468B (en) | 2019-05-09 | 2019-05-09 | Method and device for reconstructing dynamic face image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110148468B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112784660A (en) * | 2019-11-01 | 2021-05-11 | 财团法人工业技术研究院 | Face image reconstruction method and system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008013575A2 (en) * | 2006-01-31 | 2008-01-31 | University Of Southern California | 3d face reconstruction from 2d images |
CN101159015A (en) * | 2007-11-08 | 2008-04-09 | 清华大学 | Two-dimension human face image recognizing method |
CN101320484A (en) * | 2008-07-17 | 2008-12-10 | 清华大学 | Three-dimensional human face recognition method based on human face full-automatic positioning |
CN102254154A (en) * | 2011-07-05 | 2011-11-23 | 南京大学 | Method for authenticating human-face identity based on three-dimensional model reconstruction |
CN108109198A (en) * | 2017-12-18 | 2018-06-01 | 深圳市唯特视科技有限公司 | A kind of three-dimensional expression method for reconstructing returned based on cascade |
CN109255830A (en) * | 2018-08-31 | 2019-01-22 | 百度在线网络技术(北京)有限公司 | Three-dimensional facial reconstruction method and device |
-
2019
- 2019-05-09 CN CN201910382834.6A patent/CN110148468B/en not_active Expired - Fee Related
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008013575A2 (en) * | 2006-01-31 | 2008-01-31 | University Of Southern California | 3d face reconstruction from 2d images |
CN101159015A (en) * | 2007-11-08 | 2008-04-09 | 清华大学 | Two-dimension human face image recognizing method |
CN101320484A (en) * | 2008-07-17 | 2008-12-10 | 清华大学 | Three-dimensional human face recognition method based on human face full-automatic positioning |
CN102254154A (en) * | 2011-07-05 | 2011-11-23 | 南京大学 | Method for authenticating human-face identity based on three-dimensional model reconstruction |
CN108109198A (en) * | 2017-12-18 | 2018-06-01 | 深圳市唯特视科技有限公司 | A kind of three-dimensional expression method for reconstructing returned based on cascade |
CN109255830A (en) * | 2018-08-31 | 2019-01-22 | 百度在线网络技术(北京)有限公司 | Three-dimensional facial reconstruction method and device |
Non-Patent Citations (1)
Title |
---|
TECHPUNK: ""读心术:扫描大脑活动可重建你想象中的人脸图像"", 《HTTPS://WWW.SOHU.COM/A/223687701_102883》 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112784660A (en) * | 2019-11-01 | 2021-05-11 | 财团法人工业技术研究院 | Face image reconstruction method and system |
CN112784660B (en) * | 2019-11-01 | 2023-10-24 | 财团法人工业技术研究院 | Face image reconstruction method and system |
Also Published As
Publication number | Publication date |
---|---|
CN110148468B (en) | 2021-06-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Gao et al. | The neural sources of N170: Understanding timing of activation in face‐selective areas | |
Bigdely-Shamlo et al. | Measure projection analysis: a probabilistic approach to EEG source comparison and multi-subject inference | |
Wang et al. | Graph theoretical analysis of functional brain networks: test-retest evaluation on short-and long-term resting-state functional MRI data | |
Michalka et al. | Short-term memory for space and time flexibly recruit complementary sensory-biased frontal lobe attention networks | |
CN106023194B (en) | Amygdaloid nucleus spectral clustering dividing method based on tranquillization state function connects | |
Windhoff et al. | Electric field calculations in brain stimulation based on finite elements: an optimized processing pipeline for the generation and usage of accurate individual head models | |
Shinkareva et al. | Commonality of neural representations of words and pictures | |
Wang et al. | Where color rests: spontaneous brain activity of bilateral fusiform and lingual regions predicts object color knowledge performance | |
Anderson et al. | Network anticorrelations, global regression, and phase‐shifted soft tissue correction | |
JP5816917B2 (en) | Brain activity measuring device, brain activity measuring method, and brain activity estimating device | |
Nunez et al. | Multi-scale neural sources of EEG: genuine, equivalent, and representative. A tutorial review | |
US11333730B2 (en) | Systems and methods for mapping neuronal circuitry and clinical applications thereof | |
Allen et al. | A massive 7T fMRI dataset to bridge cognitive and computational neuroscience | |
CN103646183A (en) | Intelligent alzheimer disease discriminant analysis method based on artificial neural network and multi-modal MRI (Magnetic Resonance Imaging) | |
Fastenrath et al. | Human cerebellum and corticocerebellar connections involved in emotional memory enhancement | |
CA3063321A1 (en) | Method, command, device and program to determine at least one brain network involved in carrying out a given process | |
Zhao et al. | Two-stage spatial temporal deep learning framework for functional brain network modeling | |
Conte et al. | The influence of the head model conductor on the source localization of auditory evoked potentials | |
Nandy et al. | Novel nonparametric approach to canonical correlation analysis with applications to low CNR functional MRI data | |
Ghosh et al. | Organization of directed functional connectivity among nodes of ventral attention network reveals the common network mechanisms underlying saliency processing across distinct spatial and spatio-temporal scales | |
CN110148468A (en) | The method and device of dynamic human face image reconstruction | |
CN107209794A (en) | The finite element modeling of anatomical structure | |
Wagner et al. | Statistical non-parametric mapping in sensor space | |
Tsai et al. | Mapping single-trial EEG records on the cortical surface through a spatiotemporal modality | |
CN114494132A (en) | Disease classification system based on deep learning and fiber bundle spatial statistical analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20210629 |