CN102081733A - Multi-modal information combined pose-varied three-dimensional human face five-sense organ marking point positioning method - Google Patents


Info

Publication number
CN102081733A
CN102081733A
Authority
CN
China
Prior art keywords
face
point
mark
dimensional
human face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201110007180
Other languages
Chinese (zh)
Other versions
CN102081733B (en)
Inventor
张艳宁
郭哲
林增刚
郗润平
梁君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Haixun Railway Equipment Group Co., Ltd.
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201110007180A priority Critical patent/CN102081733B/en
Publication of CN102081733A publication Critical patent/CN102081733A/en
Application granted granted Critical
Publication of CN102081733B publication Critical patent/CN102081733B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-modal information combined pose-varied three-dimensional human face facial landmark point positioning method, which solves the poor pose robustness of existing pose-varied three-dimensional face landmark localization methods. The technical scheme locates the facial landmark points of a three-dimensional face by combining multi-modal information: it exploits the clear contours of the facial features in the two-dimensional image and the distinct concave-convex landmark regions in the three-dimensional face model, so the pose of the face model need not be estimated and compensated beforehand, giving strong robustness to pose. The average localization accuracy for the facial landmark points of the three-dimensional face model reaches 98.5 percent; for face models with small pose variation, accuracy improves from 88.3 percent in the background art to 91.6 percent; and for face models with large pose variation, where the method shows clear superiority, accuracy improves from 57.5 percent in the background art to 73.5 percent.

Description

Multi-modal information combined pose-varied three-dimensional face facial landmark point localization method
Technical field
The present invention relates to a pose-varied three-dimensional face facial landmark localization method, and in particular to a pose-varied three-dimensional face facial landmark localization method combining multi-modal information.
Background technology
Facial landmark point extraction for pose-varied faces is a key technique in face modeling, expression analysis and face recognition; accurate and efficient facial feature extraction lays a good foundation for these applications.
Document " Guangpeng Zhang; Yunhong Wang.A 3D facial feature point localization method based on statistical shape model.in Proceeding of ICASSP2007, pp.249-252. " discloses the facial face monumented point of a kind of colourful attitude three-dimensional face localization method.This method changes by detecting facial curve form, and sets up face distribution statistics model people's face face monumented point is positioned.At first, adopt curve form index feature that people's face surf zone is carried out coarse segmentation, be partitioned into the face zone; Next adopts facial face statistical shape model that divided area is carried out positioning feature point, satisfy statistical shape model in the face zone and cut apart the face unique point that rule and the nearest point of distance areas central point are final location, thereby realize the extraction of attitude invariant features point.But there is following problem in this method: at first, the curve form index in this method is subjected to the influence of curved surface noise spot bigger, and locating accuracy has only 88.3%; Secondly, adopt the face statistical shape model that attitude is had bigger dependence, though can overcome the influence of part attitude, for attitude variation faceform greatly, locating accuracy only reaches 57.5%.
Summary of the invention
To overcome the poor pose robustness of existing pose-varied three-dimensional face facial landmark localization methods, the invention provides a multi-modal information combined pose-varied three-dimensional face facial landmark localization method. The method locates the facial landmark points of the three-dimensional face by combining multi-modal information; by exploiting the clear facial feature contours in the two-dimensional image and the distinct concave-convex landmark regions in the three-dimensional face model, the pose of the face model need not be estimated and compensated beforehand, so the method is strongly robust to pose.
The technical solution adopted by the present invention to solve the technical problem is a multi-modal information combined pose-varied three-dimensional face facial landmark localization method, characterized by comprising the following steps:
(a) Perform feature point detection on the 2-D face texture image. For every 2-D face texture image, accurately simulate the image deformations produced by changes of the camera optical-axis direction away from the frontal view. The tilt transform is realized by a directional subsampling of factor t; the simulated image is rotated by φ according to the tilt parameter t = 1/|cos θ|, after convolving the image in the x direction with a Gaussian of standard deviation c·sqrt(t² − 1), c = 0.8, where φ is the camera longitude angle and θ is the camera latitude angle.
The deformation variables are controlled by the camera longitude angle φ and the camera latitude angle θ.
Compare all simulated images with a similarity-invariant matching method, and extract the most representative facial feature description points from the detected feature points.
(b) Map the facial feature point data U = {u_i ∈ R²: i = 1, 2, ..., N} detected in the 2-D face texture image, where R² denotes the 2-D data space, into the 3-D face model according to the correspondence. Feature points detected in non-face regions of the 2-D image have feature value 0 at the corresponding points in the 3-D face data, i.e. they belong to non-face regions, and these non-face points are rejected from the 3-D face data. For every vertex p_i of the 3-D face model data set P = {p_i, i = 1, 2, ..., N}, compute from the maximum curvature k1^{p_i} and the minimum curvature k2^{p_i} the ridge line l_ridge and valley line l_valley, the structural descriptors of the concave-convex variation of the facial feature regions:
l_ridge = {p_i | k1^{p_i} > k_thresh1, p_i ∈ P}    (1)
l_valley = {p_i | k2^{p_i} > k_thresh2, p_i ∈ P}    (2)
These form the local variation point set of the facial features. Here k_thresh1 and k_thresh2 are the curvature thresholds of the local region, computed by a genetic algorithm; this yields the ridge line l_ridge and valley line l_valley of the facial feature regions.
(c) Step 1: For every landmark point p_{i,F|mark}^f and p_{j,F|mark}^{nf} in the variation point sets P_{F|mark}^f and P_{F|mark}^{nf} of the different-pose 3-D face data sets M^f and M^{nf}, build a shape code and compare them to obtain the feature point pair constraint relation R.
Step 2: For the point pairs established by the constraint R, compute the matching error D(p_{i,F|mark}^f, p_{j,F|mark}^{nf}) of every point in P_{F|mark}^f and P_{F|mark}^{nf}, and derive the deviation threshold τ by statistics.
Step 3: Remove from P_{F|mark}^f and P_{F|mark}^{nf} the points whose error exceeds the deviation threshold τ, forming new sets P_{F|mark}^{f'} and P_{F|mark}^{nf'}; return to Step 2. If the difference between two consecutive thresholds is less than a given error, the iteration ends.
Step 4: P_{F|mark}^{f'} and P_{F|mark}^{nf'} are the facial landmark points of the corresponding face data models.
The beneficial effects of the invention are as follows. Because multi-modal information is combined to locate the facial landmark points of the three-dimensional face, exploiting the clear facial feature contours in the two-dimensional image and the distinct concave-convex landmark regions in the three-dimensional face model, the pose of the face model need not be estimated and compensated beforehand, so the method is strongly robust to pose. The average localization accuracy of the facial landmark points of frontal three-dimensional face models reaches 98.5%; for face models with small pose variation, the accuracy rises from 88.3% in the background art to 91.6%; and for face models with large pose variation, where the invention shows clear superiority, the accuracy rises from 57.5% in the background art to 73.5%, far above the background art.
The present invention is elaborated below in conjunction with an embodiment.
Embodiment
The multi-modal information combined pose-varied three-dimensional face facial landmark localization method of the present invention first applies the affine-invariant Affine-SIFT method to detect feature points in the two-dimensional face texture image. Next, the mapping relation projects the feature points detected in two-dimensional space into three-dimensional space; for every vertex of the three-dimensional face model data set, the ridge and valley lines described by the maximum and minimum curvatures, the structural descriptors of the concave-convex variation of the facial feature regions, are computed as the local variation point set of the facial features. Finally, a constraint relation between feature points of faces in different poses is established and the matching is optimized by least squares; the matching results are analyzed statistically to derive a deviation value within which most points fall, the points with larger deviation are filtered out using this value as threshold, and a new set is built, the filtered points being regarded as non-landmark points; this repeats until the difference between two consecutive thresholds is less than a given error, at which point the points remaining in the set are determined to be the facial landmark points. Because the invention combines multi-modal information to locate the facial landmark points of the three-dimensional face, exploiting the clear facial feature contours in the two-dimensional image and the distinct concave-convex landmark regions in the three-dimensional face model, the pose of the face model need not be estimated and compensated beforehand; the method is thus strongly robust to pose and achieves localization of facial landmark points on pose-varied three-dimensional faces.
1. Feature point detection based on the 2-D texture image.
A human face can move with six degrees of freedom in three-dimensional space, and in practice the face images acquired by different cameras suffer distortion caused by changes of the camera optical-axis direction, which makes facial feature point detection considerably harder. ASIFT (Affine Scale Invariant Feature Transform) keeps the good properties of SIFT, namely invariance to rotation, scale, translation and brightness changes and a degree of stability under viewpoint change and noise, while adding strong insensitivity to affine distortion. The ASIFT method first accurately simulates the image distortions produced by changes of the camera optical-axis direction, then detects feature points with the same steps as SIFT. ASIFT simulates three parameters, namely scale, camera longitude angle and camera latitude angle, while normalizing rotation and translation, and is therefore invariant to affine transformation.
The ASIFT method is implemented in the following steps:
Step 1: For every image, simulate the affine deformations that changes of the camera optical-axis direction away from the frontal view may cause. The deformation is determined by two parameters, the camera longitude angle φ and the camera latitude angle θ. The image is rotated by φ according to the tilt parameter t = 1/|cos θ|. For a digital image the tilt transform is realized by a directional subsampling of factor t, which first requires convolving the image in the x direction with a Gaussian of standard deviation c·sqrt(t² − 1), choosing c = 0.8.
Step 2: These rotation and tilt variables are sampled over a finite series of latitude and longitude angles; the sampling steps of these parameters ensure that the simulated images remain similar to the other views generated by the camera longitude angle φ and latitude angle θ.
Step 3: All simulated images are compared with the similarity-invariant matching method SIFT.
ASIFT is invariant to affine transformation and, since it keeps SIFT's inherent invariance to scale, rotation and translation, it robustly detects matching feature point pairs between different-pose face images of the same person. However, most feature points extracted by ASIFT are meaningless for describing the facial features; measured against the facial-feature description landmarks that face recognition most needs, the majority of the feature points are redundant. To analyze faces effectively, the most representative facial feature description points must be extracted from the detected feature points.
2. Description of the local facial feature variations.
The two-dimensional image data U = {u_i ∈ R²: i = 1, 2, ..., N} in the face database and the 3-D mesh data V = {v_j = (x_j, y_j, z_j) ∈ R³: j = 1, 2, ..., M} are related by a mapping, denoted Ψ: u_i → v_j, where R² and R³ denote the two- and three-dimensional data spaces respectively. The feature points of the two-dimensional image are mapped into three-dimensional space; feature points detected in non-face regions of the two-dimensional image have feature value 0 at the corresponding points in the three-dimensional face data, i.e. they belong to non-face regions, so the non-face points can be rejected automatically from the three-dimensional face data.
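A minimal sketch of this 2-D-to-3-D mapping and non-face rejection follows. The dictionary-based pixel-to-vertex mapping and every name here are illustrative assumptions, since the patent does not specify how Ψ is stored.

```python
import numpy as np

def map_features_to_3d(uv_points, mapping, vertices, feature_vals):
    """Carry 2-D feature points onto the 3-D mesh through a precomputed
    pixel -> vertex-index mapping; points whose mapped feature value is 0
    lie in non-face regions and are discarded."""
    idx = np.array([mapping[tuple(uv)] for uv in uv_points])
    keep = feature_vals[idx] != 0   # 0 marks non-face regions
    return vertices[idx[keep]]

# toy example: three pixels map to three vertices, one of them non-face
mapping = {(0, 0): 0, (5, 5): 1, (9, 9): 2}
vertices = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [2.0, 2.0, 2.0]])
feature_vals = np.array([1.0, 0.0, 1.0])   # vertex 1 is non-face
kept = map_features_to_3d([(0, 0), (5, 5), (9, 9)],
                          mapping, vertices, feature_vals)
```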
Let P = {p_i, i = 1, 2, ..., N} denote the three-dimensional face data set. For every vertex p_i of this set, the maximum and minimum curvatures, denoted k1^{p_i} and k2^{p_i} respectively, describe important structural information of the surface, and the ridge and valley lines on the 3-D surface computed from the maximum and minimum curvatures, denoted l_ridge and l_valley respectively, describe the concave-convex variation of the facial feature regions well. They are computed as:
l_ridge = {p_i | k1^{p_i} > k_thresh1, p_i ∈ P}    (1)
l_valley = {p_i | k2^{p_i} > k_thresh2, p_i ∈ P}    (2)
where k_thresh1 and k_thresh2 are the curvature thresholds of the local region, whose values can be computed by a genetic algorithm. By this computation the valley and ridge lines of the facial feature regions are obtained. Let the facial-region feature point set mapped into three dimensions be denoted P_F; the points satisfying the ridge- and valley-line conditions of formulas (1) and (2) are computed, and the corresponding ridge and valley lines are kept within this set. The ridge and valley lines within the feature point set P_F are labeled P_{F|ridge} and P_{F|valley} respectively, and their union P_{F|mark} denotes the local variation point set of the facial features.
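Formulas (1) and (2) reduce to simple curvature thresholding. The sketch below assumes the principal curvatures k1, k2 and the thresholds are already given; the genetic-algorithm computation of the thresholds is not reproduced here, and all names are illustrative.

```python
import numpy as np

def ridge_valley_mark_set(points, k1, k2, k_thresh1, k_thresh2):
    """Formulas (1)-(2): a vertex joins the ridge set l_ridge when its
    maximum curvature k1 exceeds k_thresh1, and the valley set l_valley
    when its minimum curvature k2 exceeds k_thresh2; their union is the
    local variation point set P_F|mark."""
    ridge = points[k1 > k_thresh1]
    valley = points[k2 > k_thresh2]
    mark = np.unique(np.vstack([ridge, valley]), axis=0)  # union of both sets
    return ridge, valley, mark

# toy example: four vertices, two ridge candidates, one valley candidate
points = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0], [3.0, 0, 0]])
k1 = np.array([2.0, 0.1, 0.2, 3.0])
k2 = np.array([0.1, 1.5, 0.2, 0.1])
ridge, valley, mark = ridge_valley_mark_set(points, k1, k2, 1.0, 1.0)
```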
3. Iterative constrained facial landmark localization.
To determine the final landmark points from the local variation point set P_{F|mark}, constraint relations between the facial landmark points of face data sets in different poses must be established and refined by iterative optimization until an optimal match is reached. The idea of the algorithm is: establish the constraint relation between feature points of faces in different poses, then optimize the matching by least squares; analyze the matching results statistically to derive a deviation value τ within which most points fall; filter out the points with larger deviation using this value as threshold and build a new set P_{F|mark}', regarding the filtered points as non-landmark points; repeat until the difference between two consecutive thresholds is less than a given error, at which point the points in the set P_{F|mark}' are determined to be the facial landmark points.
The algorithm proceeds as follows:
Step 1: Given the different-pose three-dimensional face data sets M^f and M^{nf}, build a shape code for every landmark point p_{i,F|mark}^f and p_{j,F|mark}^{nf} in the variation point sets P_{F|mark}^f and P_{F|mark}^{nf}, and compare them to obtain the feature point pair constraint relation R.
Step 2: For the point pairs established by the constraint R, compute the matching error D(p_{i,F|mark}^f, p_{j,F|mark}^{nf}) of every point in P_{F|mark}^f and P_{F|mark}^{nf}, and derive the deviation threshold τ by statistics.
Step 3: Remove from P_{F|mark}^f and P_{F|mark}^{nf} the points whose error exceeds the threshold τ, forming new sets P_{F|mark}^{f'} and P_{F|mark}^{nf'}; return to Step 2. If the difference between two consecutive thresholds is less than a given error, the iteration ends.
Step 4: The resulting P_{F|mark}^{f'} and P_{F|mark}^{nf'} are the facial landmark points of the corresponding face data models.
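The iterative filtering of Steps 2 and 3 can be sketched as follows. The choice τ = mean + std of the errors is an assumed statistic (the patent only says τ is derived by statistics), and a Euclidean distance stands in for the shape-code matching error D; the function name and parameters are illustrative.

```python
import numpy as np

def iterative_landmark_filter(pts_a, pts_b, eps=1e-3, max_iter=50):
    """Repeat: compute per-pair matching errors, derive the deviation
    threshold tau, drop pairs whose error exceeds tau; stop when tau
    changes by less than eps between consecutive iterations."""
    a = np.asarray(pts_a, dtype=float)
    b = np.asarray(pts_b, dtype=float)
    tau_prev = np.inf
    for _ in range(max_iter):
        errs = np.linalg.norm(a - b, axis=1)   # stand-in for D(., .)
        tau = errs.mean() + errs.std()         # assumed deviation statistic
        keep = errs <= tau
        a, b = a[keep], b[keep]
        if abs(tau_prev - tau) < eps:
            break
        tau_prev = tau
    return a, b

# toy example: five matched pairs, one gross outlier pair
a = np.array([[0.0, 0], [1, 1], [2, 2], [3, 3], [10, 10]])
b = np.array([[0.0, 0], [1, 1], [2, 2], [3, 3], [20, 20]])
fa, fb = iterative_landmark_filter(a, b)
```

The surviving pairs are the ones whose matching error stays within the converged threshold, i.e. the landmark candidates.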

Claims (1)

1. A multi-modal information combined pose-varied three-dimensional face facial landmark localization method, characterized by comprising the following steps:
(a) performing feature point detection on the 2-D face texture image: for every 2-D face texture image, accurately simulating the image deformations produced by changes of the camera optical-axis direction away from the frontal view, the tilt transform being realized by a directional subsampling of factor t, and the simulated image being rotated by φ according to the tilt parameter t = 1/|cos θ| after convolving the image in the x direction with a Gaussian of standard deviation c·sqrt(t² − 1), c = 0.8, where φ is the camera longitude angle and θ is the camera latitude angle;
the deformation variables being controlled by the camera longitude angle φ and the camera latitude angle θ;
comparing all simulated images with a similarity-invariant matching method, and extracting the most representative facial feature description points from the detected feature points;
(b) mapping the facial feature point data U = {u_i ∈ R²: i = 1, 2, ..., N} detected in the 2-D face texture image, where R² denotes the 2-D data space, into the 3-D face model according to the correspondence; feature points detected in non-face regions of the 2-D image having feature value 0 at the corresponding points in the 3-D face data, i.e. belonging to non-face regions, these non-face points being rejected from the 3-D face data; for every vertex p_i of the 3-D face model data set P = {p_i, i = 1, 2, ..., N}, computing from the maximum curvature k1^{p_i} and the minimum curvature k2^{p_i} the ridge line l_ridge and valley line l_valley, the structural descriptors of the concave-convex variation of the facial feature regions:
l_ridge = {p_i | k1^{p_i} > k_thresh1, p_i ∈ P}    (1)
l_valley = {p_i | k2^{p_i} > k_thresh2, p_i ∈ P}    (2)
as the local variation point set of the facial features, where k_thresh1 and k_thresh2 are the curvature thresholds of the local region, computed by a genetic algorithm, yielding the ridge line l_ridge and valley line l_valley of the facial feature regions;
(c) Step 1: for every landmark point p_{i,F|mark}^f and p_{j,F|mark}^{nf} in the variation point sets P_{F|mark}^f and P_{F|mark}^{nf} of the different-pose 3-D face data sets M^f and M^{nf}, building a shape code and comparing them to obtain the feature point pair constraint relation R;
Step 2: for the point pairs established by the constraint R, computing the matching error D(p_{i,F|mark}^f, p_{j,F|mark}^{nf}) of every point in P_{F|mark}^f and P_{F|mark}^{nf}, and deriving the deviation threshold τ by statistics;
Step 3: removing from P_{F|mark}^f and P_{F|mark}^{nf} the points whose error exceeds the deviation threshold τ, forming new sets P_{F|mark}^{f'} and P_{F|mark}^{nf'}, and returning to Step 2; if the difference between two consecutive thresholds is less than a given error, the iteration ends;
Step 4: P_{F|mark}^{f'} and P_{F|mark}^{nf'} are the facial landmark points of the corresponding face data models.
CN201110007180A 2011-01-13 2011-01-13 Multi-modal information combined pose-varied three-dimensional human face five-sense organ marking point positioning method Active CN102081733B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110007180A CN102081733B (en) 2011-01-13 2011-01-13 Multi-modal information combined pose-varied three-dimensional human face five-sense organ marking point positioning method

Publications (2)

Publication Number Publication Date
CN102081733A true CN102081733A (en) 2011-06-01
CN102081733B CN102081733B (en) 2012-10-10

Family

ID=44087689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110007180A Active CN102081733B (en) 2011-01-13 2011-01-13 Multi-modal information combined pose-varied three-dimensional human face five-sense organ marking point positioning method

Country Status (1)

Country Link
CN (1) CN102081733B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000306106A (en) * 1999-02-15 2000-11-02 Medeikku Engineering:Kk Method for orientating three-dimensional directed object and image processor
TW200725433A (en) * 2005-12-29 2007-07-01 Ind Tech Res Inst Three-dimensional face recognition system and method thereof
CN101127075A (en) * 2007-09-30 2008-02-20 西北工业大学 Multi-view angle three-dimensional human face scanning data automatic registration method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhe Guo et al., "A Method Based on Geometric Invariant Feature for 3D Face Recognition," 2009 Fifth International Conference on Image and Graphics, 2009. *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103310179A (en) * 2012-03-06 2013-09-18 上海骏聿数码科技有限公司 Method and system for optimal attitude detection based on face recognition technology
CN103247032B (en) * 2013-04-26 2015-12-02 中国科学院光电技术研究所 Weak extended target positioning method based on attitude compensation
CN103247032A (en) * 2013-04-26 2013-08-14 中国科学院光电技术研究所 Weak extended target positioning method based on attitude compensation
CN104966316B (en) * 2015-05-22 2019-03-15 腾讯科技(深圳)有限公司 A kind of 3D facial reconstruction method, device and server
CN104966316A (en) * 2015-05-22 2015-10-07 腾讯科技(深圳)有限公司 3D face reconstruction method, apparatus and server
WO2016188318A1 (en) * 2015-05-22 2016-12-01 腾讯科技(深圳)有限公司 3d human face reconstruction method, apparatus and server
US10055879B2 (en) 2015-05-22 2018-08-21 Tencent Technology (Shenzhen) Company Limited 3D human face reconstruction method, apparatus and server
CN107122705A (en) * 2017-03-17 2017-09-01 中国科学院自动化研究所 Face critical point detection method based on three-dimensional face model
CN107122705B (en) * 2017-03-17 2020-05-19 中国科学院自动化研究所 Face key point detection method based on three-dimensional face model
CN108280803A (en) * 2018-01-22 2018-07-13 盎锐(上海)信息科技有限公司 Image generating method and device based on 3D imagings
CN109190484A (en) * 2018-08-06 2019-01-11 北京旷视科技有限公司 Image processing method, device and image processing equipment
US11461908B2 (en) 2018-08-06 2022-10-04 Beijing Kuangshi Technology Co., Ltd. Image processing method and apparatus, and image processing device using infrared binocular cameras to obtain three-dimensional data
CN109446879A (en) * 2018-09-04 2019-03-08 南宁学院 A kind of Intelligent human-face recognition methods
CN110084200A (en) * 2019-04-29 2019-08-02 重庆指讯科技股份有限公司 A kind of retail method based on recognition of face, system and terminal device
CN110443885A (en) * 2019-07-18 2019-11-12 西北工业大学 Three-dimensional number of people face model reconstruction method based on random facial image
CN110443885B (en) * 2019-07-18 2022-05-03 西北工业大学 Three-dimensional human head and face model reconstruction method based on random human face image

Also Published As

Publication number Publication date
CN102081733B (en) 2012-10-10

Similar Documents

Publication Publication Date Title
CN102081733B (en) Multi-modal information combined pose-varied three-dimensional human face five-sense organ marking point positioning method
Hausler et al. Patch-netvlad: Multi-scale fusion of locally-global descriptors for place recognition
CN108090958B (en) Robot synchronous positioning and map building method and system
CN109166149B (en) Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
Zhu et al. Single image 3d object detection and pose estimation for grasping
CN102880866B (en) Method for extracting face features
Castaldo et al. Semantic cross-view matching
CN109872397A (en) A kind of three-dimensional rebuilding method of the airplane parts based on multi-view stereo vision
CN105005755A (en) Three-dimensional face identification method and system
CN103136520B (en) The form fit of Based PC A-SC algorithm and target identification method
CN103136525B (en) High-precision positioning method for special-shaped extended target by utilizing generalized Hough transformation
CN101398886A (en) Rapid three-dimensional face identification method based on bi-eye passiveness stereo vision
CN101782969B (en) Reliable image characteristic matching method based on physical positioning information
Tung et al. Dynamic surface matching by geodesic mapping for 3d animation transfer
WO2012077286A1 (en) Object detection device and object detection method
CN103413347A (en) Extraction method of monocular image depth map based on foreground and background fusion
CN104598878A (en) Multi-modal face recognition device and method based on multi-layer fusion of gray level and depth information
CN107862735B (en) RGBD three-dimensional scene reconstruction method based on structural information
CN104091162A (en) Three-dimensional face recognition method based on feature points
CN103366400A (en) Method for automatically generating three-dimensional head portrait
CN102903109B (en) A kind of optical image and SAR image integration segmentation method for registering
CN104143080A (en) Three-dimensional face recognition device and method based on three-dimensional point cloud
CN108509866B (en) Face contour extraction method
CN103679702A (en) Matching method based on image edge vectors
CN109087323A (en) A kind of image three-dimensional vehicle Attitude estimation method based on fine CAD model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: JIANGSU HAIXUN RAILWAY EQUIPMENT GROUP CO., LTD.

Free format text: FORMER OWNER: NORTHWESTERN POLYTECHNICAL UNIVERSITY

Effective date: 20140813

Owner name: NORTHWESTERN POLYTECHNICAL UNIVERSITY

Effective date: 20140813

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 710072 XI AN, SHAANXI PROVINCE TO: 226600 NANTONG, JIANGSU PROVINCE

TR01 Transfer of patent right

Effective date of registration: 20140813

Address after: 226600 Nantong, Haian, east of the town of East China Sea Road (East), No. 18, No.

Patentee after: Jiangsu Haixun Railway Equipment Group Co., Ltd.

Patentee after: Northwestern Polytechnical University

Address before: 710072 Xi'an friendship West Road, Shaanxi, No. 127

Patentee before: Northwestern Polytechnical University