WO2012142756A1 - 基于人眼图像的多特征融合身份识别方法 (Multi-feature fusion identity recognition method based on human eye images) - Google Patents

基于人眼图像的多特征融合身份识别方法 (Multi-feature fusion identity recognition method based on human eye images) Download PDF

Info

Publication number
WO2012142756A1
WO2012142756A1 (PCT/CN2011/073072)
Authority
WO
WIPO (PCT)
Prior art keywords
iris
human eye
image
features
feature
Prior art date
Application number
PCT/CN2011/073072
Other languages
English (en)
French (fr)
Inventor
谭铁牛 (Tan Tieniu)
孙哲南 (Sun Zhenan)
张小博 (Zhang Xiaobo)
张慧 (Zhang Hui)
Original Assignee
中国科学院自动化研究所 (Institute of Automation, Chinese Academy of Sciences)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院自动化研究所 (Institute of Automation, Chinese Academy of Sciences)
Priority to PCT/CN2011/073072 priority Critical patent/WO2012142756A1/zh
Priority to CN201180005239.2A priority patent/CN102844766B/zh
Priority to US13/519,728 priority patent/US9064145B2/en
Publication of WO2012142756A1 publication Critical patent/WO2012142756A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70Multimodal biometrics, e.g. combining information from different biometric modalities

Definitions

  • Multi-feature fusion identity recognition method based on human eye images
  • The invention relates to the fields of pattern recognition and statistical learning, and particularly to a multi-feature fusion identity recognition method based on human eye images.
  • A person's identity is basic personal information, and its importance is self-evident.
  • Traditional knowledge-based and token-based authentication technologies such as passwords, PINs and ID cards are difficult to scale to large applications and high security levels, and also inconvenience users.
  • Intelligent, information-based systems have become an inevitable trend of development.
  • Large-scale identity authentication technology is of great significance to homeland security, public security, financial security and network security.
  • Biometric technology uses human physiological and behavioral characteristics for identification; it offers high uniqueness, ease of use and good security.
  • Existing mainstream biometric modalities include the face, iris, voice, fingerprint, palm print, signature, gait, and the like.
  • The corresponding biometric recognition systems have also been successfully applied in various fields of social life, such as access control and network security.
  • Most security scenarios require active recognition: the sensor must acquire the user's information without the user's active cooperation, for example performing real-time identity authentication on the people in a surveillance scene.
  • Some modalities, such as face and gait, allow identification without user cooperation, but their recognition accuracy falls short of practical requirements.
  • The human eye region mainly includes the pupil, iris, eyelids, periocular skin, eyebrows, eyelashes, and the like.
  • Iris recognition, based on the uniqueness of iris texture, has become one of the most effective biometrics.
  • Related recognition systems have not only been successfully applied in banks, coal mines, customs, airports and other places, but have also played an important role in social issues such as welfare distribution and the search for missing children.
  • The skin texture around the human eye has also been shown in recent literature to have good discriminative power and can be used for identification.
  • Under visible light, the iris and periocular skin exhibit color characteristics that can serve as auxiliary features.
  • The human eye region also carries strong semantic features, such as left/right eye, single/double eyelid and eyelid contour, which have a certain classification ability. The multi-feature nature of the eye region thus makes it one of the most discriminative biometrics.
  • Biometrics based on the eye region are also among the easiest to use and popularize.
  • The eye region is the most critical part of the face.
  • The human eye is the visual organ that perceives the outside world, so the eye region is usually externally visible; even when the face is occluded, the eye region is the least occluded area. The eye region is therefore the easiest to acquire with a visual sensor such as a camera.
  • Active imaging systems have also become possible; related systems can obtain clear human eye images at distances of more than ten or even tens of meters. This enables eye-image-based identification with friendly human-computer interaction and active recognition.
  • Iris recognition alone is easily limited by the application scenario: for example, some eye diseases alter the iris texture and make it unusable for iris recognition. Since the human eye region includes a variety of biometric modalities, such as the iris and skin texture, multimodal biometrics can meet the needs of many application scenarios.
  • In existing patents, iris recognition systems based on iris uniqueness are the main method of identity authentication using eye-region information, and other features of the eye are not used.
  • Existing iris recognition patents, such as the Gabor-filter feature-encoding algorithm proposed by Dr. Daugman of the University of Cambridge, UK (U.S. Patent 5,291,560), and the iris-blob analysis method of Tan Tieniu and colleagues at the Institute of Automation, Chinese Academy of Sciences (CN 1684095), all identify persons from local properties of iris texture; in practice they are susceptible to noise and depend on the accuracy of iris segmentation.
  • The method of the invention uses the global features of the iris with a sparse-coding-based recognition method; it effectively overcomes the influence of noise and requires no extra segmentation-based noise detection.
  • Traditional score-level fusion neither accounts for score distributions and data noise nor fully exploits the complementarity between the modalities.
  • A multi-feature fusion identity recognition method based on human eye images includes registration and identification, wherein the registration includes:
  • for a given enrollment eye image, obtaining a normalized human eye image and a normalized iris image; extracting multimodal features from the eye image of the user to be registered, and saving the multimodal features of the eye image as registration information to the registration database;
  • the identification includes: for a given probe eye image, obtaining a normalized human eye image and a normalized iris image; extracting multimodal features from the eye image of the user to be identified;
  • comparing the extracted multimodal features with the multimodal features in the database to obtain comparison scores, and obtaining a fused score through score-level fusion;
  • performing multi-feature fusion identity recognition of the eye image with a classifier.
  • The invention integrates multiple kinds of feature information from the eye region for identity authentication; the system's recognition accuracy is high, and it can be used in application sites with a high security level.
  • The invention reduces the required degree of user cooperation and can be used for long-distance, active identity authentication.
  • The present invention applies not only to human eye images under visible light, but also to human eye images under other monochromatic light.
  • DRAWINGS: Figure 1(a) shows the registration process of the multi-feature fusion identity recognition method based on human eye images.
  • Figure 1(b) shows the identification process of the method.
  • Figure 2(a) shows a grayscale human eye image.
  • Figure 2(b) shows a color human eye image.
  • Figure 3 shows the human eye image preprocessing pipeline of the method.
  • Figure 4 shows the iris region localization result and the eye region localization result of a human eye image.
  • Figure 5(a) shows a grayscale normalized human eye image.
  • Figure 5(b) shows a color normalized human eye image.
  • Figure 6(a) shows a grayscale normalized iris image.
  • Figure 6(b) shows a color normalized iris image.
  • Figure 7 shows the block partition of the color normalized iris image in iris color feature extraction.
  • Figure 8 shows the block partition of the color normalized human eye image in eye appearance feature extraction.
  • Figure 9 shows the region-of-interest selection and filter design results in eye semantic feature extraction.
  • Figure 10(a) shows the texton (texture primitive) training process.
  • Figure 10(b) shows the texton histogram construction process.
  • Figure 11 illustrates the eye semantic feature extraction process.
  • The multi-feature fusion identity recognition method based on human eye images proposed by the invention comprises a registration step and an identification step:
  • Registration step R: As shown in Figure 1(a), the acquired human eye image of the user to be registered is first preprocessed (R0) to obtain a normalized iris image and a normalized human eye image usable for feature extraction. The feature extraction method is then applied to the normalized images to extract the multimodal features, and the registration information of the human eye image is obtained and saved to the registration database. The step mainly includes: Step R0: preprocess the acquired human eye image of the user to be registered, including iris localization, iris normalization, eye region localization and eye region normalization, obtaining a normalized human eye image and iris image.
  • Step R11: For the grayscale normalized iris image of the human eye image to be registered, downsample it and arrange all pixel values row by row into the iris texture feature vector v^r_texture, saved to the iris texture feature database.
  • Step R12: For the color normalized iris image of the human eye image to be registered, extract the iris color feature vector v^r_color with the color-histogram method, saved to the iris color feature database.
  • Step R13: For the color normalized human eye image of the human eye image to be registered, extract the eye appearance feature vector v^r_texton from the eye texton histogram, saved to the eye appearance feature database.
  • Step R14: For the color normalized human eye image of the human eye image to be registered, extract the eye semantic feature vector v^r_semantic based on differential filtering and ordinal measures, saved to the eye semantic feature database.
  • Identification step S: As shown in Figure 1(b), the acquired human eye image of the user to be identified is first preprocessed to obtain a normalized iris image and a normalized human eye image usable for feature extraction. The multimodal features are then extracted, each obtained feature is compared with the features in the database to obtain comparison scores, the final matching score is obtained by score-level fusion, and the recognition result is obtained with the nearest neighbor classifier. As shown in Figure 1(b), the step mainly includes the following steps:
  • Step S0: Preprocess the acquired human eye image of the user to be identified, including iris localization, iris normalization, eye region localization and eye region normalization, obtaining a normalized human eye image and iris image.
  • Step S1: Multimodal feature extraction from the human eye image, including the following steps: Step S11: for the normalized iris image to be identified, extract the iris texture feature vector v^s_texture with the sparse-coding method.
  • Step S12: For the normalized iris image to be identified, extract the iris color feature vector v^s_color with the color-histogram method.
  • Step S13: For the normalized human eye image to be identified, extract the eye appearance feature vector v^s_texton from the eye texton histogram.
  • Step S14: For the normalized human eye image to be identified, extract the eye semantic feature vector v^s_semantic based on differential filtering and ordinal measures.
  • Step S2: Multimodal feature vector comparison, including the following steps:
  • Step S21: Compute the reconstruction error between the iris texture feature vector v^s_texture of the probe image and the registered iris texture feature vectors of each class in the database as the comparison score S_texture.
  • Step S22: Compare the iris color feature vector v^s_color of the probe image with the iris color feature vectors of the registered images in the database, computing the Euclidean distance, to obtain the comparison score S_color.
  • Step S23: Compare the eye appearance feature vector v^s_texton of the probe image with the eye appearance feature vectors v^r_texton of the registered images in the database, computing the Euclidean distance, to obtain the comparison score S_texton. Step S24: Compare the eye semantic feature vector v^s_semantic of the probe image with the eye semantic feature vectors v^r_semantic of the registered images in the database, computing the XOR distance, to obtain the comparison score S_semantic.
  • Step S3: Multimodal comparison score fusion. The final comparison score is obtained through the adaptive score-level fusion strategy.
  • Step S4: Classification with the nearest neighbor classifier.
  • Image preprocessing:
  • Whether in registration or identification, the acquired original human eye image (shown in Figure 2(b)) must undergo iris localization (shown in Figure 4) to obtain the normalized human eye image (Figure 5) and the normalized iris image (Figure 6) usable for feature extraction, as shown in Figure 3. The specific steps are as follows:
  • Iris localization: on the acquired original eye image (shown in Figure 2(b), resolution 640 × 480, RGB image), two circles are used to fit the inner and outer iris boundaries, i.e. the pupil-iris boundary and the iris-sclera boundary
  • (Figure 4 shows the iris localization result on the preprocessed original iris image); iris localization can be performed with the integro-differential operator of U.S. Patent 5,291,560.
  • The integro-differential operation is performed on the grayscale image (shown in Figure 2(a)) to obtain the iris localization result.
  • The mathematical expression of the integro-differential operator is:
  • max_{(r, x0, y0)} | G_σ(r) * ∂/∂r ∮_{(r, x0, y0)} I(x, y) / (2πr) ds |,
  • where G_σ(r) is a Gaussian function with variance σ, I(x, y) is the iris image, and (r, x0, y0) are the parameters of a circle.
  • The integro-differential operator is a circular boundary detector.
  • Its basic idea is to search the circle parameter space (r, x0, y0) for the circular boundary defined by those parameters: a difference operation along the radius is performed first, then the contour integration; the resulting value is normalized by the circumference of the circle to obtain the integro-differential energy value for those parameters, and the parameter values with the largest energy in the parameter space are taken as the finally detected circle.
  • In an iris image both the pupil-iris boundary and the iris-sclera boundary are circular.
  • The two parameter sets with the largest integro-differential values are taken and distinguished by radius: the smaller radius is the localization result of the pupil-iris boundary, and the larger radius that of the iris-sclera boundary
  • (in the localization result shown in Figure 4, the two red circular areas are the iris localization results).
  • The iris size in different iris images is not the same; in addition, the pupil dilates or contracts as the light changes, so iris regions of different sizes need to be normalized before feature extraction. Since the inner and outer boundaries of the iris have already been obtained from iris localization,
  • iris normalization uses the rubber-sheet model of U.S. Patent 5,291,560.
  • The basic idea is to normalize the annular iris region of the original image to a fixed size through a Cartesian-to-polar coordinate transform.
  • This yields the rectangular color normalized iris image shown in Figure 6(b), resolution 512 × 66, color RGB image.
  • The image mapping uses linear interpolation. Angular sampling starts counterclockwise from a horizontal angle of 0 degrees (the red arc arrow in Figure 6(b)), and radial sampling starts from the pupil-iris boundary (the red radius arrow in Figure 6(b)). The red channel is then taken as the grayscale normalized iris image (shown in Figure 6(a), resolution 512 × 66).
  • Normalized human eye image acquisition: the iris center is located (shown in Figure 4) on the acquired original human eye image (shown in Figure 1, resolution 640 × 480, RGB image). From the fixed eye-center position on the normalized image and the iris radius, a fixed-size rectangular region is specified (shown in Figure 4; the center of the rectangle is the center of the outer iris boundary, the rectangle length is 4 times the iris radius, and the width is 3 times the iris radius) as the eye region of interest, and bilinear interpolation
  • maps the eye region of interest to a fixed-size region as the normalized human eye image. The red channel of the color normalized human eye image (shown in Figure 5(b), resolution 200 × 150) is taken as the grayscale normalized human eye image (Figure 5(a), resolution 200 × 150).
  • The features adopted in the method of the present invention specifically include iris texture features based on sparse coding, iris color features based on color histograms, eye appearance features based on SIFT texton histograms, and eye semantic features based on differential filters and ordinal measures.
  • The specific extraction steps are as follows:
  • The iris texture feature is the basic feature of the iris.
  • Feature representations based on sparse coding can overcome the influence of noise such as occlusion in face recognition and achieve a high recognition rate; therefore the iris texture feature extraction in the present invention also adopts sparse coding.
  • The basic idea of sparse coding is that any sample of a class can be obtained as a linear combination of a limited number of samples of that class itself. Given m iris classes, each class i containing n registered samples, all registered samples form the set A = {x_{1,1}, ..., x_{1,n}, ..., x_{i,1}, ..., x_{i,n}, ..., x_{m,1}, ..., x_{m,n}}, and for a probe iris sample y the optimization
  • â = argmin_a ||a||_1, subject to A a = y, is solved.
  • Each registered sample is the iris texture feature vector v^r_texture used for registration,
  • composed of the pixel gray values of the 4×-downsampled grayscale normalized iris image, of size 512 × 66 / 4 = 8448; the probe sample is likewise composed of the 4×-downsampled pixel gray values and has size 8448.
  • Solving the optimization yields the sparse coefficient vector, taken as the probe's iris texture feature vector v^s_texture, of size m × n.
  • Under the illumination of visible light, the iris exhibits color characteristics.
  • Color histograms are a common method of characterizing color features. The color features of the iris have localized regional characteristics, with different regions exhibiting different color distributions under ambient light illumination. Therefore, a block-histogram representation can better characterize the color of the iris.
  • The present invention converts the image to an expression in the lαβ color space and extracts the iris color features in the lαβ color space.
  • The color normalized iris image is divided into 3 × 1 small blocks, that is, 3 equal divisions in the vertical direction (as shown in Figure 7).
  • Each small block has size 22 × 512; on each of the three channels l, α, β, the frequency at which each color value appears is counted, and a histogram is created.
  • The color space size of each channel is 256.
  • The conversion from the RGB color space to the lαβ color space proceeds via the LMS space: [L, M, S]^T = [[0.3811, 0.5783, 0.0402], [0.1967, 0.7244, 0.0782], [0.0241, 0.1288, 0.8444]] [R, G, B]^T, followed by l = (log L + log M + log S)/√3, α = (log L + log M - 2 log S)/√6, β = (log L - log M)/√2.
  • The appearance features of the entire eye region also have a certain degree of discriminative power.
  • The texton (texture primitive) histogram is one of the most effective methods in texture analysis.
  • Its basic idea is that a texture pattern is composed of basic elements (textons); different pattern classes differ because the distributions of the basic elements differ.
  • The textons are acquired as follows.
  • A 128-dimensional feature vector is extracted with the SIFT local descriptor at every pixel of every channel of the normalized color human eye images used for training; K-means clustering then groups all the obtained local feature vectors into k sets, and the centers of the k sets are taken as k textons to construct the texton dictionary.
  • Three texton dictionaries are obtained on the three channels R, G, B.
  • The eye appearance feature construction process is shown in Figure 10(b).
  • Each color normalized human eye image is divided into 2 × 2 = 4 local blocks; for each color channel within each block,
  • a SIFT local feature vector is extracted at every pixel, and all obtained local feature vectors are quantized to the texton closest in distance.
  • The Euclidean distance is used to measure the distance between a local feature vector and a texton, and the frequency of each texton is then counted,
  • giving the texton histogram of the region on a single channel.
  • The left/right-eye label feature is used to characterize the eye semantic features. The left/right eye is marked with 0/1; specifically, the different left-right distributions of the upper-eyelid eyelashes are used for the marking.
  • The eye label is determined by comparing the density scores of the part near the lacrimal gland and the part away from the lacrimal gland. Given the color normalized human eye image, the extraction of the eye semantic features comprises eyelid fitting, region-of-interest selection, filter design, eyelash density estimation and semantic feature coding (Figure 11). Eyelid fitting: first the Canny edge detector is applied to the grayscale normalized human eye image to obtain edge information; then, based on the iris localization, boundary points in the upper-left and upper-right regions of the outer iris circle are selected for straight-line fitting, giving a rough fit of the upper eyelid (shown as white lines).
  • Region-of-interest selection: the regions of interest are selected according to the result of the eyelid fitting.
  • The black-framed area is the selected region of interest; the selection of the right region of interest is taken as an example.
  • O' is the center of the outer iris circle, and R is its radius. E_R is the intersection of the vertical diameter through O' with the straight line L_R fitted to the right side of the eyelid.
  • The segment from O' to E_R has length R, and P_R is its midpoint.
  • The selected region of interest is the black-framed rectangle of length R and width R/2, with its long side parallel to L_R.
  • The method of selecting the left region of interest is similar.
  • Filter design: two differential filters, left and right, are designed, corresponding to the left and right regions of interest. Taking the right filter as an example, its size and direction are the same as those of the right region of interest; as shown in the figure, the red area is entirely set to 1 and the blank part to 0. The direction of the differential filter is perpendicular to the line fitted to the right side of the eyelid, achieving orientation adaptivity. According to the iris radius, the filter length is set to R and the width to R/2, where R is the radius of the circle fitted to the outer iris boundary, achieving scale adaptivity. The left filter is obtained in a similar way.
  • Eyelash density estimation: the eyelash density of each region of interest is estimated on the color normalized human eye image, taking the right side as an example. On each color channel, the right filter is convolved with the right region of interest to obtain the response on that channel; the responses on the three R, G, B channels are added, and the final result is used as
  • the eyelash density estimate D_R of the right region of interest.
  • The eyelash density estimate D_L of the left region of interest is obtained by a similar method.
  • Matching strategy: in identification, the feature vector to be identified and the registered feature vectors need to be matched, based on the four features described above.
  • The original four comparison scores need to be normalized to the same scale range [0, 1].
  • There are a variety of score normalization methods in the literature, and min-max normalization is the simplest and most effective method. Given a set of comparison scores S = {s_1, s_2, ..., s_n},
  • min-max normalization is: s'_i = (s_i - min(S)) / (max(S) - min(S)).
  • A weighted summation strategy is used to perform the score-level fusion, giving the fused comparison score: S_f = w_1 S'_texture + w_2 S'_color + w_3 S'_texton.
  • The fused comparison score can be corrected to give the corrected fusion score S'_f, with the following correction criteria: if S'_semantic = 1 and S_f < M_1, then S'_f = M_1; if S'_semantic = 0 and S_f > M_2, then S'_f = M_2, with M_1 < M_2.
  • The meaning of the first criterion is that when the semantic features of the probe human eye image and the registered human eye image differ, yet the fusion result says the two images are similar, the method leans toward the images being dissimilar, and the original fused score is enlarged to M_1.
  • The meaning of the second criterion is that when the semantic features of the two are similar, but the fusion result of the other three features indicates dissimilarity, the method leans toward similarity, shrinking the original fused score to M_2.
  • The classifier used in this method is the nearest neighbor classifier, i.e. the class with the smallest matching score is the finally identified identity class.
  • Implementation case 1: application of the multi-feature fusion identity recognition method based on human eye images in an online trading platform.
  • The invention can be widely applied to webcam-based online platform identity authentication.
  • The development of e-commerce technology has brought network platform trading into social life, and online fraud has followed.
  • The security of the traditional password- and PIN-based authentication mode can hardly meet actual needs, and biometric technology has become an effective solution.
  • Identification based on human eye images can play an important role here.
  • When a user registers, the eye region information captured by an ordinary webcam is transmitted to a third-party authentication center.
  • The remote authentication center registers the biometric information of the user into the system database using the registration algorithm of the present invention.
  • When the user authenticates on the network platform, the webcam transmits the collected eye region information to the third-party authentication center.
  • The remote authentication center uses the identification algorithm of the present invention to search the system database for identity authentication. This method realizes identity authentication conveniently and effectively, thereby ensuring the security of personal identity information on the network platform.
  • Implementation case 2: application of the multi-feature fusion identity recognition method based on human eye images in a security surveillance scenario.
  • The invention can be widely applied to security surveillance scenarios.
  • In a security surveillance scenario, the people appearing in the scene must be monitored, and if an unauthorized person appears, an alarm must be raised promptly.
  • For example, a criminal has been arrested before; to prevent him from continuing to commit crimes, his eye region information is registered in the criminal system database. The criminal then commits another crime.
  • When he appears within the capture range of a networked surveillance camera, his eye region information is transmitted over the network to the processing terminal; the terminal determines his identity with the identification algorithm of the present invention and, if he is confirmed to be a criminal, promptly raises an alarm so that he can be brought to justice.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Ophthalmology & Optometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Description

Multi-feature fusion identity recognition method based on human eye images
Technical Field
The present invention relates to the fields of pattern recognition and statistical learning, and in particular to a multi-feature fusion identity recognition method based on human eye images.
Background Art
A person's identity is basic personal information, and its importance is self-evident. Traditional knowledge-based and token-based identity authentication technologies such as passwords, PINs and ID cards can hardly meet the needs of large-scale applications and high security levels, and they also inconvenience users. In today's society, intelligent, information-based systems are an inevitable trend of development, and large-scale identity authentication technology is of great significance to homeland security, public security, financial security and network security. Biometric recognition technology uses human physiological and behavioral characteristics for identity authentication; it is highly unique, easy to use and secure. Existing mainstream biometric modalities include the face, iris, voice, fingerprint, palm print, signature, gait and so on. The corresponding biometric systems have also been successfully applied in many fields of social life, such as access control and network security.
Most existing biometric technologies require a high degree of user cooperation. For example, most fingerprint and palm-print devices are contact-based, and even contactless devices still require users to cooperate in a fixed manner. On the one hand this inconveniences users and lowers the recognition efficiency of the system, making it hard to meet the low-response-time, high-throughput demands of large-scale recognition scenarios such as airports, customs and railway stations. On the other hand, such heavy cooperation confines the system to a passive recognition mode, in which the sensor can only passively receive data; most security scenarios instead require active recognition, i.e. the sensor must actively acquire user information with little or even no user cooperation, for example performing real-time identification of the people in a surveillance scene without their active cooperation. Although some modalities, such as face and gait, allow active identification without user cooperation, their recognition accuracy can hardly meet practical needs. The human eye region mainly comprises the pupil, iris, eyelids, periocular skin, eyebrows and eyelashes. Among these, iris recognition, based on the uniqueness of iris texture, has become one of the most effective biometrics; related recognition systems have not only been successfully applied in banks, coal mines, customs, airports and other places, but have also played an important role in social issues such as welfare distribution and the search for missing children. Besides iris texture, the skin texture around the eye has recently been shown in the literature to have good discriminative power and can be used for identification. Moreover, under visible light the iris and the periocular skin exhibit color characteristics that can serve as auxiliary features. Beyond appearance features, the eye region also carries strong semantic features, such as left/right eye, single/double eyelid and eyelid contour, which also have a certain classification ability. The multi-feature nature of the eye region thus makes it one of the most discriminative biometrics.
Besides its high discriminative power, eye-region biometrics are also among the easiest to use and popularize. The eye region is the most critical part of the face. The human eye is the visual organ that perceives the external world, which makes the eye region usually externally visible; even when the face is partially occluded, the eye region is the least occluded area. The eye region is therefore the easiest to capture with visual sensors such as cameras. With the rapid development of optical imaging technology, active imaging systems have become possible; related systems can acquire clear human eye images at distances of more than ten or even tens of meters. All of this enables eye-image-based identity authentication to achieve friendly human-computer interaction and active recognition.
In addition, identity authentication based on the eye region is highly robust. In practical applications, iris recognition alone is easily limited by the application scenario; for example, some eye diseases affect the iris texture and make it unusable for iris recognition. Because the eye region contains multiple biometric modalities, such as the iris and skin texture, multimodal biometrics can meet the needs of a variety of application scenarios.
Among existing patents, iris recognition systems based on iris uniqueness are the main method of identity authentication using eye-region information, and the other features of the eye are not used. Moreover, existing iris recognition patents, such as the Gabor-filter feature-encoding algorithm proposed by Dr. Daugman of the University of Cambridge, UK (U.S. Patent 5,291,560) and the iris-blob analysis method of Tan Tieniu and colleagues at the Institute of Automation, Chinese Academy of Sciences (CN 1684095), all perform identification by analyzing local properties of the iris texture of the human eye; in practice they are susceptible to noise and depend on the accuracy of iris segmentation. The method of the present invention uses the global features of the iris with a sparse-coding-based recognition method, which can effectively overcome the influence of noise and requires no extra segmentation-based noise detection. In addition, traditional score-level fusion neither considers the influence of score distributions and data noise nor fully exploits the complementarity between the modalities.
Summary of the Invention
The object of the present invention is to provide a multi-feature fusion identity recognition method based on human eye images. To achieve this object, a multi-feature fusion identity recognition method based on human eye images comprises registration and identification, wherein the registration comprises:
for a given enrollment human eye image, obtaining a normalized human eye image and a normalized iris image; extracting multimodal features from the human eye image of the user to be registered, and saving the multimodal features of the human eye image as registration information to a registration database;
the identification comprises: for a given probe human eye image, obtaining a normalized human eye image and a normalized iris image; extracting multimodal features from the human eye image of the user to be identified;
comparing the extracted multimodal features with the multimodal features in the database to obtain comparison scores, and obtaining a fused score through score-level fusion;
performing multi-feature fusion identity recognition of the human eye image with a classifier.
The invention fuses multiple kinds of feature information from the eye region of the face for identity authentication; the recognition accuracy of the system is high, and it can be used in application sites with a high security level. The invention reduces the degree of user cooperation required and can be used for long-distance, active identity authentication. The invention applies not only to human eye images under visible light but also to human eye images under other monochromatic light.
Brief Description of the Drawings
Figure 1(a) shows the registration process of the multi-feature fusion identity recognition method based on human eye images; Figure 1(b) shows the identification process of the method; Figure 2(a) shows a grayscale human eye image;
Figure 2(b) shows a color human eye image;
Figure 3 shows the human eye image preprocessing pipeline of the multi-feature fusion identity recognition method based on human eye images;
Figure 4 shows the iris region localization result and the eye region localization result of a human eye image; Figure 5(a) shows a grayscale normalized human eye image;
Figure 5(b) shows a color normalized human eye image;
Figure 6(a) shows a grayscale normalized iris image;
Figure 6(b) shows a color normalized iris image;
Figure 7 shows the block partition of the color normalized iris image in iris color feature extraction; Figure 8 shows the block partition of the color normalized human eye image in eye appearance feature extraction; Figure 9 shows the region-of-interest selection and filter design results in eye semantic feature extraction; Figure 10(a) shows the texton training process;
Figure 10(b) shows the texton histogram construction process;
Figure 11 shows the eye semantic feature extraction process.
Detailed Description of the Embodiments
To make the objects, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
The multi-feature fusion identity recognition method based on human eye images proposed by the invention comprises a registration step and an identification step:
Registration step R: As shown in Figure 1(a), the acquired human eye image of the user to be registered is first preprocessed (R0) to obtain a normalized iris image and a normalized human eye image usable for feature extraction. The feature extraction methods are then applied to the normalized images to extract the multimodal features, and the registration information of the human eye image is obtained and saved to the registration database. The step mainly comprises the following steps: Step R0: Preprocess the acquired human eye image of the user to be registered, including iris localization, iris normalization, eye region localization and eye region normalization, obtaining the normalized human eye image and iris image.
Step R11: For the grayscale normalized iris image of the human eye image to be registered, downsample it and arrange all pixel values row by row into the iris texture feature vector v^r_texture, which is saved to the iris texture feature database.
Step R12: For the color normalized iris image of the human eye image to be registered, extract the iris color feature vector v^r_color with the color-histogram method and save it to the iris color feature database. Step R13: For the color normalized human eye image of the human eye image to be registered, extract the eye appearance feature vector v^r_texton from the eye texton histogram and save it to the eye appearance feature database.
Step R14: For the color normalized human eye image of the human eye image to be registered, extract the eye semantic feature vector v^r_semantic based on differential filtering and ordinal measures, and save it to the eye semantic feature database.
Identification step S: As shown in Figure 1(b), the acquired human eye image of the user to be identified is first preprocessed to obtain a normalized iris image and a normalized human eye image usable for feature extraction. The feature extraction methods are then used to extract the multimodal features; the matching methods compare the obtained features with the features in the database to obtain the comparison scores; score-level fusion yields the final matching score; and the recognition result is obtained with the nearest neighbor classifier. As shown in Figure 1(b), the step mainly comprises the following steps:
Step S0: Preprocess the acquired human eye image of the user to be identified, including iris localization, iris normalization, eye region localization and eye region normalization, obtaining the normalized human eye image and iris image.
Step S1: Multimodal feature extraction from the human eye image, comprising the following steps:
Step S11: For the normalized iris image to be identified, extract the iris texture feature vector v^s_texture with the sparse-coding method.
Step S12: For the normalized iris image to be identified, extract the iris color feature vector v^s_color with the color-histogram method.
Step S13: For the normalized human eye image to be identified, extract the eye appearance feature vector v^s_texton from the eye texton histogram.
Step S14: For the normalized human eye image to be identified, extract the eye semantic feature vector v^s_semantic based on differential filtering and ordinal measures.
Step S2: Multimodal feature vector comparison, comprising the following steps:
Step S21: Compute the reconstruction error between the iris texture feature vector v^s_texture of the probe image and the registered iris texture feature vectors of each class in the database as the comparison score S_texture.
Step S22: Compare the iris color feature vector v^s_color of the probe image with the iris color feature vectors v^r_color of the registered images in the database, computing the Euclidean distance, to obtain the comparison score S_color.
Step S23: Compare the eye appearance feature vector v^s_texton of the probe image with the eye appearance feature vectors v^r_texton of the registered images in the database, computing the Euclidean distance, to obtain the comparison score S_texton. Step S24: Compare the eye semantic feature vector v^s_semantic of the probe image with the eye semantic feature vectors v^r_semantic of the registered images in the database, computing the XOR distance, to obtain the comparison score S_semantic.
Step S3: Multimodal comparison score fusion. The final comparison score S_f is obtained through the adaptive score-level fusion strategy.
Step S4: Classification with the nearest neighbor classifier.
Image preprocessing:
In both the registration and the identification process, the acquired original human eye image (shown in Figure 2(b)) must undergo iris localization (shown in Figure 4) to obtain the normalized human eye image (Figure 5) and the normalized iris image (Figure 6) usable for feature extraction, as shown in Figure 3. The specific steps are as follows:
Iris localization. On the acquired original human eye image (shown in Figure 2(b), resolution 640 × 480, RGB image), two circles are used to fit the inner and outer iris boundaries, i.e. the pupil-iris boundary and the iris-sclera boundary (Figure 4 shows the iris localization result on the preprocessed original iris image). Iris localization can be performed with the integro-differential operator of U.S. Patent 5,291,560, applied to the grayscale image (shown in Figure 2(a)). The mathematical expression of the integro-differential operator is:

max_{(r, x0, y0)} | G_σ(r) * ∂/∂r ∮_{(r, x0, y0)} I(x, y) / (2πr) ds |,

where G_σ(r) is a Gaussian function with variance σ, I(x, y) is the iris image, and (r, x0, y0) are the parameters of a circle. The integro-differential operator is a circular boundary detector. Its basic idea is to search the circle parameter space (r, x0, y0) for the circular boundary defined by those parameters: a difference operation along the radius is performed first, then the contour integration; the resulting value is normalized by the circumference of the circle, giving the integro-differential energy value for those parameters, and the parameter values with the largest energy in the parameter space are taken as the finally detected circle. In an iris image the pupil-iris boundary and the iris-sclera boundary are both circular, so the two parameter sets with the largest integro-differential values are generally taken and distinguished by radius: the smaller radius is the localization result of the pupil-iris boundary, and the larger radius that of the iris-sclera boundary (in Figure 4, the two red circular areas are the iris localization results).
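As an illustration, a minimal Python sketch of this parameter-space search follows; the candidate-center grid, the radius range and the 64-point circular sampling are assumptions chosen for brevity, not values prescribed above.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def circle_mean(img, x0, y0, r, n=64):
        # mean gray value along the circle (x0, y0, r): the contour integral
        # normalized by the circumference
        t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        xs = np.clip(np.round(x0 + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip(np.round(y0 + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
        return float(img[ys, xs].mean())

    def integro_differential(img, centers, radii, sigma=1.0):
        # find (x0, y0, r) maximizing |G_sigma(r) * d/dr of the circular integral|
        best_energy, best_circle = -np.inf, None
        for x0, y0 in centers:
            means = np.array([circle_mean(img, x0, y0, r) for r in radii])
            energy = np.abs(gaussian_filter1d(np.diff(means), sigma))
            i = int(np.argmax(energy))
            if energy[i] > best_energy:
                best_energy, best_circle = energy[i], (x0, y0, radii[i + 1])
        return best_circle

    # running the search twice, over small and large radius ranges, yields the
    # pupil-iris and iris-sclera circles respectively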
Normalized iris image acquisition. The iris size differs between iris images; moreover, the pupil dilates or contracts as the light changes, so iris regions of different sizes need to be normalized before feature extraction. Since the inner and outer iris boundaries have already been obtained from iris localization (Figure 4), iris normalization uses the rubber-sheet model of U.S. Patent 5,291,560. The basic idea is to normalize the annular iris region of the original image, through a Cartesian-to-polar coordinate transform, to a rectangular color normalized iris image of fixed size (shown in Figure 6(b), resolution 512 × 66, color RGB image), using linear interpolation for the image mapping. Angular sampling starts at a horizontal angle of 0 degrees and proceeds counterclockwise (the red arc arrow in Figure 6(b)); radial sampling starts from the pupil-iris boundary (the red radius arrow in Figure 6(b)). The red channel is then taken as the grayscale normalized iris image (shown in Figure 6(a), resolution 512 × 66).
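A minimal sketch of this unwrapping, assuming the pupil and outer iris circles returned by the localization step; for brevity it uses nearest-pixel sampling where the text above specifies linear interpolation.

    import numpy as np

    def rubber_sheet(img, pupil, iris, out_w=512, out_h=66):
        # map the annulus between the pupil circle and the outer iris circle to
        # an out_h x out_w rectangle (Cartesian -> polar, "rubber sheet")
        (px, py, pr), (ix, iy, ir) = pupil, iris
        theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)  # angular samples
        rho = np.linspace(0.0, 1.0, out_h)                            # radial samples
        out = np.zeros((out_h, out_w) + img.shape[2:], dtype=img.dtype)
        for j, t in enumerate(theta):
            # boundary points at this angle on the inner and outer circles
            x_in, y_in = px + pr * np.cos(t), py + pr * np.sin(t)
            x_out, y_out = ix + ir * np.cos(t), iy + ir * np.sin(t)
            for i, r in enumerate(rho):
                x = min(max(int(round((1.0 - r) * x_in + r * x_out)), 0), img.shape[1] - 1)
                y = min(max(int(round((1.0 - r) * y_in + r * y_out)), 0), img.shape[0] - 1)
                out[i, j] = img[y, x]
        return out

The red channel of the returned RGB rectangle then gives the grayscale normalized iris image.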
Normalized human eye image acquisition. The iris center is located (shown in Figure 4) on the acquired original human eye image (shown in Figure 1, resolution 640 × 480, RGB image). From the fixed eye-center position on the normalized image and the iris radius R, a fixed-size rectangular region is defined (shown in Figure 4; the center of the rectangle is the center of the outer iris boundary, the rectangle length is 4 times the iris radius, and the width is 3 times the iris radius) as the eye region of interest; bilinear interpolation maps the eye region of interest to a fixed-size region, which serves as the normalized human eye image. The red channel of the color normalized human eye image (shown in Figure 5(b), resolution 200 × 150) is taken as the grayscale normalized human eye image (shown in Figure 5(a), resolution 200 × 150).
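A sketch of the region-of-interest cropping under the geometry stated above (a 4R × 3R rectangle centered on the outer iris circle, rescaled to 200 × 150); nearest-pixel resampling stands in for the bilinear interpolation.

    import numpy as np

    def normalized_eye_image(img, iris_center, iris_radius, out_w=200, out_h=150):
        # crop the 4R x 3R rectangle centered on the outer iris circle and
        # rescale it to out_w x out_h
        cx, cy = iris_center
        half_w, half_h = 2.0 * iris_radius, 1.5 * iris_radius
        xs = np.clip(np.linspace(cx - half_w, cx + half_w, out_w).astype(int),
                     0, img.shape[1] - 1)
        ys = np.clip(np.linspace(cy - half_h, cy + half_h, out_h).astype(int),
                     0, img.shape[0] - 1)
        return img[np.ix_(ys, xs)]   # works for grayscale and RGB arrays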
Feature extraction:
After the normalized iris image and the normalized human eye image are obtained, features are extracted. The features adopted in the method of the invention specifically comprise iris texture features based on sparse coding, iris color features based on color histograms, eye appearance features based on SIFT texton histograms, and eye semantic features based on differential filters and ordinal measures. The specific extraction steps are as follows:
The iris texture feature is the basic feature of the iris. Sparse-coding-based feature representations can overcome noise such as occlusion in face recognition and reach a very high recognition rate, so the iris texture feature extraction in the invention also adopts sparse coding. The basic idea of sparse coding is that any sample of a class can be obtained as a linear combination of a limited number of samples of that class itself. Given m iris classes, each class i containing n registered samples, all registered samples form the set A = {x_{1,1}, ..., x_{1,n}, ..., x_{i,1}, ..., x_{i,n}, ..., x_{m,1}, ..., x_{m,n}}. For an iris probe sample y the following optimization is solved:

â = argmin_a ||a||_1, subject to A a = y,

where each registered sample is the iris texture feature vector v^r_texture used for registration, composed of the pixel gray values of the 4×-downsampled grayscale normalized iris image to be registered, of size 512 × 66 / 4 = 8448; y is the sample to be identified, composed of the pixel gray values of the 4×-downsampled grayscale normalized iris image to be identified, also of size 8448. Solving this optimization problem yields the sparse coefficient vector â, which is taken as the iris texture feature vector v^s_texture of the probe, of size m × n.
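The l1 minimization can be posed as a standard linear program via the split a = u - v with u, v >= 0. The sketch below recovers the sparse coefficient vector and the per-class reconstruction errors used later as comparison scores; it assumes a toy-scale dictionary, since the 8448-dimensional vectors above would call for a dedicated large-scale l1 solver.

    import numpy as np
    from scipy.optimize import linprog

    def sparse_code(A, y):
        # min ||a||_1 s.t. A a = y, rewritten as min sum(u + v) with
        # A u - A v = y and u, v >= 0; A is the (d, m*n) dictionary whose
        # columns are registered iris texture vectors, y is the (d,) probe
        d, k = A.shape
        res = linprog(np.ones(2 * k), A_eq=np.hstack([A, -A]), b_eq=y,
                      bounds=[(0, None)] * (2 * k))
        u, v = res.x[:k], res.x[k:]
        return u - v                      # sparse coefficient vector a_hat

    def reconstruction_scores(A, a, y, n_per_class):
        # comparison score per class: ||y - A_i a_i||_2 (smaller = more similar)
        k = A.shape[1]
        scores = []
        for start in range(0, k, n_per_class):
            sel = np.zeros(k)
            sel[start:start + n_per_class] = 1.0
            scores.append(np.linalg.norm(y - A @ (a * sel)))
        return np.array(scores)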
Under the illumination of visible light, the iris exhibits color characteristics, and the color histogram is a common method of characterizing color features. The color features of the iris have localized regional characteristics: different regions show different color distributions under ambient light. A block-histogram representation therefore characterizes the color features of the iris better. For RGB color human eye images, the invention converts the image to the lαβ color space and extracts the iris color features there. For example, the color normalized iris image is divided into 3 × 1 blocks, i.e. into 3 equal parts in the vertical direction (Figure 7), each block of size 22 × 512. On each of the three channels l, α, β, the frequency of each color value is counted and a histogram is built; the color space size of each channel is 256. The 9 histograms of the 3 channels and 3 sub-blocks are then concatenated into the color histogram serving as the iris color feature vector v^s_color, of size 256 × 3 × 9 = 6912. The conversion from the RGB color space to the lαβ color space is:

[L]   [0.3811  0.5783  0.0402] [R]
[M] = [0.1967  0.7244  0.0782] [G]
[S]   [0.0241  0.1288  0.8444] [B]

l = (1/√3)(log L + log M + log S)
α = (1/√6)(log L + log M - 2 log S)
β = (1/√2)(log L - log M)
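A sketch of the block color histogram follows; the per-channel quantization to 256 levels and the normalization of each histogram are implementation choices, and the conversion matrix above is assumed.

    import numpy as np

    # RGB -> LMS matrix of the conversion given above
    RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                        [0.1967, 0.7244, 0.0782],
                        [0.0241, 0.1288, 0.8444]])

    def rgb_to_lab(img):
        # img: (H, W, 3) float RGB in (0, 1]; returns the l, alpha, beta channels
        lms = np.tensordot(img, RGB2LMS.T, axes=1)
        L, M, S = [np.log(np.clip(lms[..., i], 1e-6, None)) for i in range(3)]
        return ((L + M + S) / np.sqrt(3.0),
                (L + M - 2.0 * S) / np.sqrt(6.0),
                (L - M) / np.sqrt(2.0))

    def block_color_histogram(iris_rect, n_blocks=3, bins=256):
        # concatenated per-block, per-channel histograms; the 66 x 512 rectangle
        # is split into 3 blocks of 22 x 512 along the vertical axis
        feats = []
        for ch in rgb_to_lab(iris_rect):
            q = np.digitize(ch, np.linspace(ch.min(), ch.max(), bins))
            for block in np.array_split(q, n_blocks, axis=0):
                hist, _ = np.histogram(block, bins=bins, range=(0, bins))
                feats.append(hist / max(hist.sum(), 1))
        return np.concatenate(feats)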
Besides the iris features, the appearance of the entire eye region also has a certain degree of discriminative power. The invention uses eye texture and skin texture together as a unified eye appearance feature for identification. The texton (texture primitive) histogram is one of the most effective methods in texture analysis; its basic idea is that a texture pattern is composed of basic elements (textons), and different classes of pattern differ in the distribution of these basic elements. As shown in Figure 10(a), textons are acquired as follows. A 128-dimensional feature vector is extracted with the SIFT local descriptor at every pixel of every channel of the normalized color human eye images used for training; K-means clustering then groups all the obtained local feature vectors into k sets, and the centers of the k sets are taken as k textons to build the texton dictionary. Three texton dictionaries are obtained on the three channels R, G, B. The eye appearance feature is constructed as shown in Figure 10(b): each color normalized human eye image is divided equally into 2 × 2 = 4 local blocks; at every pixel of every color channel in each block a SIFT local feature vector is extracted, and every obtained local feature vector is quantized to the nearest texton, with the Euclidean distance generally measuring the distance between a local feature vector and a texton. The frequency of each texton is then counted, giving the texton histogram of the region on a single channel. Finally, all texton histograms (3 channels × 4 local blocks = 12) are concatenated into the eye appearance feature vector v^s_texton, of size k × 12. For the SIFT computation see U.S. Patent 6,711,293.
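A sketch of the dictionary training and histogram construction follows, with SciPy's K-means standing in for the clustering step; the dense SIFT descriptors (one 128-dimensional vector per pixel) are assumed to be computed elsewhere, e.g. with an OpenCV dense-SIFT pass.

    import numpy as np
    from scipy.cluster.vq import kmeans2, vq

    def train_texton_dictionary(descriptors, k=64):
        # descriptors: (N, 128) SIFT descriptors pooled from the training
        # images of one color channel; returns the (k, 128) texton dictionary
        centers, _ = kmeans2(descriptors.astype(float), k, minit='++')
        return centers

    def texton_histogram(descriptors, dictionary):
        # quantize each descriptor to its nearest texton (Euclidean) and count
        labels, _ = vq(descriptors.astype(float), dictionary)
        hist = np.bincount(labels, minlength=len(dictionary)).astype(float)
        return hist / max(hist.sum(), 1.0)

    def eye_appearance_feature(block_descriptors, dictionaries):
        # block_descriptors[c][b]: descriptors of block b (2 x 2 = 4 blocks) on
        # channel c (R, G, B); dictionaries[c]: texton dictionary of channel c;
        # the result is the concatenated k x 12 feature vector
        return np.concatenate([
            texton_histogram(block_descriptors[c][b], dictionaries[c])
            for c in range(3) for b in range(4)])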
The left/right-eye label feature is used to characterize the eye semantic features: the left/right eye is marked with 0/1. Specifically, the different left-right distributions of the upper-eyelid eyelashes are used for the marking: the eye label is determined by comparing the eyelash density scores of the part near the lacrimal gland and the part away from the lacrimal gland. Given the color normalized human eye image, the extraction of the eye semantic features comprises eyelid fitting, region-of-interest selection, filter design, eyelash density estimation and semantic feature encoding (Figure 11).
Eyelid fitting: First the Canny edge detector is applied to the grayscale normalized human eye image to obtain the edge information; then, based on the iris localization, boundary points in the upper-left and upper-right regions of the outer iris circle are selected for straight-line fitting, giving a rough fit of the upper eyelid (the white lines in the figure).
Region-of-interest selection: The regions of interest are selected according to the result of the eyelid fitting. As shown in the figure, the black-framed areas are the selected regions of interest; take the selection of the right region of interest as an example. O' is the center of the outer iris circle and R is its radius; E_R is the intersection of the vertical diameter through O' with the line L_R fitted to the right side of the eyelid. The segment from O' to E_R has length R, and P_R is its midpoint. The selected region of interest is the black-framed rectangle of length R and width R/2, with its long side parallel to L_R. The left region of interest is selected in a similar way.
Filter design: Two differential filters, left and right, are designed, corresponding to the left and right regions of interest. Taking the right filter as an example, its size and orientation are the same as those of the right region of interest; as shown in the figure, the red area is entirely set to 1 and the blank part to 0. The direction of the differential filter is perpendicular to the line fitted to the right side of the eyelid, achieving orientation adaptivity. According to the iris radius, the filter length is set to R and the width to R/2, where R is the radius of the circle fitted to the outer iris boundary, achieving scale adaptivity. The left filter is obtained in a similar way.
Eyelash density estimation: On the color normalized human eye image, the eyelash density of each region of interest is estimated; take the right side as an example. On each color channel, the right filter is convolved with the right region of interest to obtain the response on that channel; the responses on the three channels R, G, B are added, and the final result is taken as the eyelash density estimate D_R of the right region of interest. The eyelash density estimate D_L of the left region of interest is obtained by a similar method.
Semantic feature encoding: The eye semantic feature vector v^s_semantic is generated through the ordinal measure: if D_L > D_R, then v^s_semantic = 1, otherwise v^s_semantic = 0.
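Because the binary filter has the same size as the region of interest, the "convolution" at the region's own location reduces to a masked sum per channel. A sketch of the density estimation and the ordinal encoding, with the rotated rectangular masks assumed to come from the filter-design step:

    import numpy as np

    def eyelash_density(roi_rgb, mask):
        # sum over R, G, B of the response of the binary differential filter
        # (mask of 0s and 1s with the same shape as the region of interest)
        return float(sum((roi_rgb[..., c] * mask).sum() for c in range(3)))

    def left_right_eye_bit(left_roi, left_mask, right_roi, right_mask):
        # ordinal encoding: 1 if the left region is denser in eyelashes
        d_left = eyelash_density(left_roi, left_mask)
        d_right = eyelash_density(right_roi, right_mask)
        return 1 if d_left > d_right else 0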
Matching strategy:
In identification, the feature vector to be identified must be matched against the registered feature vectors, based on the four features described above. The eye semantic feature is matched with the logical XOR distance:

S_semantic = XOR(v^s_semantic, v^r_semantic).

The iris color feature and the eye appearance feature are matched with the Euclidean distance:

S = d(v_1, v_2) = ||v_1 - v_2||_2.

For the iris texture feature, given the probe sample y and the registered sample feature vectors X = {x_{1,1}, x_{1,2}, ..., x_{1,n}, ..., x_{i,j}, ..., x_{m,1}, x_{m,2}, ..., x_{m,n}}, sparse coding yields the probe's iris texture feature vector v^s_texture = {a_{1,1}, a_{1,2}, ..., a_{i,j}, ..., a_{m,1}, ..., a_{m,n}}. Using each class i's reconstruction coefficients a_i = {a_{i,1}, ..., a_{i,n}} and samples {x_{i,1}, ..., x_{i,n}}, a reconstructed probe sample ŷ_i is obtained; the matching score of the probe against all samples of class i is the reconstruction error

S_texture,i = ||y - ŷ_i||_2, where ŷ_i = Σ_j a_{i,j} x_{i,j}.

Note that the smaller any of the above four comparison scores, the more similar the registered and probe features. After the matching scores of the four features, S_texture, S_color, S_texton and S_semantic, are obtained, an adaptive score-level fusion strategy produces the final comparison score. The adaptive score-level fusion comprises three stages: score normalization, weighted summation, and adaptive adjustment.
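A minimal sketch of the Euclidean and XOR comparison scores:

    import numpy as np

    def euclidean_score(v1, v2):
        # comparison score for the iris color and eye appearance features
        return float(np.linalg.norm(np.asarray(v1, float) - np.asarray(v2, float)))

    def xor_score(bit1, bit2):
        # comparison score for the one-bit eye semantic feature
        return int(bit1) ^ int(bit2)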
Before score fusion, the four original comparison scores must be normalized to the same scale range [0, 1]; the smaller a comparison score, the more similar the registered and probe features. There are a variety of score normalization methods in the literature, and min-max normalization is the simplest and most effective. Given a set of comparison scores S = {s_1, s_2, ..., s_n}, min-max normalization is:

s'_i = (s_i - min(S)) / (max(S) - min(S)).

The four original matching scores are normalized in this way, giving the four normalized scores S'_texture, S'_color, S'_texton and S'_semantic.
After score normalization, a weighted summation strategy performs the score-level fusion, giving the fused comparison score:

S_f = w_1 S'_texture + w_2 S'_color + w_3 S'_texton,

where the w_i (i = 1, 2, 3, w_1 + w_2 + w_3 = 1) are weights, generally taken equal, meaning that every feature has the same importance.
For the fused comparison score, in order to remove the influence of noise, the fused score can be corrected according to the eye semantic feature matching score, giving the corrected fusion score S'_f. The correction criteria are:

if S'_semantic = 1 and S_f < M_1, then S'_f = M_1;
if S'_semantic = 0 and S_f > M_2, then S'_f = M_2, with M_1 < M_2.

The meaning of the first criterion is that when the semantic features of the probe human eye image and the registered human eye image differ, yet the fusion result says the two images are similar, the method leans toward the two images being dissimilar, and the original fused score is enlarged to M_1. The meaning of the second criterion is that when the semantic features of the two are similar, but the fusion result of the other three features indicates dissimilarity, the method leans toward similarity, and the original fused score is shrunk to M_2. The classifier used in this method is the nearest neighbor classifier: the class with the smallest matching score is the finally identified identity class.
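A sketch of the three fusion stages applied over a gallery, one score per registered class; the equal weights and the placeholder thresholds are assumptions, since M_1 and M_2 are left above as design parameters.

    import numpy as np

    def min_max(scores):
        # min-max normalization of a score vector to [0, 1]
        s = np.asarray(scores, dtype=float)
        return (s - s.min()) / max(s.max() - s.min(), 1e-12)

    def fuse_scores(s_texture, s_color, s_texton, s_semantic,
                    w=(1 / 3, 1 / 3, 1 / 3), m1=0.4, m2=0.6):
        # normalize each modality, weight-sum three of them, then apply the
        # semantic correction; smaller scores mean "more similar"
        st, sc, sx = min_max(s_texture), min_max(s_color), min_max(s_texton)
        fused = w[0] * st + w[1] * sc + w[2] * sx
        out = fused.copy()
        for i, sem in enumerate(np.asarray(s_semantic)):
            if sem == 1 and fused[i] < m1:    # semantics differ, fusion says similar
                out[i] = m1                   # push toward dissimilar
            elif sem == 0 and fused[i] > m2:  # semantics agree, fusion says dissimilar
                out[i] = m2                   # pull toward similar
        return out

    # nearest neighbor decision: identity = int(np.argmin(fuse_scores(...)))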
Implementation case 1: Application of the multi-feature fusion identity recognition method based on human eye images in an online trading platform.
The invention can be widely applied to webcam-based online platform identity authentication. The development of e-commerce technology has brought network platform trading into social life, and online fraud has followed. The security of the traditional password- and PIN-based authentication mode can hardly meet actual needs, and biometric recognition technology has become an effective solution. Identity recognition based on human eye images can play an important role here. When a user registers, an ordinary webcam transmits the user's eye region information to a third-party authentication center. The remote authentication center registers the user's biometric information into the system database using the registration algorithm of the present invention. When the user performs identity authentication on the network platform, the webcam transmits the captured eye region information to the third-party authentication center. The remote authentication center uses the identification algorithm of the present invention to search the system database and perform identity authentication. This method realizes identity authentication conveniently and effectively, thereby guaranteeing the security of personal identity information on the network platform.
Implementation case 2: Application of the multi-feature fusion identity recognition method based on human eye images in a security surveillance scenario.
The invention can be widely applied to security surveillance scenarios. In a security surveillance scenario, the people appearing in the scene must be controllable; if an unauthorized person appears, an alarm must be raised promptly. For example: a criminal was once arrested, and to prevent him from committing further crimes, his eye region information was registered in the criminal system database; the criminal then commits another crime. When he appears within the capture range of a networked surveillance camera, his eye region information is transmitted over the network to the processing terminal. The processing terminal determines his identity with the identification algorithm of the present invention; if he is confirmed to be a criminal, an alarm is raised promptly and he is brought to justice.
What is described above is only specific embodiments of the present invention, but the protection scope of the invention is not limited thereto. Within the technical scope disclosed by the invention, any person skilled in the art may make various changes or substitutions according to the disclosed content, and all such changes and substitutions shall be covered by the scope of the disclosure. The protection scope of the invention shall therefore be subject to the protection scope of the claims.

Claims

1. A multi-feature fusion identity recognition method based on human eye images, comprising registration and identification, wherein the registration comprises:
for a given enrollment human eye image, obtaining a normalized human eye image and a normalized iris image; extracting multimodal features from the human eye image of the user to be registered, and saving the multimodal features of the human eye image as registration information to a registration database;
the identification comprises: for a given probe human eye image, obtaining a normalized human eye image and a normalized iris image; extracting multimodal features from the human eye image of the user to be identified;
comparing the extracted multimodal features with the multimodal features in the database to obtain comparison scores, and obtaining a fused score through score-level fusion;
performing multi-feature fusion identity recognition of the human eye image with a classifier.
2. The method according to claim 1, wherein the multimodal features comprise iris texture features, iris color features, eye appearance features and eye semantic features.
3. The method according to claim 1, further comprising image preprocessing, wherein the image preprocessing comprises iris image preprocessing and human eye image preprocessing.
4. The method according to claim 3, wherein the iris image preprocessing comprises iris localization and iris image normalization.
5. The method according to claim 4, wherein the iris localization adopts a two-circle fitting method, fitting the pupil-iris boundary and the iris-sclera boundary with two circles respectively.
6. The method according to claim 4, wherein the iris image normalization comprises: mapping the original annular iris region to a rectangular region of fixed size through a Cartesian-to-polar coordinate transform.
7. The method according to claim 3, wherein the human eye image preprocessing comprises eye region localization and eye region normalization.
8. The method according to claim 7, wherein the eye region localization comprises: the eye region is a rectangular region whose center is the center of the circle obtained by fitting the iris-sclera boundary.
9. The method according to claim 7, wherein the eye region normalization comprises: scaling the original eye region to a rectangular region of fixed size.
10. The method according to claim 2, wherein the iris texture features are extracted through sparse coding.
11. The method according to claim 2, wherein the iris color features are extracted through color histograms.
12. The method according to claim 2, wherein the eye appearance features are extracted through a texton representation.
13. The method according to claim 2, wherein the eye semantic features are extracted through differential filters and ordinal measures.
14. The method according to claim 11, wherein a color histogram is extracted on each color channel of each image block, and all the color histograms are concatenated into the iris color feature vector.
15. The method according to claim 11, wherein the color histograms are extracted in the lαβ color space.
16. The method according to claim 2, wherein texton histograms are used to extract the eye appearance features.
17. The method according to claim 16, wherein extracting the eye appearance features with texton histograms comprises the steps of:
extracting a local feature at every pixel of every image with the scale-invariant feature transform (SIFT);
obtaining K textons through K-means clustering and constructing a texton dictionary;
partitioning the normalized iris image into non-overlapping blocks, counting the frequency of each texton, and building the texton histograms;
concatenating the texton histograms into the eye appearance feature vector.
18. The method according to claim 17, wherein, if each block image is a color image, the frequency of each texton is counted on each color channel.
19. The method according to claim 2, wherein the eye semantic feature extraction comprises the steps of:
obtaining the approximate position of the upper-eyelid eyelash region from the results of the iris localization and the eyelid localization;
selecting the left and right parts of the upper-eyelid eyelash region as regions of interest according to the positions of the upper-eyelid eyelash region and the iris;
generating left and right scale- and orientation-adaptive differential filters according to the iris radius and the direction of the upper eyelid;
convolving the left adaptive differential filter with the left region of interest to obtain the eyelash density estimate of the left region of interest, and convolving the right adaptive differential filter with the right region of interest to obtain the eyelash density estimate of the right region of interest;
generating the eye semantic feature vector from the eyelash density estimation responses of the two regions of interest, according to the ordinal measure.
20. The method according to claim 19, wherein, in the design of the adaptive differential filters, the upper eyelid is fitted with left and right straight lines, and the directions of the left and right differential filters are respectively perpendicular to the two lines, achieving orientation adaptivity; the length and width of the filters are set according to the iris radius, achieving scale adaptivity.
21. The method according to claim 19, wherein, for a color human eye image, the filter convolves the image regions of interest on the three channels R, G, B, and the convolution values on all channels are added as the final response value of the filter and the color human eye image region of interest.
22. The method according to claim 1, wherein the score-level fusion comprises:
adaptively normalizing the matching score S_color of the iris color feature vectors, the matching score S_texture of the iris texture feature vectors and the matching score S_texton of the eye texton feature vectors according to the distribution of their respective original scores, obtaining the normalized scores S'_color, S'_texture and S'_texton;
computing the weighted sum of S'_color, S'_texture and S'_texton to obtain the fusion score S_f = w_1 S'_texture + w_2 S'_color + w_3 S'_texton;
correcting the weighted-sum score S_f according to the eye semantic feature comparison result S'_semantic to obtain S'_f, wherein the w_i (i = 1, 2, 3, w_1 + w_2 + w_3 = 1) are weights.
23. The method according to claim 22, wherein the correction criterion is that the final fusion score leans toward the result of the eye semantic features: if the eye semantic features are similar while the other features are dissimilar, the fused score is corrected to move in the similar direction; conversely, if the eye semantic features are dissimilar while the other features are similar, the fused score is corrected to move in the dissimilar direction.
PCT/CN2011/073072 2011-04-20 2011-04-20 基于人眼图像的多特征融合身份识别方法 WO2012142756A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2011/073072 WO2012142756A1 (zh) 2011-04-20 2011-04-20 基于人眼图像的多特征融合身份识别方法
CN201180005239.2A CN102844766B (zh) 2011-04-20 2011-04-20 基于人眼图像的多特征融合身份识别方法
US13/519,728 US9064145B2 (en) 2011-04-20 2011-04-20 Identity recognition based on multiple feature fusion for an eye image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2011/073072 WO2012142756A1 (zh) 2011-04-20 2011-04-20 基于人眼图像的多特征融合身份识别方法

Publications (1)

Publication Number Publication Date
WO2012142756A1 true WO2012142756A1 (zh) 2012-10-26

Family

ID=47041032

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/073072 WO2012142756A1 (zh) 2011-04-20 2011-04-20 基于人眼图像的多特征融合身份识别方法

Country Status (3)

Country Link
US (1) US9064145B2 (zh)
CN (1) CN102844766B (zh)
WO (1) WO2012142756A1 (zh)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104933344A (zh) * 2015-07-06 2015-09-23 北京中科虹霸科技有限公司 基于多生物特征模态的移动终端用户身份认证装置及方法
CN105447450A (zh) * 2015-11-12 2016-03-30 北京天诚盛业科技有限公司 虹膜识别中判断左右虹膜的方法和装置
CN105844278A (zh) * 2016-04-15 2016-08-10 浙江理工大学 一种多特征融合的织物扫描图案识别方法
US9466009B2 (en) 2013-12-09 2016-10-11 Nant Holdings Ip. Llc Feature density object classification, systems and methods
CN106355164A (zh) * 2016-09-30 2017-01-25 桂林师范高等专科学校 一种虹膜识别***
CN108009503A (zh) * 2017-12-04 2018-05-08 北京中科虹霸科技有限公司 基于眼周区域的身份识别方法
CN110348387A (zh) * 2019-07-12 2019-10-18 腾讯科技(深圳)有限公司 一种图像数据处理方法、装置以及计算机可读存储介质
CN111985925A (zh) * 2020-07-01 2020-11-24 江西拓世智能科技有限公司 基于虹膜识别和人脸识别的多模态生物识别支付的方法
TWI724736B (zh) * 2019-09-26 2021-04-11 大陸商上海商湯智能科技有限公司 圖像處理方法及裝置、電子設備、儲存媒體和電腦程式
CN113591747A (zh) * 2021-08-06 2021-11-02 合肥工业大学 一种基于深度学习的多场景虹膜识别方法
CN113673448A (zh) * 2021-08-24 2021-11-19 厦门立林科技有限公司 一种云和端集成的人脸图像质量动态检测方法及***
US11386636B2 (en) 2019-04-04 2022-07-12 Datalogic Usa, Inc. Image preprocessing for optical character recognition

Families Citing this family (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9294475B2 (en) * 2013-05-13 2016-03-22 Hoyos Labs Ip, Ltd. System and method for generating a biometric identifier
US9542626B2 (en) 2013-09-06 2017-01-10 Toyota Jidosha Kabushiki Kaisha Augmenting layer-based object detection with deep convolutional neural networks
CN104680128B (zh) * 2014-12-31 2022-10-25 北京释码大华科技有限公司 一种基于四维分析的生物特征识别方法和***
CN104573660A (zh) * 2015-01-13 2015-04-29 青岛大学 一种利用sift点描述子精确定位人眼的方法
EP3281138A4 (en) * 2015-04-08 2018-11-21 Wavefront Biometric Technologies Pty Limited Multi-biometric authentication
CN104850225B (zh) * 2015-04-28 2017-10-24 浙江大学 一种基于多层次融合的活动识别方法
CA2983749C (en) 2015-05-11 2021-12-28 Magic Leap, Inc. Devices, methods and systems for biometric user recognition utilizing neural networks
US20160366317A1 (en) * 2015-06-12 2016-12-15 Delta ID Inc. Apparatuses and methods for image based biometric recognition
CN105224918B (zh) * 2015-09-11 2019-06-11 深圳大学 基于双线性联合稀疏判别分析的步态识别方法
RU2711050C2 (ru) 2015-09-11 2020-01-14 Айверифай Инк. Качество изображения и признака, улучшение изображения и выделение признаков для распознавания по сосудам глаза и лицам и объединение информации о сосудах глаза с информацией о лицах и/или частях лиц для биометрических систем
CN105069448A (zh) * 2015-09-29 2015-11-18 厦门中控生物识别信息技术有限公司 一种真假人脸识别方法及装置
WO2017156547A1 (en) 2016-03-11 2017-09-14 Magic Leap, Inc. Structure learning in convolutional neural networks
CN106096621B (zh) * 2016-06-02 2019-05-21 西安科技大学 基于矢量约束的着降位置检测用随机特征点选取方法
US10181073B2 (en) 2016-06-29 2019-01-15 Intel Corporation Technologies for efficient identity recognition based on skin features
CN106203297B (zh) * 2016-06-30 2019-11-08 北京七鑫易维信息技术有限公司 一种身份识别方法及装置
CN107870923B (zh) * 2016-09-26 2020-05-12 北京眼神科技有限公司 图像检索方法和装置
US11205103B2 (en) 2016-12-09 2021-12-21 The Research Foundation for the State University Semisupervised autoencoder for sentiment analysis
KR102458241B1 (ko) 2016-12-13 2022-10-24 삼성전자주식회사 사용자 인식 장치 및 방법
US10534964B2 (en) * 2017-01-30 2020-01-14 Blackberry Limited Persistent feature descriptors for video
KR102369412B1 (ko) * 2017-02-02 2022-03-03 삼성전자주식회사 홍채 인식 장치 및 방법
CN107092873A (zh) * 2017-04-08 2017-08-25 闲客智能(深圳)科技有限公司 一种眼动方向识别方法及装置
CN107292285B (zh) * 2017-07-14 2020-01-14 Oppo广东移动通信有限公司 虹膜活体检测方法及相关产品
RU2670798C9 (ru) 2017-11-24 2018-11-26 Самсунг Электроникс Ко., Лтд. Способ аутентификации пользователя по радужной оболочке глаз и соответствующее устройство
CN107862305A (zh) * 2017-12-04 2018-03-30 北京中科虹霸科技有限公司 基于虹膜图像分类的虹膜身份识别比对加速方法
CN108537111A (zh) 2018-02-26 2018-09-14 阿里巴巴集团控股有限公司 一种活体检测的方法、装置及设备
US10553053B2 (en) * 2018-06-05 2020-02-04 Jeff Chen Biometric fusion electronic lock system
KR102637250B1 (ko) * 2018-11-06 2024-02-16 프린스톤 아이덴티티, 인크. 생체 측정 정확도 및/또는 효율성 강화 시스템 및 방법
CN109508695A (zh) * 2018-12-13 2019-03-22 北京中科虹霸科技有限公司 眼部多模态生物特征识别方法
CN111460880B (zh) * 2019-02-28 2024-03-05 杭州芯影科技有限公司 多模生物特征融合方法和***
WO2020210737A1 (en) * 2019-04-10 2020-10-15 Palmer Francis R Method and apparatus for facial verification
EP3973468A4 (en) 2019-05-21 2022-09-14 Magic Leap, Inc. HANDPOSITION ESTIMATING
CN110363136A (zh) * 2019-07-12 2019-10-22 北京字节跳动网络技术有限公司 用于识别眼睛设定特征的方法、装置、电子设备、及介质
CN110751069A (zh) * 2019-10-10 2020-02-04 武汉普利商用机器有限公司 一种人脸活体检测方法及装置
CN113822308B (zh) * 2020-06-20 2024-04-05 北京眼神智能科技有限公司 多模态生物识别的比对分数融合方法、装置、介质及设备
CN111767421A (zh) * 2020-06-30 2020-10-13 北京字节跳动网络技术有限公司 用于检索图像方法、装置、电子设备和计算机可读介质
CN112667840B (zh) * 2020-12-22 2024-05-28 ***股份有限公司 特征样本库构建方法、通行识别方法、装置及存储介质
CN112580530A (zh) * 2020-12-22 2021-03-30 泉州装备制造研究所 一种基于眼底图像的身份识别方法
CN113177914B (zh) * 2021-04-15 2023-02-17 青岛理工大学 基于语义特征聚类的机器人焊接方法及***
CN114913638B (zh) * 2022-04-08 2024-03-01 湖北安源建设集团有限公司 一种基于互联网的消防门禁管理方法及***
CN115083006A (zh) * 2022-08-11 2022-09-20 北京万里红科技有限公司 虹膜识别模型训练方法、虹膜识别方法及装置
CN115514564B (zh) * 2022-09-22 2023-06-16 成都坐联智城科技有限公司 基于数据共享的数据安全处理方法及***
US11762969B1 (en) * 2023-01-12 2023-09-19 King Saud University Systems and methods for facilitating biometric recognition
CN116343313B (zh) * 2023-05-30 2023-08-11 乐山师范学院 一种基于眼部特征的人脸识别方法
CN117115900B (zh) * 2023-10-23 2024-02-02 腾讯科技(深圳)有限公司 一种图像分割方法、装置、设备及存储介质
CN117687313B (zh) * 2023-12-29 2024-07-12 广东福临门世家智能家居有限公司 基于智能门锁的智能家居设备控制方法及***

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1932840A (zh) * 2005-09-16 2007-03-21 中国科学技术大学 基于虹膜和人脸的多模态生物特征身份识别***

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002101322A (ja) * 2000-07-10 2002-04-05 Matsushita Electric Ind Co Ltd 虹彩カメラモジュール
WO2003003169A2 (en) * 2001-06-28 2003-01-09 Cloakware Corporation Secure method and system for biometric verification
US7278028B1 (en) * 2003-11-05 2007-10-02 Evercom Systems, Inc. Systems and methods for cross-hatching biometrics with other identifying data
US8219438B1 (en) * 2008-06-30 2012-07-10 Videomining Corporation Method and system for measuring shopper response to products based on behavior and facial expression
US7742623B1 (en) * 2008-08-04 2010-06-22 Videomining Corporation Method and system for estimating gaze target, gaze sequence, and gaze map from video
US8374404B2 (en) * 2009-02-13 2013-02-12 Raytheon Company Iris recognition using hyper-spectral signatures
US8125546B2 (en) * 2009-06-05 2012-02-28 Omnivision Technologies, Inc. Color filter array pattern having four-channels
KR100999056B1 (ko) * 2009-10-30 2010-12-08 (주)올라웍스 이미지 컨텐츠에 대해 트리밍을 수행하기 위한 방법, 단말기 및 컴퓨터 판독 가능한 기록 매체
US8274592B2 (en) * 2009-12-22 2012-09-25 Eastman Kodak Company Variable rate browsing of an image collection
US8295631B2 (en) * 2010-01-29 2012-10-23 Eastman Kodak Company Iteratively denoising color filter array images
US8885882B1 (en) * 2011-07-14 2014-11-11 The Research Foundation For The State University Of New York Real time eye tracking for human computer interaction

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1932840A (zh) * 2005-09-16 2007-03-21 中国科学技术大学 基于虹膜和人脸的多模态生物特征身份识别***

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG HENG ET AL.: "Drivers eyes state recognition based on fuzzy fusion", COMPUTER APPLICATIONS, vol. 27, no. 2, February 2007 (2007-02-01), pages 349 - 350, 354 *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11527055B2 (en) 2013-12-09 2022-12-13 Nant Holdings Ip, Llc Feature density object classification, systems and methods
US9466009B2 (en) 2013-12-09 2016-10-11 Nant Holdings Ip. Llc Feature density object classification, systems and methods
US10671879B2 (en) 2013-12-09 2020-06-02 Nant Holdings Ip, Llc Feature density object classification, systems and methods
US9754184B2 (en) 2013-12-09 2017-09-05 Nant Holdings Ip, Llc Feature density object classification, systems and methods
US10102446B2 (en) 2013-12-09 2018-10-16 Nant Holdings Ip, Llc Feature density object classification, systems and methods
CN104933344A (zh) * 2015-07-06 2015-09-23 北京中科虹霸科技有限公司 基于多生物特征模态的移动终端用户身份认证装置及方法
CN104933344B (zh) * 2015-07-06 2019-01-04 北京中科虹霸科技有限公司 基于多生物特征模态的移动终端用户身份认证装置及方法
CN105447450B (zh) * 2015-11-12 2019-01-25 北京眼神智能科技有限公司 虹膜识别中判断左右虹膜的方法和装置
CN105447450A (zh) * 2015-11-12 2016-03-30 北京天诚盛业科技有限公司 虹膜识别中判断左右虹膜的方法和装置
CN105844278B (zh) * 2016-04-15 2019-01-25 浙江理工大学 一种多特征融合的织物扫描图案识别方法
CN105844278A (zh) * 2016-04-15 2016-08-10 浙江理工大学 一种多特征融合的织物扫描图案识别方法
CN106355164A (zh) * 2016-09-30 2017-01-25 桂林师范高等专科学校 一种虹膜识别***
CN108009503A (zh) * 2017-12-04 2018-05-08 北京中科虹霸科技有限公司 基于眼周区域的身份识别方法
US11386636B2 (en) 2019-04-04 2022-07-12 Datalogic Usa, Inc. Image preprocessing for optical character recognition
CN110348387A (zh) * 2019-07-12 2019-10-18 腾讯科技(深圳)有限公司 一种图像数据处理方法、装置以及计算机可读存储介质
TWI724736B (zh) * 2019-09-26 2021-04-11 大陸商上海商湯智能科技有限公司 圖像處理方法及裝置、電子設備、儲存媒體和電腦程式
US11532180B2 (en) 2019-09-26 2022-12-20 Shanghai Sensetime Intelligent Technology Co., Ltd. Image processing method and device and storage medium
CN111985925A (zh) * 2020-07-01 2020-11-24 江西拓世智能科技有限公司 基于虹膜识别和人脸识别的多模态生物识别支付的方法
CN113591747A (zh) * 2021-08-06 2021-11-02 合肥工业大学 一种基于深度学习的多场景虹膜识别方法
CN113591747B (zh) * 2021-08-06 2024-02-23 合肥工业大学 一种基于深度学习的多场景虹膜识别方法
CN113673448A (zh) * 2021-08-24 2021-11-19 厦门立林科技有限公司 一种云和端集成的人脸图像质量动态检测方法及***

Also Published As

Publication number Publication date
US9064145B2 (en) 2015-06-23
CN102844766A (zh) 2012-12-26
US20140037152A1 (en) 2014-02-06
CN102844766B (zh) 2014-12-24

Similar Documents

Publication Publication Date Title
WO2012142756A1 (zh) 基于人眼图像的多特征融合身份识别方法
CN105825176B (zh) 基于多模态非接触身份特征的识别方法
US8917914B2 (en) Face recognition system and method using face pattern words and face pattern bytes
KR101901591B1 (ko) 얼굴 인식 장치 및 그 제어방법
WO2013087026A1 (zh) 一种虹膜定位方法和定位装置
US20060147094A1 (en) Pupil detection method and shape descriptor extraction method for a iris recognition, iris feature extraction apparatus and method, and iris recognition system and method using its
Jan Segmentation and localization schemes for non-ideal iris biometric systems
CN109376604B (zh) 一种基于人体姿态的年龄识别方法和装置
JP2003512684A (ja) 異なるイメジャからの顔及び身体の画像を整列し及び比較するための方法及び装置
KR20100134533A (ko) 트레이스 변환을 이용한 홍채 및 눈 인식 시스템
CN111178130A (zh) 一种基于深度学习的人脸识别方法、***和可读存储介质
Trabelsi et al. A new multimodal biometric system based on finger vein and hand vein recognition
Harakannanavar et al. An extensive study of issues, challenges and achievements in iris recognition
Alkoot et al. A review on advances in iris recognition methods
CN110688872A (zh) 基于唇部的人物识别方法、装置、程序、介质及电子设备
Sarode et al. Review of iris recognition: an evolving biometrics identification technology
Kushwaha et al. PUG-FB: Person-verification using geometric and Haralick features of footprint biometric
Latha et al. A robust person authentication system based on score level fusion of left and right irises and retinal features
Attallah et al. Application of BSIF, Log-Gabor and mRMR transforms for iris and palmprint based Bi-modal identification system
Deshpande et al. Fast and Reliable Biometric Verification System Using Iris
Lokhande et al. Wavelet packet based iris texture analysis for person authentication
Yan et al. Flexible iris matching based on spatial feature reconstruction
Viriri et al. Improving iris-based personal identification using maximum rectangular region detection
Hassan et al. Comparative study of different window sizes setting in median filter for off-angle iris recognition
Chowdhary Analysis of Unimodal and Multimodal Biometric System

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 201180005239.2

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11863953

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 13519728

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11863953

Country of ref document: EP

Kind code of ref document: A1