CN110705487B - Palm print acquisition equipment and method and image acquisition device thereof - Google Patents

Palm print acquisition equipment and method and image acquisition device thereof

Info

Publication number
CN110705487B
CN110705487B
Authority
CN
China
Prior art keywords
palm
image
point cloud
dimensional
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910951155.6A
Other languages
Chinese (zh)
Other versions
CN110705487A (en)
Inventor
张昆霭
郭振华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen International Graduate School of Tsinghua University
Original Assignee
Shenzhen International Graduate School of Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen International Graduate School of Tsinghua University filed Critical Shenzhen International Graduate School of Tsinghua University
Priority to CN201910951155.6A
Publication of CN110705487A
Application granted
Publication of CN110705487B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/1335 Combining adjacent partial images (e.g. slices) to create a composite input or reference pattern; Tracking a sweeping finger movement
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/13 Sensors therefor
    • G06V40/1312 Sensors therefor direct reading, e.g. contactless acquisition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/13 Sensors therefor
    • G06V40/1324 Sensors therefor by using geometrical optics, e.g. using prisms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/12 Acquisition of 3D measurements of objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Optics & Photonics (AREA)
  • Image Processing (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A palm print acquisition device, a palm print acquisition method, and an image acquisition device for the equipment. The equipment comprises an image acquisition device and an image processing device; the image acquisition device comprises a lens set, a light splitting module, an ordinary camera and a depth camera. After the light captured by the lens set is split by the light splitting module, one part is collected by the ordinary camera to obtain a two-dimensional RGB image, while the other part is simultaneously collected by the depth camera to obtain a depth map of the palm. The image processing device computes from the two-dimensional RGB image and the depth map a 3D point cloud describing the spatial pose of the palm, the spatial pose comprising the two-dimensional coordinates of the RGB image pixels and a third coordinate for each RGB pixel determined by the depth map; corrects the spatial pose of the 3D point cloud to a preset positive direction through a spatial transformation; removes the third-dimension information of the corrected 3D point cloud to obtain a corrected two-dimensional palm image; and performs palm print ROI extraction on the corrected two-dimensional palm image. The method significantly improves the stability of palm print ROI extraction.

Description

Palm print acquisition equipment and method and image acquisition device thereof
Technical Field
The invention relates to the field of biometric recognition, and in particular to a palm print acquisition device, a palm print acquisition method, and an image acquisition device thereof.
Background
Palm print recognition is an important technology in biometrics. Its general pipeline is: image acquisition, image preprocessing, palm print ROI extraction, feature extraction, and feature matching for identification. Region-of-interest (ROI) extraction defines a region on the palm, usually rectangular, for subsequent feature extraction. The stability of ROI extraction therefore directly affects the stability of the features and, in turn, the performance of the entire palm print recognition algorithm.
Existing palm print recognition schemes can be divided into contact and non-contact approaches, whose palm print ROI extraction methods differ slightly.
Contact palm print recognition: the palm must be pressed against or secured to the acquisition device to ensure that it occupies a substantially consistent position at each acquisition. The ROI is typically defined by a number of keypoints: for example, by two fixed points on the device, or by the valley points between adjacent fingers (the point between the index and middle fingers and the point between the middle and ring fingers), which can be regarded as fixed because the device constrains the finger poses. Either way, as long as two stable keypoints can be found, a rectangular coordinate system can be established from their connecting line and its perpendicular, and a rectangular region can be cut out of the palm as the ROI according to a fixed rule.
Non-contact palm print recognition: the palm is not fixed and can move freely in space relative to the acquisition device, so the poses of the palm and fingers change constantly. The palm is therefore usually detected in the image to locate its position, keypoints such as finger joints and finger-root points are then located, a coordinate system is established from these keypoints, and a rectangular region is cut out of the palm as the ROI according to a fixed rule.
Contact palm print recognition has the following defects: the palm must be fixed on the device, which limits freedom of use, and direct contact between the palm and the device raises hygiene concerns, especially when many people share one device. In addition, in actual use, changes in illumination or incorrect placement of the palm easily change the extracted ROI.
Non-contact palm print recognition has the following defect: pose changes of the fingers and palm easily make the extracted ROI unstable.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art by providing a palm print acquisition device, a palm print acquisition method, and an image acquisition device thereof.
To this end, the invention adopts the following technical scheme:
A palm print acquisition device comprises an image acquisition device and an image processing device. The image acquisition device comprises a lens module, a light splitting module, an ordinary camera and a depth camera. The lens module captures light from the palm; after the light is split by the light splitting module, one part is collected by the ordinary camera and the other part is simultaneously collected by the depth camera, so that the image processing device obtains a two-dimensional RGB image of the palm through the ordinary camera and a depth map of the palm through the depth camera. The image processing device computes from the two-dimensional RGB image and the depth map a 3D point cloud describing the spatial pose of the palm, the spatial pose of the 3D point cloud comprising the two-dimensional coordinates of the RGB image pixels and a third coordinate for each RGB pixel determined by the depth map; corrects the spatial pose of the 3D point cloud to a preset positive direction through a spatial transformation; removes the third-dimension information of the corrected 3D point cloud to obtain a corrected two-dimensional palm image; and performs palm print ROI extraction on the corrected two-dimensional palm image.
Further embodiments may include the following features:
the depth camera is a light field camera.
The light splitting module is a semi-transparent semi-reflecting mirror.
The image acquisition device further comprises a control unit or a synchronization trigger for controlling or triggering the ordinary camera and the depth camera to capture images simultaneously.
The image processing device enlarges the depth map to the size of the two-dimensional RGB image, preferably by bilinear interpolation, so that each pixel of the enlarged depth map corresponds one-to-one with a pixel of the two-dimensional RGB image; the depth value of each pixel is taken as the third coordinate of the corresponding RGB image pixel, generating a dense point cloud.

Let the two-dimensional RGB image be F, the enlarged depth map be G, and the point cloud be H. A point (x_s, y_s) on the RGB image has value F(x_s, y_s) and corresponding depth value G(x_s, y_s) on the depth map. The corresponding point in the point cloud has coordinates

(x_s, y_s, z_s) = (x_s, y_s, G(x_s, y_s))

and RGB value

H(x_s, y_s, z_s) = F(x_s, y_s).
the positive direction is defined as: in the case of a naturally open palm with all five fingers open, the palm plane is parallel to the imaging plane of the depth camera, the ray emanating from the midpoint of the base of the middle finger passes through the tip of the middle finger, and the direction of this ray in the depth image is oriented vertically upward.
The spatial transformation is performed by a spatial transformer network (STN) trained on a large number of positive-direction and non-positive-direction palm samples.
The spatial transformation is performed by the following mapping:

(x_t, y_t, z_t)^T = A_θ · (x_s, y_s, z_s, 1)^T

where (x_s, y_s, z_s) are the coordinates of a point of the 3D point cloud before transformation, (x_t, y_t, z_t) are the coordinates of the corresponding point of the transformed 3D point cloud, and

A_θ = [ θ_11  θ_12  θ_13  θ_14
        θ_21  θ_22  θ_23  θ_24
        θ_31  θ_32  θ_33  θ_34 ]

is the transformation matrix.

The trained STN acts on the original point cloud, including the fingers, and the RGB values of the transformed point cloud are computed from the RGB values of the original point cloud. Let H_T be the point cloud obtained by transforming the original point cloud H, and let ψ(H, x_s, y_s, z_s) denote the RGB value interpolated from the points of H near (x_s, y_s, z_s). A point (x_t, y_t, z_t) of the transformed point cloud then has RGB value

H_T(x_t, y_t, z_t) = ψ(H, x_s, y_s, z_s).

Removing the z coordinate of all points of the transformed point cloud H_T yields the corrected two-dimensional palm image F_T. A point (x_t, y_t, z_t) of H_T corresponds to the point (x_t, y_t) on F_T, with RGB value

F_T(x_t, y_t) = H_T(x_t, y_t, z_t).
a palm print collecting method uses the palm print collecting device to collect palm prints to obtain a palm print ROI.
An image acquisition device for use in a palm print acquisition device includes a lens module, a light splitting module, an ordinary camera and a depth camera. The lens module captures light from the palm; after the light is split by the light splitting module, one part is collected by the ordinary camera and the other part is simultaneously collected by the depth camera. The ordinary camera acquires a two-dimensional RGB image of the palm and the depth camera acquires a depth map of the palm; these are used to generate a two-dimensional palm image corrected for spatial pose, on which palm print ROI extraction is performed.
The invention has the following beneficial effects:
the invention provides equipment and a method for acquiring a palm print image and extracting and positioning a palm print interested region by combining depth map imaging and common imaging. According to the equipment and the method, the light splitting device, the depth camera and the common camera are used for simultaneously acquiring the depth map and the two-dimensional RGB image of the palm, wherein the depth camera obtains the depth map of the palm through a depth map imaging technology (such as light field imaging), the depth map and the two-dimensional RGB image are combined to generate a 3D point cloud, then the space posture of the palm is corrected through a trained space transformation network, and a stable ROI is extracted, so that the ROI extraction stability can be remarkably improved, and the palm print recognition reliability is improved. The method can be used for non-contact palm print recognition, and overcomes the defect that the extracted ROI is unstable easily caused by the posture change of fingers and palms in the existing non-contact palm print recognition.
Drawings
FIG. 1 is a flow chart of a palm print acquisition method according to the present invention;
FIG. 2 is a schematic structural diagram of the image acquisition device in an embodiment of the palm print acquisition device of the present invention;
FIG. 3 is a schematic structural diagram of the image acquisition device in another embodiment of the palm print acquisition device of the present invention;
FIG. 4 is a schematic structural diagram of an ordinary camera in an embodiment of the palm print acquisition device of the present invention;
FIG. 5 is a schematic structural diagram of a light field camera in an embodiment of the palm print acquisition device of the present invention;
FIG. 6 is a schematic view of light field camera imaging;
FIG. 7 is a schematic view of the microlens array of a light field camera.
Detailed Description
The embodiments of the present invention will be described in detail below. It should be emphasized that the following description is merely exemplary in nature and is not intended to limit the scope of the invention or its application.
It will be understood that when an element is referred to as being "secured to" or "disposed on" another element, it can be directly on the other element or be indirectly on the other element. When an element is referred to as being "connected to" another element, it can be directly connected to the other element or be indirectly connected to the other element. The connection may be for fixation or for circuit connection.
It is to be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on those shown in the drawings; they are used only for convenience and simplicity in describing the embodiments of the present invention, do not indicate or imply that the referenced device or element must have a particular orientation or be constructed and operated in a particular orientation, and should not be construed as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present invention, "a plurality" means two or more unless specifically limited otherwise.
Referring to fig. 1 to 7, in an embodiment, a palm print acquisition device includes an image acquisition device and an image processing device. The image acquisition device includes a lens module, a light splitting module, an ordinary camera and a depth camera. The lens module captures light from the palm; after the light is split by the light splitting module, one part is collected by the ordinary camera and the other part is simultaneously collected by the depth camera, so that the image processing device obtains a two-dimensional RGB image of the palm through the ordinary camera and a depth map of the palm through the depth camera. The image processing device computes from the two-dimensional RGB image and the depth map a 3D point cloud describing the spatial pose of the palm, the spatial pose of the 3D point cloud comprising the two-dimensional coordinates of the RGB image pixels and a third coordinate for each RGB pixel determined by the depth map; corrects the spatial pose of the 3D point cloud to a preset positive direction through a spatial transformation; removes the third-dimension information of the corrected 3D point cloud to obtain a corrected two-dimensional palm image; and performs palm print ROI extraction on the corrected two-dimensional palm image.
Referring to fig. 2-3, in a preferred embodiment, the depth camera is a light field camera, and the depth map of the palm is obtained from the light field image captured by the light field camera.
Referring to fig. 2 to 3, in a preferred embodiment, the light splitting module is a half-mirror (also called a spectroscope).
Referring to fig. 2, the image acquisition device may include a control unit connected to the ordinary camera and the depth camera and configured to control the two cameras to capture images simultaneously.
Referring to fig. 3, in a preferred embodiment, the image acquisition device includes a synchronization trigger connected to the ordinary camera and the depth camera for triggering the two cameras to capture images simultaneously.
Referring to fig. 2 to 7, in another embodiment, an image acquisition device for the palm print acquisition device of any of the foregoing embodiments includes a lens module, a light splitting module, an ordinary camera and a depth camera. The lens module captures light from the palm; after the light is split by the light splitting module, one part is collected by the ordinary camera and the other part is simultaneously collected by the depth camera. The ordinary camera acquires a two-dimensional RGB image of the palm and the depth camera (a light field camera in this embodiment) acquires a depth map of the palm; the two images are provided to the image processing device, which generates a two-dimensional palm image corrected for spatial pose and performs palm print ROI extraction on the corrected image.
In another embodiment, a palm print acquisition method uses the palm print acquisition device of any of the foregoing embodiments to acquire palm prints and obtain a palm print ROI. The processing flow of the method, shown in fig. 1, comprises the steps of image acquisition, point cloud generation, palm pose correction, and ROI positioning.
The principles, features and advantages of particular embodiments of the present invention are further described below in conjunction with the drawings.
Image acquisition:
the light field imaging technology can acquire the angle information of light rays so as to calculate a depth map, but an original light field image is actually the fusion of a plurality of visual angle images, has certain redundant information, and cannot directly express the details of a palm print. The resolution of the original light field image decomposed into a single view angle image is smaller than the original resolution, and palm print details are lost. The depth map is computed from a plurality of single-view images, the resolution of which is consistent with that of the single-view images, and palm print details are also lost. Therefore, it is not suitable for palm print recognition in both original light field images, single-view images, and depth maps. In the embodiment of the invention, the image acquisition device is provided, the incident light is divided into two by adopting the spectroscope, the common RGB camera and the depth camera are used for simultaneously acquiring the palm print image, the high-resolution 2D palm print image is reserved, and the light field image of the palm print can be acquired.
In a preferred embodiment, the image acquisition device, shown in fig. 3, includes a lens group, a beam splitter, an ordinary camera, a light field camera and a synchronization trigger.
Lens group: a set of lenses, detachable and replaceable to adapt to different shooting distances.
Beam splitter: also known as a half-mirror (spectroscope); it transmits 50% of the incident light and reflects the other 50%.
Ordinary camera: comprises a lens and an image sensor, as shown in fig. 4.
Light field camera: a microlens array is added between the lens and the image sensor of an ordinary camera, as shown in fig. 5. The focal length and placement are chosen so that light refracted by the lens and the microlens array is focused on the image sensor.
Synchronization trigger: connected to the ordinary camera and the light field camera; it triggers both cameras to capture at the same instant.
In the image acquisition device, the lens group and the light field camera are coaxial, and the beam splitter is at a 45-degree angle to the optical axis of the lens group; the beam transmitted through the beam splitter coincides with the optical axis of the light field camera, while the beam reflected by the beam splitter coincides with the optical axis of the ordinary camera.
A light field is the information about light in a space collected by a group of lenses arranged according to a specific rule; its main characteristic is that, in addition to the two-dimensional image information collected by a single lens, it captures the angular information of the light. Fig. 6 illustrates typical light field imaging: the imaging elements are a main lens, a microlens array and an image sensor. The microlens array is generally formed by arranging many tiny convex lenses of identical aperture and focal length equidistantly in a plane, as shown in fig. 7. Select any sub-aperture on the main lens, such as sub-aperture A in fig. 6: after passing through A, light from different positions of the photographed scene passes through each microlens of the array and is imaged on the image sensor behind it, and the images A1, A2, A3 and A4 are each only a part of the scene. Thus, through one sub-aperture, different portions of the scene are dispersed behind the individual microlenses. When a sub-aperture at another position is selected, the imaged positions behind the microlenses change accordingly. The process can be viewed as observing the photographed scene through many small holes at different positions on the main lens, with the observation angle changing as the sub-aperture position changes. Light field imaging can therefore capture the angular information of light; the scene observed from different viewing angles can be recovered by later image-processing algorithms, and the depth of each position in the scene can be computed with multi-view vision techniques to form a depth map.
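To make the sub-aperture idea concrete, the following Python sketch (illustrative only, not part of the claimed device) extracts the grid of sub-aperture views from a raw plenoptic image; it assumes an idealized microlens array aligned with the pixel grid at an integer pitch, so that the pixel position under each microlens indexes one sub-aperture of the main lens. It also shows why each single view has reduced resolution.

```python
import numpy as np

def subaperture_views(raw: np.ndarray, pitch: int) -> np.ndarray:
    """Split a raw plenoptic image into its sub-aperture views.

    raw   : (H, W) raw sensor image; each microlens covers a pitch x pitch pixel block.
    pitch : number of pixels behind each microlens along one axis.
    Returns an array of shape (pitch, pitch, H//pitch, W//pitch), i.e. one
    low-resolution view per sub-aperture position (u, v).
    """
    h, w = raw.shape[0] // pitch, raw.shape[1] // pitch
    raw = raw[:h * pitch, :w * pitch]          # crop to whole microlenses
    blocks = raw.reshape(h, pitch, w, pitch)   # (lens row, u, lens col, v)
    return blocks.transpose(1, 3, 0, 2)        # (u, v, y, x)

# A 7-pixel pitch turns a 2100 x 2800 sensor into 49 views of only 300 x 400 pixels,
# which is why neither a single view nor the derived depth map keeps palm print detail.
views = subaperture_views(np.random.rand(2100, 2800), pitch=7)
assert views.shape == (7, 7, 300, 400)
```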
Point cloud generation:
the palm image collected by the light field camera can be calculated to obtain a depth map, and the resolution of the depth map is smaller than that of the original image. And (3) amplifying the depth map to be the same as the size of the palm RGB image acquired by the common camera through bilinear interpolation, wherein each pixel on the amplified depth map corresponds to the RGB image pixel one by one. And taking the depth value as a third-dimensional coordinate of the RGB image pixel, namely generating a dense point cloud.
Let the RGB image be F, the enlarged depth map be G, and the point cloud be H. A point (x_s, y_s) on the RGB image has value F(x_s, y_s) and corresponding depth value G(x_s, y_s) on the depth map; the corresponding point in the point cloud has coordinates

(x_s, y_s, z_s) = (x_s, y_s, G(x_s, y_s))

and RGB value

H(x_s, y_s, z_s) = F(x_s, y_s).
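A minimal sketch of this step, assuming the depth map and RGB image are already registered (which the coaxial split-beam design is intended to guarantee) and using OpenCV's bilinear resize for the enlargement:

```python
import numpy as np
import cv2  # OpenCV, assumed available for the bilinear resize

def dense_point_cloud(rgb: np.ndarray, depth: np.ndarray):
    """Build the dense point cloud H from RGB image F and low-resolution depth map G.

    rgb   : (H, W, 3) two-dimensional RGB image F from the ordinary camera.
    depth : (h, w) depth map G from the light field camera, with h <= H and w <= W.
    Returns (xyz, colors): coordinates (x_s, y_s, G(x_s, y_s)) and values F(x_s, y_s).
    """
    H, W = rgb.shape[:2]
    # Enlarge G by bilinear interpolation so each RGB pixel gets one depth value.
    depth_up = cv2.resize(depth, (W, H), interpolation=cv2.INTER_LINEAR)
    ys, xs = np.mgrid[0:H, 0:W]
    # (x_s, y_s, z_s) = (x_s, y_s, G(x_s, y_s))
    xyz = np.stack([xs, ys, depth_up], axis=-1).reshape(-1, 3).astype(np.float32)
    # H(x_s, y_s, z_s) = F(x_s, y_s)
    colors = rgb.reshape(-1, 3)
    return xyz, colors
```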
Correcting the palm posture:
since the posture of the fingers is unstable, first, the finger portions are removed according to the depth map of the palm, and only the palm portion is retained. The 3D point cloud of the palm part describes the space posture of the palm and needs to be corrected to a positive direction through space transformation. We define the positive direction of the palm as: under the condition that the palm is naturally opened and the five fingers are fully opened, the palm plane is parallel to the imaging plane of the light field camera, a ray emitted from the middle finger root middle point passes through the middle finger tip, and the direction of the ray in the light field image is vertically upward.
A large number of positive-direction and non-positive-direction palm samples are collected to train a three-dimensional spatial transformer network (STN), so that palm point clouds in various poses are corrected to the positive direction after passing through the STN. Let the point cloud coordinates before transformation be (x_s, y_s, z_s) and the transformed coordinates be (x_t, y_t, z_t). The mapping relationship is

(x_t, y_t, z_t)^T = A_θ · (x_s, y_s, z_s, 1)^T

where the 3×4 transformation matrix

A_θ = [ θ_11  θ_12  θ_13  θ_14
        θ_21  θ_22  θ_23  θ_24
        θ_31  θ_32  θ_33  θ_34 ]

is the weight matrix to be trained.
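The patent does not specify a network architecture for the 3D STN. As one plausible sketch (all design choices here are assumptions, not the claimed method), a PointNet-style localization network can regress the 12 entries of A_θ from the palm point cloud, initialized to the identity transform as is customary for STNs:

```python
import torch
import torch.nn as nn

class PalmSTN3d(nn.Module):
    """Hypothetical localization network regressing the 3x4 matrix A_theta
    from a palm point cloud of shape (batch, n_points, 3)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 256, 1), nn.ReLU())
        self.head = nn.Linear(256, 12)
        # Start from the identity transform so training begins with no correction.
        nn.init.zeros_(self.head.weight)
        with torch.no_grad():
            self.head.bias.copy_(torch.tensor(
                [1., 0., 0., 0.,
                 0., 1., 0., 0.,
                 0., 0., 1., 0.]))

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        f = self.features(pts.transpose(1, 2)).max(dim=2).values  # global feature
        return self.head(f).view(-1, 3, 4)                        # A_theta per sample

# Training could, for instance, regress A_theta so that transformed non-positive
# samples match their positive-direction counterparts (a labeling assumption).
```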
The trained STN acts on the original point cloud, including the fingers, and the RGB values of the transformed point cloud are computed from the RGB values of the original point cloud. Let H_T be the point cloud obtained by transforming the original point cloud H, and let ψ(H, x_s, y_s, z_s) denote the RGB value interpolated from the points of the point cloud H near (x_s, y_s, z_s). A point (x_t, y_t, z_t) in the transformed point cloud then has RGB value

H_T(x_t, y_t, z_t) = ψ(H, x_s, y_s, z_s).

Our goal is to compute the RGB values of all points of the transformed point cloud H_T, where the RGB value H_T(x_t, y_t, z_t) of each point is computed from the RGB value of its corresponding pre-transformation point. In the actual calculation, the coordinates (x_s, y_s, z_s) are first back-derived from the coordinates (x_t, y_t, z_t); but because the back-derived (x_s, y_s, z_s) may not have integer coordinates and may not exist in the original point cloud H, the RGB value at (x_s, y_s, z_s) is interpolated from the RGB values of the points that actually exist nearby.
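A sketch of this inverse sampling, assuming the trained A_θ is given: the 3×4 matrix is extended to a 4×4 homogeneous matrix so it can be inverted, and ψ is approximated here by nearest-neighbor lookup with a KD-tree (the text leaves the interpolation scheme open, so this is one simple choice, not the patented one).

```python
import numpy as np
from scipy.spatial import cKDTree  # SciPy assumed available for neighbor queries

def colorize_transformed_cloud(xyz: np.ndarray, colors: np.ndarray,
                               A_theta: np.ndarray,
                               target_xyz: np.ndarray) -> np.ndarray:
    """Assign RGB values to points of the transformed cloud H_T by inverse mapping.

    xyz        : (N, 3) coordinates of the original point cloud H.
    colors     : (N, 3) RGB values H(x_s, y_s, z_s).
    A_theta    : (3, 4) trained affine matrix mapping H onto H_T.
    target_xyz : (M, 3) points (x_t, y_t, z_t) of H_T to be colorized.
    """
    # Extend A_theta to 4x4 homogeneous form and invert it, so that
    # (x_s, y_s, z_s) is back-derived from (x_t, y_t, z_t).
    A_h = np.vstack([A_theta, [0.0, 0.0, 0.0, 1.0]])
    A_inv = np.linalg.inv(A_h)
    tgt_h = np.hstack([target_xyz, np.ones((len(target_xyz), 1))])
    src = (A_inv @ tgt_h.T).T[:, :3]   # back-derived source coordinates
    # The back-derived points generally do not coincide with points of H,
    # so psi(H, x_s, y_s, z_s) is approximated by the nearest existing point.
    _, idx = cKDTree(xyz).query(src)
    return colors[idx]                 # H_T(x_t, y_t, z_t) = psi(H, x_s, y_s, z_s)
```

In practice, target_xyz can simply be the forward image of xyz under A_θ, resampled on a regular grid.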
ROI positioning:
Removing the z coordinate of all points of the transformed point cloud H_T yields the corrected two-dimensional palm image F_T. A point (x_t, y_t, z_t) of H_T corresponds to the point (x_t, y_t) on F_T, with RGB value

F_T(x_t, y_t) = H_T(x_t, y_t, z_t).
and obtaining a more stable palm print ROI by using the conventional ROI extraction method on the corrected two-dimensional palm image.
The foregoing is a more detailed description of the invention in connection with specific/preferred embodiments and is not intended to limit the practice of the invention to those descriptions. It will be apparent to those skilled in the art that various substitutions and modifications can be made to the described embodiments without departing from the spirit of the invention, and these substitutions and modifications should be considered to fall within the scope of the invention. In the description herein, references to the terms "one embodiment," "some embodiments," "preferred embodiments," "an example," "a specific example," "some examples," or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic uses of such terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples, and those skilled in the art may combine features of different embodiments or examples described in this specification provided they do not contradict one another. Although embodiments of the present invention and their advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the scope of the invention as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. One of ordinary skill in the art will readily appreciate that processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.

Claims (7)

1. A palm print acquisition device, characterized by comprising an image acquisition device and an image processing device, wherein the image acquisition device comprises a lens module, a light splitting module, an ordinary camera and a depth camera; the lens module captures light from a palm; after the light is split by the light splitting module, one part of the light is collected by the ordinary camera and the other part is simultaneously collected by the depth camera; and the image processing device obtains a two-dimensional RGB image of the palm through the ordinary camera and a depth map of the palm through the depth camera; wherein the image processing device computes from the two-dimensional RGB image and the depth map a 3D point cloud describing the spatial pose of the palm, the spatial pose of the 3D point cloud comprising the two-dimensional coordinates of the RGB image pixels and a third coordinate for each RGB pixel determined by the depth map; corrects the spatial pose of the 3D point cloud to a preset positive direction through a spatial transformation so as to correct the spatial pose of the palm; then removes the third-dimension information of the corrected 3D point cloud to obtain a corrected two-dimensional palm image; and performs palm print ROI extraction on the corrected two-dimensional palm image;
wherein the spatial transformation is performed by the following mapping:

(x_t, y_t, z_t)^T = A_θ · (x_s, y_s, z_s, 1)^T

where (x_s, y_s, z_s) are the coordinates of the 3D point cloud before transformation, (x_t, y_t, z_t) are the coordinates of the transformed 3D point cloud, and

A_θ = [ θ_11  θ_12  θ_13  θ_14
        θ_21  θ_22  θ_23  θ_24
        θ_31  θ_32  θ_33  θ_34 ]

is the transformation matrix;

the trained spatial transformer network (STN) acts on the original point cloud, including the fingers, and the RGB values of the transformed point cloud are computed from the RGB values of the original point cloud; the point cloud obtained by transforming the original point cloud H is H_T, and ψ(H, x_s, y_s, z_s) denotes the RGB value interpolated from the points of the point cloud H near (x_s, y_s, z_s); a point (x_t, y_t, z_t) in the transformed point cloud has RGB value

H_T(x_t, y_t, z_t) = ψ(H, x_s, y_s, z_s);

removing the z coordinate of all points of the transformed point cloud H_T yields the corrected two-dimensional palm image F_T; a point (x_t, y_t, z_t) of H_T corresponds to the point (x_t, y_t) on F_T, with RGB value

F_T(x_t, y_t) = H_T(x_t, y_t, z_t);
the image processing device enlarges the depth map by bilinear interpolation to the size of the two-dimensional RGB image, so that each pixel of the enlarged depth map corresponds one-to-one with a pixel of the two-dimensional RGB image, and takes the depth value of each pixel as the third coordinate of the corresponding RGB image pixel, generating a dense point cloud;

wherein the two-dimensional RGB image is F, the enlarged depth map is G, and the point cloud is H; a point (x_s, y_s) on the RGB image has value F(x_s, y_s) and corresponding depth value G(x_s, y_s) on the depth map; the corresponding point in the point cloud has coordinates

(x_s, y_s, z_s) = (x_s, y_s, G(x_s, y_s))

and RGB value

H(x_s, y_s, z_s) = F(x_s, y_s).
2. the palm print acquisition device of claim 1 wherein the depth camera is a light field camera.
3. The palm print acquisition device according to claim 1 or 2, wherein the light splitting module is a half-mirror.
4. The palm print acquisition device of claim 1 or 2, wherein the image acquisition device further comprises a control unit or a synchronization trigger for controlling or triggering the ordinary camera and the depth camera to capture images simultaneously.
5. The palm print acquisition device according to claim 1 or 2, characterized in that the positive direction is defined as: with the palm naturally open and all five fingers spread, the palm plane is parallel to the imaging plane of the depth camera, the ray emanating from the midpoint of the base of the middle finger passes through the middle fingertip, and this ray points vertically upward in the depth image.
6. The palm print acquisition device of claim 1 or 2, wherein the spatial transformer network is trained by acquiring a plurality of positive-direction palm samples and non-positive-direction palm samples.
7. A palm print acquisition method, characterized in that palm print acquisition is performed using the palm print acquisition apparatus of any one of claims 1 to 6 to obtain a palm print ROI.
CN201910951155.6A 2019-10-08 2019-10-08 Palm print acquisition equipment and method and image acquisition device thereof Active CN110705487B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910951155.6A CN110705487B (en) 2019-10-08 2019-10-08 Palm print acquisition equipment and method and image acquisition device thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910951155.6A CN110705487B (en) 2019-10-08 2019-10-08 Palm print acquisition equipment and method and image acquisition device thereof

Publications (2)

Publication Number Publication Date
CN110705487A CN110705487A (en) 2020-01-17
CN110705487B (en) 2022-07-29

Family

ID=69198209

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910951155.6A Active CN110705487B (en) 2019-10-08 2019-10-08 Palm print acquisition equipment and method and image acquisition device thereof

Country Status (1)

Country Link
CN (1) CN110705487B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111709268B (en) * 2020-04-24 2022-10-14 中国科学院软件研究所 Human hand posture estimation method and device based on human hand structure guidance in depth image
CN115032756B (en) * 2022-06-07 2022-12-27 北京拙河科技有限公司 Micro-lens array positioning method and system of light field camera

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101196988A (en) * 2007-12-25 2008-06-11 哈尔滨工业大学 Palm locating and center area extraction method of three-dimensional palm print identity identification system
CN102572249A (en) * 2012-03-08 2012-07-11 湖南创远智能科技有限公司 Dual-mode imaging optical system for face and iris
CN105045263A (en) * 2015-07-06 2015-11-11 杭州南江机器人股份有限公司 Kinect-based robot self-positioning method
CN106548489A (en) * 2016-09-20 2017-03-29 深圳奥比中光科技有限公司 The method for registering of a kind of depth image and coloured image, three-dimensional image acquisition apparatus
CN109199392A (en) * 2018-08-09 2019-01-15 重庆邮电大学 A kind of infant foot bottom image data acquiring and 3D shape modeling method
CN109255813A (en) * 2018-09-06 2019-01-22 大连理工大学 A kind of hand-held object pose real-time detection method towards man-machine collaboration
CN110135340A (en) * 2019-05-15 2019-08-16 中国科学技术大学 3D hand gestures estimation method based on cloud

Also Published As

Publication number Publication date
CN110705487A (en) 2020-01-17

Similar Documents

Publication Publication Date Title
US10914576B2 (en) Handheld large-scale three-dimensional measurement scanner system simultaneously having photogrammetric and three-dimensional scanning functions
US10311648B2 (en) Systems and methods for scanning three-dimensional objects
CN105894499B (en) A kind of space object three-dimensional information rapid detection method based on binocular vision
TWI555379B (en) An image calibrating, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof
JP3624353B2 (en) Three-dimensional shape measuring method and apparatus
WO2019015154A1 (en) Monocular three-dimensional scanning system based three-dimensional reconstruction method and apparatus
JP6223169B2 (en) Information processing apparatus, information processing method, and program
CN108269238B (en) Depth image acquisition device, depth image acquisition system and image processing method thereof
WO2017121058A1 (en) All-optical information acquisition system
WO2014044126A1 (en) Coordinate acquisition device, system and method for real-time 3d reconstruction, and stereoscopic interactive device
KR20130068193A (en) Multi images supplying system and multi images shooting device thereof
WO2014006545A1 (en) 3-d scanning and positioning system
CN111009030A (en) Multi-view high-resolution texture image and binocular three-dimensional point cloud mapping method
CN110705487B (en) Palm print acquisition equipment and method and image acquisition device thereof
CN111047678B (en) Three-dimensional face acquisition device and method
CN107578450A (en) A kind of method and system for the demarcation of panorama camera rigging error
WO2022100668A1 (en) Temperature measurement method, apparatus, and system, storage medium, and program product
JPWO2014192487A1 (en) Multi-view imaging system, acquired image composition processing method, and program
CN106846385B (en) Multi-sensing remote sensing image matching method, device and system based on unmanned aerial vehicle
CN109614909B (en) Iris acquisition equipment and method for extending acquisition distance
JP7300895B2 (en) Image processing device, image processing method, program, and storage medium
CN110059537A (en) A kind of three-dimensional face data acquisition methods and device based on Kinect sensor
TWI569642B (en) Method and device of capturing image with machine vision
JP3862402B2 (en) 3D model generation apparatus and computer-readable recording medium on which 3D model generation program is recorded
CN116597488A (en) Face recognition method based on Kinect database

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant