WO2024088640A1 - Miniaturized 3d-sensing camera system - Google Patents

Miniaturized 3d-sensing camera system

Info

Publication number
WO2024088640A1
Authority
WO
WIPO (PCT)
Prior art keywords
view
imaging device
light sensing
field
light
Prior art date
Application number
PCT/EP2023/075127
Other languages
French (fr)
Inventor
Andreas Brueckner
Alessandro PIOTTO
Preethi PADMANABHAN
Markus Rossi
Robert Gove
Original Assignee
Ams International Ag
Priority date
Filing date
Publication date
Application filed by Ams International Ag filed Critical Ams International Ag
Publication of WO2024088640A1 publication Critical patent/WO2024088640A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B 35/00 Stereoscopic photography
    • G03B 35/08 Stereoscopic photography by simultaneous recording
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/20 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from infrared radiation only
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/45 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/50 Constructional details
    • H04N 23/55 Optical parts specially adapted for electronic image sensors; Mounting thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/30 Transforming light or analogous information into electric information
    • H04N 5/33 Transforming infrared radiation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/70 SSIS architectures; Circuits associated therewith
    • H04N 25/703 SSIS architectures incorporating pixels for producing signals other than image signals
    • H04N 25/705 Pixels for depth measurement, e.g. RGBZ

Definitions

  • the present disclosure relates generally to an imaging device adapted to have a reduced footprint, and to a mobile communication device including the adapted imaging device.
  • An application example includes 3D-sensors for face recognition and authentication in modern smartphones, in which a 3D-camera system is used to collect three-dimensional data representing the face of a user to allow or deny access to the functionalities of the smartphone.
  • Another application example may include factory automation for Industry 5.0, in which 3D-sensors may be implemented to assist the movement of autonomous robots or the execution of an automated task.
  • Other examples may include authentication systems for electronic payments, augmented reality (AR), virtual reality (VR), internet-of-things (IoT) environments, and the like. Improvements in 3D-sensors may thus be of particular relevance for the further advancement of several technologies.
  • FIG.1A shows a 3D-sensor in a schematic representation, according to various aspects
  • FIG.1B shows a relationship between a field of view of the 3D-sensor and the geometry of an active area of the 3D-sensor, according to various aspects
  • FIG.2A shows a top view of an imaging device in a schematic representation, according to various aspects
  • FIG.2B shows a side view of the imaging device in a schematic representation, according to various aspects
  • FIG.2C shows a splitting of a field of view of the imaging device in a schematic representation, according to various aspects
  • FIG.2D shows possible arrangements of light sensing areas of the light sensing portion of the imaging device in a schematic representation, according to various aspects
  • FIG.2E shows a comparison between a single light sensing area and a split-configuration for the light sensing areas in a schematic representation, according to various aspects
  • FIG.3 shows possible configurations for the relative orientation between light sensing areas and partial fields of view in a schematic representation, according to various aspects
  • FIG.4 shows a processor for use in an imaging device in a schematic representation, according to various aspects
  • FIG.5 shows a projector for use in an imaging device in a schematic representation, according to various aspects
  • FIG.6A, FIG.6B, FIG.6C, FIG.6D each shows a relative arrangement of a projector and light sensing areas of an imaging device in a schematic representation, according to various aspects
  • FIG.6E shows possible configurations of projectors in relation to light sensing areas of an imaging device in a schematic representation, according to various aspects.
  • FIG.7A, FIG.7B, and FIG.7C each shows an optical system for use in an imaging device in a schematic representation, according to various aspects.

Description
  • Various techniques may provide 3D-sensing, such as structured light, active stereo vision, or time-of-flight systems.
  • each of these techniques allows generating or reconstructing three-dimensional information about a scene, e.g., as a three-dimensional image, a depth-map, or a three-dimensional point cloud.
  • 3D-sensing allows determining information about objects present in the scene, such as their position in the three-dimensional space, their shape, their orientation, and the like.
  • Exemplary applications of active 3D-sensing include their use in automotive, e.g., to assist autonomous driving, and in portable devices (e.g., smartphones, tablets, and the like) to implement various functionalities such as face or object recognition, autofocusing, gaming activities, etc.
  • In a smartphone, various components are disposed in a notch that is visible on the user-side of the smartphone.
  • Such components may include an ambient light sensor, a proximity sensor, a flood illuminator, a 3D-sensor (e.g., including a projector and a camera), a microphone, a front camera, and the like.
  • the notch takes away space for the screen of the smartphone, so that it may be desirable to reduce its area as much as possible.
  • the miniaturization of the overall dimension of the notch may be limited by the minimum dimensions of the components, in particular by the dimensions of the 3D-sensor, as discussed in further detail below.
  • FIG.1A shows a 3D-sensor 100 in a schematic view, according to various aspects.
  • the 3D-sensor 100 may be configured to carry out 3D-sensing via structured light, and may include a projector module (P) 102 and a camera module 104 (e.g., a Near-Infrared camera module).
  • the 3D-sensor 100 may have a conventional configuration, and is shown to illustrate some limitations of such arrangement.
  • the camera module 104 may include a receiver module (R) 108 (e.g., one or more optical components to collect light from the scene), and an image sensor active area (C) 106, illustratively a light sensing area 106, whose dimensions may be limiting for the overall size of the 3D-sensor 100.
  • the lateral dimension of the active area 106 may be limiting for the module footprint (MFP) on a substrate, e.g. on a printed circuit board.
  • the limiting dimension may be the vertical dimension of the active area 106 as seen from the front of the smartphone, illustratively, the y-size of the active area 106, I_y.
  • the y-size of the active area 106 may thus be or define a critical dimension (CD) of the 3D-sensor 100, which may be the limiting factor for the miniaturization of the 3D-sensor 100.
  • FIG.1B shows a relationship between the field of view of the 3D-sensor 100 and the geometry of the active area 106 in a schematic view, according to various aspects.
  • the angular size of the field of view may be expressed in terms of the field of view angle 110, α, and may be defined by application requirements.
  • the field of view angle 110 may be dependent on the focal length 112, f, and on the image size 114, r_I, on the active area 106.
  • the image size 114 may be expressed as a radius, r_I, and may be given by the number of pixels 116 of the active area 106, e.g. the number of pixels N_x, N_y in the x-direction and y-direction, as well as by the pixel pitch p_px (illustratively, a pixel-to-pixel distance, which may be equal along the x-direction and y-direction, as an example).
  • a relationship between the field of view angle 110, α, the focal length 112, f, and the image size 114, r_I, may be expressed via equation 1, with the lateral dimensions of the active area 106 denoted I_x and I_y.
  • the y-size of the active area 106, which then defines the critical dimension (CD), may be related to the image radius according to equation 3 in this scenario (a reconstruction of equations 1 to 3 is sketched below).
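  • The equations referenced above (equations 1 to 3) are not reproduced in this text extract. A plausible reconstruction from the surrounding definitions (field of view angle α, focal length f, image radius r_I, pixel counts N_x and N_y, pixel pitch p_px, active-area dimensions I_x and I_y) is sketched below; the exact form and numbering in the original filing may differ:

```latex
% Reconstructed forms (assumption), consistent with the surrounding description:
\tan(\alpha/2) = r_I / f                                     % (1) field of view angle vs. focal length and image radius
r_I = \frac{p_{px}}{2}\sqrt{N_x^{2} + N_y^{2}}               % (2) image radius from pixel count and pixel pitch
I_y = N_y\,p_{px} = \frac{2\,r_I}{\sqrt{1 + (N_x/N_y)^{2}}}  % (3) y-size (critical dimension) from the image radius
```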
  • the critical dimension (CD) of the 3D-sensor 100 may thus be reduced, for example, by reducing the image radius, r_I, according to equation 3, which however causes a loss of image resolution.
  • a further option would be to reduce the pixel pitch and/or the pixel size, which however leads to a loss in signal-to-noise ratio (illustratively, a reduced SNR, and thus a more noisy detection).
  • Yet another option would be to increase the angular sampling, φ, which however causes a loss of angular resolution.
  • the present disclosure may thus be based on the realization that a reduction in the critical dimension of a 3D-sensor may be of great importance for its integration in miniaturized devices, and may be based on a strategy that allows reducing such critical dimension without having to compromise other performance parameters such as the resolution or the SNR.
  • the approach described herein may be based on splitting the active area of a camera module of a 3D-sensor and contextually the imaged field of view into a plurality of portions. Such sub-division of the active area allows an additional degree of freedom in the geometrical arrangement of the camera, by enabling a disposition of the active areas in a manner that allows reducing the critical dimension while maintaining a same resolution (e.g., image resolution and angular resolution) and SNR.
  • the present disclosure is related to a 3D-sensor with a segmentation of the active area (and a corresponding segmentation of the field of view), which allows a y-size reduction.
  • a 3D-sensor as described herein may be for integration in a mobile communication device, e.g. a smartphone, a tablet, a laptop, and the like.
  • a 3D-sensor as described herein may allow reducing the y-dimension of a notch at the user-side, thus increasing the available screen area.
  • the applications of the 3D-sensor are not limited to the context of mobile communication devices, but the 3D-sensor may in general be applied in any device or system in which the overall reduced dimension of the 3D-sensor may be beneficial for the arrangement.
  • 3D-sensing may be the most relevant use case for an imaging device configured as described herein, but such configuration may in general be applied also for other applications, such as distance measurements, two-dimensional imaging, etc.
  • an imaging device may include: a camera, wherein the camera includes a plurality of light sensing areas, wherein the light sensing areas are arranged along a first direction; and an optical system configured to define a plurality of imaging channels, wherein each imaging channel corresponds to a partial field of view covering a respective portion of a total field of view of the imaging device, wherein the portions of the total field of view covered by the partial fields of view are arranged along a second direction at an angle with the first direction, and wherein each imaging channel is configured to direct light from the corresponding partial field of view towards a respective light sensing area of the plurality of light sensing areas.
  • the imaging device may be configured for imaging of infrared light (e.g., near-infrared light).
  • the light sensing areas may be configured to be sensitive for infrared light
  • the optical system may be designed for light with wavelength in the infrared range.
  • Infrared light may provide imaging in real-world scenarios with less influence from ambient light (e.g., from sunlight, or other sources of visible light in a scene, such as the headlights of a car, a street lamp, etc.), and may be the preferred choice for 3D-sensing applications such as for face recognition and authentication.
  • there may be various options for the optical components that allow a segmentation of the field of view of the imaging device.
  • the segmentation is achieved by means of one or more meta-surface optical elements, e.g. one or more meta-surface lenses.
  • a lens including a meta-surface allows a flexible control of the properties of light passing through the meta-surface, and thus provides a convenient and easily scalable manner of implementing the segmentation of the field of view.
  • an imaging device may include: a camera, wherein the camera includes a plurality of light sensing areas, wherein the light sensing areas are arranged along a first direction; and one or more meta-surface optical elements, wherein each metasurface optical element is configured to direct light from a corresponding partial field of view covering a respective portion of a total field of view of the imaging device towards one or more of the plurality of light sensing areas, wherein the portions of the total field of view covered by the partial fields of view are arranged along a second direction at an angle with the first direction.
  • the optical system may alternatively include other types of optical and/or mechanical components to manipulate the incoming light and achieve the segmentation of the field of view, as discussed in further detail below.
  • orientations described in the present disclosure refer to the scenario in which a light sensing area is arranged to face the field of view of the corresponding imaging device.
  • an x-dimension (or x-size) of the light sensing area may be the lateral dimension along the horizontal field of view direction
  • a y-dimension (or y-size) of the light sensing area may be the lateral dimension along the vertical field of view direction.
  • The assignment of a horizontal or vertical orientation, as well as the definition of the x-dimension or y-dimension, may be arbitrary.
  • FIG.2A and FIG.2B show a top view and a side view of an imaging device 200, respectively, in a schematic representation according to various aspects.
  • the imaging device 200 may be or may be configured as a 3D-sensor, e.g. may be a sensor configured to capture three-dimensional information from the field of view, e.g. three-dimensional information of objects present in the field of view.
  • the imaging device 200 may be or may be configured as a time-of-flight sensor, a depth-measuring device, a proximity sensor, and the like.
  • the imaging device 200 may include a light sensing portion 202 and an optical system 204 (illustrated with a dotted line in FIG.2A to allow visualizing the underlying light sensing portion 202).
  • the light sensing portion 202 and the optical system 204 may be configured to provide a segmentation of a field of view 220 of the imaging device 200 (see also FIG.2C) and to enable sensing light from different segments of the field of view 220.
  • The representation in FIG.2A and FIG.2B is simplified for the purpose of illustration, and the imaging device 200 may include additional components with respect to those shown, e.g. one or more amplifiers to amplify a signal representing the received light, one or more noise filters, one or more processors, and the like.
  • the field of view 220 may extend along three directions 252, 254, 256, referred to herein as field of view directions.
  • the first field of view direction 252 may be a first direction along which the field of view 220 has a first lateral extension (e.g., a width, or a first angular extension)
  • the second field of view direction 254 may be a second direction along which the field of view 220 has a second lateral extension (e.g., a height, or a second angular extension).
  • the first field of view direction 252 may be a horizontal direction
  • the second field of view direction 254 may be a vertical direction.
  • The designation of the first field of view direction 252 as horizontal and of the second field of view direction 254 as vertical may be arbitrary.
  • the first field of view direction 252 and the second field of view direction 254 may be orthogonal to one another, and may be orthogonal to a direction along which an optical axis of the optical system 204 is aligned (e.g., the optical axis may be aligned along a third field of view direction 256).
  • the third field of view direction 256 may illustratively represent a depth of the field of view 220.
  • the field of view 220 may be a field of view of the light sensing portion 202 as defined by the optical system 204.
  • the field of view 220 may be understood as the angular range within which the imaging device 200 may operate, e.g. the angular range within which the imaging device 200 may carry out an imaging operation.
  • the field of view 220 may be or correspond to a field of illumination of the imaging device 200, e.g. a field of illumination of a projector of the imaging device (see also FIG.5).
  • the field of view 220 may illustratively be a detection area of the imaging device 200.
  • the light sensing portion 202 may include a plurality of light sensing areas 206 configured to be sensitive for light.
  • a light sensing area 206 may be an active area of the light sensing portion 202 configured to convert light energy (illustratively, photons) of light impinging onto the light sensing area 206 into electrical energy (e.g., into a current, illustratively a photo current).
  • a light sensing area 206 may be an optoelectronic component with a spectral sensitivity in a predefined wavelength range, for example in the visible range (e.g., from about 380 nm to about 700 nm), infrared range and/or near infrared range (e.g., in the range from about 700 nm to about 5000 nm, for example in the range from about 860 nm to about 1600 nm), or ultraviolet range (e.g., from about 100 nm to about 400 nm).
  • the light sensing areas 206 may be configured to be sensitive for infrared light, in particular for near-infrared light.
  • a light sensing area 206 may be an image sensor, or may be part of an image sensor.
  • the plurality of light sensing areas 206 may be understood as a plurality of image sensors.
  • the geometry (e.g., the shape and lateral dimensions) of a light sensing area 206 may be adapted according to the system requirements, e.g. according to an overall dimension of the imaging device, according to fabrication constraints, etc.
  • the light sensing areas 206 are shown as having a rectangular shape, but it is understood that a light sensing area 206 may have any suitable shape (e.g., a square shape, or even an asymmetric shape).
  • a light sensing area 206 may include a plurality of pixels, e.g. the light sensing area 206 may include a two-dimensional array of pixels.
  • a number of pixels N_x, N_y in each direction, as well as a pixel pitch, may be adapted depending on the desired dimension of the light sensing areas 206.
  • a light sensing area 206 may have a reduced dimension with respect to a conventional configuration, e.g. a reduced lateral extension for at least one dimension, as discussed in further detail below (see FIG.2E).
  • the light sensing areas 206 may be integrated in a substrate, e.g. a printed circuit board.
  • the first dimension of a light sensing area 206 may be a dimension along a direction defined by a main dimension of the substrate, and the second dimension of a light sensing area 206 may be a dimension along a direction defined by a secondary dimension of the substrate.
  • the light sensing areas 206 may all be integrated on a common substrate, e.g. on a CMOS chip.
  • the light sensing areas 206 may all have a same area, e.g. may all have a same lateral extension in the x-dimension and y-dimension. In other aspects, the light sensing areas 206 may have different areas, and correspondingly different x-dimensions and/or y-dimensions, to provide a further degree of control over the geometrical arrangement of the imaging device 200. In an exemplary configuration, a light sensing area 206 may have a longer lateral extension along the first field of view direction 252 (e.g., a lateral dimension along the first field of view direction 252 greater than a lateral dimension along the second field of view direction 254). It is however understood that other configurations may also be provided.
  • the y-dimension of a light sensing area 206 may be in the range from 2 mm to 5 mm, for example 3 mm.
  • a light sensing area 206 may be configured according to Complementary-Metal-Oxide-Semiconductor (CMOS) technology, e.g. a light sensing area 206 may be a CMOS image sensor.
  • a light sensing area 206 may include a plurality of CMOS-pixels, each including a photodetector (e.g., a photodetector configured to be sensitive for infrared light) that accumulates an electrical charge based on the amount of light impinging onto the photodetector.
  • a light sensing area 206 may be configured according to Charge-Coupled Device (CCD) technology, e.g. a light sensing area 206 may be a CCD image sensor.
  • the light sensing area 206 may include a plurality of CCD-pixels with a photoactive region and a transmission region.
  • the light sensing portion 202 may be or include an image sensor, wherein a light sensing area of the image sensor is segmented into a plurality of light sensing areas 206 (also referred to as photo-sensitive areas, or photo-sensitive regions).
  • the light sensing portion 202 may be or include a camera (e.g., an infrared camera, in particular a near-infrared camera).
  • the imaging device 200 may include one or more processors configured to receive the signals generated by the light sensing areas 206.
  • the one or more processors may be configured to convert the electrical signals from the light sensing areas 206 into corresponding digital signals to allow further processing, as discussed in further detail below (see also FIG.4).
  • the light sensing areas 206 may be arranged along a first direction 210.
  • the splitting of the sensing areas 206 and their disposition one next to the other allows reducing the critical dimension of the imaging device 200 (illustratively, allows reducing at least one lateral dimension of the light sensing areas 206 compared to a conventional configuration), while maintaining a same resolution and SNR in view of the splitting of the field of view 220 discussed in further detail in relation to FIG.2C.
  • the light sensing areas 206 may form an array along the first direction 210, e.g. a one-dimensional array or a substantially one-dimensional array (see also FIG.3).
  • the light sensing areas 206 may be arranged to face the field of view 220 of the imaging device 200.
  • the light sensing areas 206 may be spaced apart from one another along the first direction 210, e.g., there may be a distance g between adjacent light sensing areas 206 (for example measured edge-to-edge).
  • the distance g may be a regular distance, e.g. each light sensing area 206 may be spaced by the distance g from the adjacent light sensing area 206.
  • the distance between light sensing areas 206 may vary, so that a first light sensing area may be spaced by a first distance from a second light sensing area, the second light sensing area may be spaced by a second distance from a third light sensing area, etc.
  • a varying distance may allow a flexible adaptation of the arrangement of the light sensing areas to requirements of the device, e.g. to fabrication constraints.
  • the optical system 204 may be configured to provide a splitting of the field of view 220 of the imaging device 200 into a plurality of partial fields of view 220-1, 220-2, each imaged via a corresponding light sensing area 206.
  • the optical system 204 may be configured to define a plurality of imaging channels 222, each corresponding to a partial field of view 220-1, 220-2 that covers a respective portion of a total field of view 220 of the imaging device 200.
  • the light sensing portion 202 may include two light sensing areas 206, so that the optical system 204 may define two imaging channels (e.g., a first imaging channel 222-1 corresponding to a first partial field of view 220-1, and a second imaging channel 222-2 corresponding to a second partial field of view 220-2).
  • the field of view 220 may also be referred to herein as total field of view 220 of the imaging device 200.
  • An imaging channel 222 may be understood as a light collection area (e.g., a light collection cone) from which the optical system 204 collects light to direct it onto a corresponding light sensing area 206.
  • each imaging channel 222 may be configured to direct light (e.g., infrared light, in particular near-infrared light) from the corresponding partial field of view towards a respective light sensing area 206.
  • the optical system 204 may be configured to image different portions of the field of view 220 (illustratively, different areas, or regions, of the field of view 220) onto different light sensing areas 206.
  • An imaging channel 222 may illustratively be an optical construct defined by the optical system 204 (e.g., by the optical components of the optical system 204, as discussed in relation to FIG.6A to FIG.6C) to guide light from a corresponding partial field of view onto one of the light sensing areas 206.
  • the optical system 204 may be mechanically coupled to a substrate on which the light sensing areas 206 are formed.
  • a sum of the areas of the partial fields of view 220-1, 220-2 may be less than or equal to a field of illumination of a projector of the imaging device 200 (described in further detail in FIG.5).
  • a full bounding box of the field of view of the imaging channels (also referred to herein as receiver channels R1 and R2) may be smaller than or equal to the field of illumination of the projector. This configuration may ensure an efficient utilization of the light sensing areas 206 to capture only light relevant for the operation of the imaging device 200, thus reducing the overall noise of a measurement.
  • the portions of the field of view 220 covered by the partial fields of view 220-1, 220-2 may be arranged along a second direction 212 at an angle with the first direction 210.
  • the first direction 210 may correspond to a horizontal direction 252 of the field of view 220 of the imaging device 200
  • the second direction 212 may correspond to a vertical direction 254 of the field of view 220 of the imaging device 200. It is however understood that other arrangements may be provided, as shown in FIG.3, as long as a splitting of the sensing area(s) and detection area along different directions is provided.
  • the splitting of the light sensing areas 206 along the horizontal direction 252 allows reducing the critical y-dimension, whereas the contextual splitting of the field of view 220 along the vertical direction 254 allows maintaining an overall resolution at a same level with respect to a conventional configuration with a single active area.
  • the optical system 204 may be configured to define partial fields of view 220-1, 220-2 having the same area, e.g. having the same lateral (and angular) extension in the first field of view direction 252 and second field of view direction 254.
  • the lateral (or angular) extension of a partial field of view 220-1, 220-2 along the second (e.g., vertical) field of view direction 254 may be less than the lateral (or angular) extension along the first (e.g., horizontal) field of view direction 252, e.g. a partial field of view 220-1, 220-2 may have a rectangular shape elongated in the horizontal direction. However, it is understood that other geometries may also be provided, e.g. a partial field of view 220-1, 220-2 may have a square shape, or a rectangular shape elongated in the vertical direction, as other examples.
  • the optical system 204 may be configured to define partial fields of view 220-1, 220-2 having different areas, e.g. having a different lateral (and angular) extension in the first field of view direction 252 and/or a different lateral (and angular) extension in the second field of view direction 254.
  • the first partial field of view 220-1 may be longer and/or higher than the second partial field of view 220-2, or vice versa.
  • This configuration may provide an additional degree of freedom in the arrangement of the light sensing areas 206, allowing the use of light sensing areas 206 with different dimensions (e.g., the area and/or lateral dimensions of the light sensing areas 206 may vary in a corresponding manner as the respective partial fields of view 220-1, 220-2).
  • This arrangement may allow disposing the light sensing areas 206, in combination with other components of the imaging device, in a more flexible and adaptable manner.
  • adjacent partial fields of view 220-1, 220-2 may have an overlapping region 224.
  • the first imaging channel 222-1 and the second imaging channel 222-2 may be configured such that the first partial field of view 220-1 and the second partial field of view 220-2 have partial overlap 224.
  • the partial overlap 224 (illustratively, the overlapping region) may be at a border, e.g. an interface, between the partial fields of view 220-1, 220-2.
  • the presence of an overlapping region 224 ensures a continuous coverage of the field of view 220, without leaving “blind” areas that could otherwise be caused by the splitting of the field of view 220.
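  • As an illustration of the field-of-view splitting described above, the sketch below computes the vertical angular range of each partial field of view for N channels with a small overlap; the function name, the equal-segment assumption, and the example values are illustrative and not taken from the patent:

```python
def split_vertical_fov(total_fov_deg: float, n_channels: int, overlap_frac: float = 0.05):
    """Split a total vertical field of view into n_channels partial fields of view.

    Each partial field of view covers total/n plus a small margin so that adjacent
    channels share an overlapping region and no 'blind' areas are left.
    Assumption: equal-sized segments with symmetric overlap (not mandated by the patent).
    """
    segment = total_fov_deg / n_channels          # nominal angular size per channel
    margin = overlap_frac * total_fov_deg / 2.0   # half of the overlap added at each shared border
    ranges = []
    for i in range(n_channels):
        lo = i * segment - (margin if i > 0 else 0.0)
        hi = (i + 1) * segment + (margin if i < n_channels - 1 else 0.0)
        # re-center around 0 degrees so the ranges are symmetric about the optical axis
        ranges.append((lo - total_fov_deg / 2.0, hi - total_fov_deg / 2.0))
    return ranges

# Example: a 60 degree vertical field of view split into two partial fields of view
# yields roughly (-30.0, +1.5) and (-1.5, +30.0) degrees, i.e. a 3 degree overlap.
print(split_vertical_fov(60.0, 2))
```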
  • FIG.2D shows various options for the segmentation of the plurality of light sensing areas 206a, 206b, 206c.
  • the plurality of light sensing areas 206 may include any suitable number N of light sensing areas, e.g. two, three, four, five, or more than five light sensing areas.
  • The number N of areas into which the light sensing portion 202 is split (and the corresponding number of partial fields of view into which the field of view 220 is split) may also define the overlapping regions for each light sensing area 206a, 206b, 206c and partial field of view.
  • a region 226a-1 of a first light sensing area 206a-1 and a region 226a-2 of a second light sensing area 206a-2 may receive light from the same part of the field of view 220, illustratively from the overlapping region 224 between the corresponding partial fields of view 220-1, 220-2.
  • At least one partial field of view may have two overlapping regions, illustratively one with the partial field of view disposed “above” along the second direction 212, and another one with the partial field of view disposed “below” along the second direction 212.
  • in a configuration in which the light sensing portion 202 includes three light sensing areas 206b, a region 226b-1 of a first light sensing area 206b-1 and a region 226b-2 of a second light sensing area 206b-2 may receive light from the same part of the field of view 220, and another region 226b-3 of the second light sensing area 206b-2 and a region 226b-4 of a third light sensing area 206b-3 may receive light from the same part of the field of view 220.
  • in a configuration in which the light sensing portion 202 includes four light sensing areas 206c, at least two of the light sensing areas 206c-2, 206c-3 may have two regions 226c-2, 226c-3, 226c-4, 226c-5 receiving light from the same parts of the field of view as regions 226c-1, 226c-6 of the other light sensing areas 206c-1, 206c-4, and so on for an increasing number of light sensing areas 206.
  • the advantages of the “split-configuration” are shown in FIG.2E with respect to a conventional configuration for a light sensing area 228.
  • the segmentation of the field of view 220 into N partial areas, and the corresponding segmentation of the light sensing area into N light sensing areas 206, allows reducing the lateral extension of the light sensing areas 206 along the y-dimension, e.g. the y-size, I_y', according to equation 4, where c represents the relative amount of the full image y-size taking into account an overlap between the partial fields of view 220-1, 220-2 (as shown with the overlapping region 224 in FIG.2C).
  • a difference between the y-size of the single light sensing area 228 and the y-size of the light sensing areas 206 in the split-configuration may be expressed as equation 5 (see the sketch below).
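  • Equations 4 and 5 are not reproduced in this text extract. A plausible reconstruction, consistent with the surrounding description (N segments and an overlap parameter c expressed as a fraction of the full image y-size I_y), is the following; the exact form in the original filing may differ:

```latex
% Reconstructed forms (assumption), consistent with the surrounding description:
I_y' = I_y \left( \frac{1}{N} + c \right)                           % (4) y-size of each segmented light sensing area
\Delta I_y = I_y - I_y' = I_y \left( 1 - \frac{1}{N} - c \right)    % (5) reduction of the critical dimension
```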
  • Equation 5 may thus represent the amount of reduction in the critical dimension of the imaging device 200 which may be achieved with respect to a conventional configuration while maintaining same resolution and SNR.
  • a footprint (MFP’) of the imaging device 200 may be reduced in the critical dimension with respect to the footprint of a conventional configuration.
  • the configuration described herein may provide a segmentation of the full (e.g., vertical) field of view by splitting the receiver into N (at least two) channels (R1 and R2), each having a smaller (vertical) y-size, in a way which is favorable to reduce the size of the module along the critical dimension (CD).
  • the segmentation of the full (e.g., vertical) field of view by splitting the receiver's active area of the image sensor into multiple pixel arrays leads to a reduction of the image y-size according to equation 4, accounting for an overlap of the partial fields of view of the individual channels (c).
  • the overlap (c) may be a small fraction of the original image y-size (I_y), a few percent, e.g. 0.05, in order to efficiently use the active area, e.g. the area of the CMOS image sensor.
  • the y-size reduction in the critical dimension (CD) according to the proposed segmentation into N parts is expressed by equation 5, and it is significant when assuming that the additional margins and mechanical adders (e) between the edge of the active area and that of the module are fixed and small compared to the image y-size (I_y).
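  • A short numeric sketch of this reduction, using the reconstructed forms of equations 4 and 5 above; the example values for I_y, the overlap c, and the mechanical adder e are assumptions chosen only for illustration:

```python
def split_y_size(i_y_mm: float, n: int, c: float) -> float:
    """Per-segment y-size I_y' for an active area split into n segments with overlap fraction c
    (reconstructed equation 4)."""
    return i_y_mm * (1.0 / n + c)

def module_critical_dimension(image_y_mm: float, e_mm: float) -> float:
    """Module critical dimension: image y-size plus a fixed mechanical adder e at the module edges."""
    return image_y_mm + e_mm

i_y = 3.0   # assumed full image y-size in mm (within the 2 mm to 5 mm range mentioned above)
e = 0.5     # assumed fixed mechanical margin in mm
print(f"single area: I_y = {i_y:.2f} mm, CD = {module_critical_dimension(i_y, e):.2f} mm")
for n in (2, 3, 4):
    i_y_split = split_y_size(i_y, n, c=0.05)
    delta = i_y - i_y_split   # reconstructed equation 5
    print(f"N = {n}: I_y' = {i_y_split:.2f} mm, "
          f"CD = {module_critical_dimension(i_y_split, e):.2f} mm, reduction = {delta:.2f} mm")
```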
  • FIG.3 shows possible configurations for the relative orientation between light sensing areas 206 and partial fields of view 220-1, 220-2 in a schematic representation, according to various aspects.
  • the first direction 210 and the second direction 212 may be orthogonal to one another, e.g. may be aligned with the horizontal field of view direction and with the vertical field of view direction, respectively.
  • This configuration may provide a compact and precise splitting, with a simple configuration for the optical system 204.
  • slight variations from this configuration 300a may be provided, in which the light sensing areas 206 remain substantially disposed along a horizontal direction and the partial fields of view 220-1, 220-2 remain substantially disposed along a vertical direction.
  • first direction 210 and the second direction 212 may be at an angle with respect to one another that is greater than 0° and less than 180°, for example an angle in the range from 70° to 110°, for example an angle in the range from 80° to 100°, for example an angle of 90°.
  • the alignment of the segments may be adapted according to a desired configuration of the imaging device, e.g. according to space constraints, or according to a relative arrangement with other components, as examples.
  • the first direction 210 may be rotated by a positive angle with respect to the horizontal field of view direction, so that the angle between the first direction 210 and the second direction 212 may be less than 90° by a corresponding amount, e.g. by 5°, 10°, or 15° as numerical examples.
  • the first direction 210 may be rotated by a negative angle with respect to the horizontal field of view direction, so that the angle between the first direction 210 and the second direction 212 may be more than 90° by a corresponding amount, e.g. by 5°, 10°, or 15° as numerical examples.
  • the second direction 212 may be rotated by a negative angle with respect to the vertical field of view direction, so that the angle between the first direction 210 and the second direction 212 may be less than 90° by a corresponding amount, e.g. by 5°, 10°, or 15° as numerical examples.
  • the second direction 212 may be rotated by a positive angle with respect to the vertical field of view direction, so that the angle between the first direction 210 and the second direction 212 may be more than 90° by a corresponding amount, e.g. by 5°, 10°, or 15° as numerical examples.
  • both the first direction 210 and the second direction 212 may be rotated with respect to the horizontal field of view direction and vertical field of view direction.
  • the first direction 210 may be rotated by a positive angle with respect to the horizontal field of view direction
  • the second direction 212 may be rotated by a negative angle with respect to the vertical field of view direction, as shown in a sixth configuration 300f.
  • the first direction 210 may be rotated by a negative angle with respect to the horizontal field of view direction
  • the second direction 212 may be rotated by a positive angle with respect to the vertical field of view direction, as shown in a seventh configuration 300g.
  • both the first direction 210 and the second direction 212 may be rotated by a positive angle, or both by a negative angle, with respect to the horizontal field of view direction and vertical field of view direction.
  • FIG.4 shows a processor 400 for use in an imaging device (e.g., in the imaging device 200) in a schematic representation, according to various aspects.
  • the imaging device 200 may include one or more processors 400 to carry out a processing of the sensing signals generated by the light sensing areas.
  • the processor 400 may be illustratively configured to stitch together the separately recorded segments 220-1, 220-2 to yield an image of the field of view 220 by means of post-processing using an image stitching procedure. Any suitable stitching procedure known in the art may be implemented for such purpose.
  • the processor 400 may be configured to receive light sensing data 404 from the light sensing portion of the imaging device (e.g., from the light sensing portion 202).
  • the light sensing data 404 may include or represent the light from each partial field of view 220-1, 220-2 sensed at a corresponding light sensing area.
  • the light sensing data 404 may represent, for each light sensing area, an intensity of the light received at the light sensing area (e.g., at each pixel) from the corresponding partial field of view.
  • the processor 400 may be configured to combine the light sensing data 404 from the plurality of partial fields of view to reconstruct three-dimensional information (or other types of information for different types of applications) about the field of view of the imaging device. For example, the processor 400 may be configured to reconstruct a three-dimensional image of the field of view or of a portion of the field of view. As another example, the processor 400 may be configured to generate a depth-map of the field of view or of a portion of the field of view. As a further example, the processor 400 may be configured to generate a cloud of data points representing the field of view or a portion of the field of view.
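  • A minimal sketch of the stitching step described above, assuming two vertically adjacent sub-images that share a known number of overlapping rows and using simple linear blending in the overlap; the function name and the blending choice are illustrative and not taken from the patent:

```python
import numpy as np

def stitch_vertical(img_top: np.ndarray, img_bottom: np.ndarray, overlap_rows: int) -> np.ndarray:
    """Stitch two vertically adjacent sub-images that share `overlap_rows` rows.

    The shared rows are blended with linearly varying weights so that the
    transition between the two imaging channels is seamless.
    """
    h_top = img_top.shape[0]
    # weights go from 1 (pure top image) to 0 (pure bottom image) across the overlap
    w = np.linspace(1.0, 0.0, overlap_rows)[:, None]
    blended = w * img_top[h_top - overlap_rows:] + (1.0 - w) * img_bottom[:overlap_rows]
    return np.vstack([img_top[:h_top - overlap_rows], blended, img_bottom[overlap_rows:]])

# Example: two 540x1920 sub-frames with a 54-row overlap give one 1026x1920 frame.
top = np.random.rand(540, 1920)
bottom = np.random.rand(540, 1920)
print(stitch_vertical(top, bottom, overlap_rows=54).shape)   # (1026, 1920)
```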
  • the processor 400 may be configured to carry out a face recognition process, e.g. a face-authentication process, using the light sensing data 404 received from the light sensing areas.
  • the field of view 220 may include a face 402, e.g. the face of a user of a smartphone for example. Different portions of the face 402 may belong to different partial fields of view 220-1, 220-2, so that the different portions of the face 402 may be imaged onto different light sensing areas.
  • a first portion of the face 402-1 may correspond to a first partial field of view 220-1, and a second portion of the face 402-2 may correspond to a second partial field of view 220-2, but it is understood that the aspects described in relation to FIG.4 may be extended in a corresponding manner to a configuration with N>2.
  • the processor 400 may be configured to reconstruct the face 402 from the portions of the face 402-1, 402-2 imaged by the segmented light sensing areas, and may be configured to compare the reconstructed face with predefined data (e.g., stored in a memory of the imaging device, or in a memory of the smartphone), e.g. with a predefined facial profile of known (authorized) users.
  • the processor 400 may be part of the imaging device, or may be communicatively coupled with the imaging device.
  • the processor 400, in some aspects, may be a processor of the mobile communication device receiving the light sensing data 404.
  • the processor 400 may be configured to determine a distortion of a predefined dot pattern and generate a three-dimensional map of a target (e.g., of the face 402) in the field of view of the imaging device according to the determined distortion.
  • the imaging device may be configured to emit/detect structured light, and derive three-dimensional information based on the distortion of the emitted pattern as received at the imaging device. The general principles of such a technique are known in the art, so that a detailed discussion is not included herein.
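  • The dot-pattern distortion mentioned above is commonly evaluated as a per-dot disparity d between the observed dot position and a stored reference position, which is converted to depth z by triangulation with the projector-camera baseline B and focal length f; this standard relation is included as a sketch and is not reproduced from the patent:

```latex
z = \frac{f\,B}{d}
```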
  • FIG.5 shows a projector 500 for use in an imaging device (e.g., in the imaging device 200) in a schematic representation, according to various aspects.
  • an imaging device (e.g., the imaging device 200) may include, in addition to the receiver side described in relation to FIG.2A to FIG.4, an emitter side for emitting light towards the field of view.
  • the imaging device may include one or more projectors 500 (see also FIG.6E), illustratively one or more light emitting circuits.
  • a projector may also be referred to herein as an illuminator module.
  • the projector 500 may be configured to emit light 502 (e.g., infrared light, in particular near-infrared light) towards the field of view of the imaging device.
  • the projector 500 may have a field of illumination, e.g. defined by emitter optics of the projector (not shown), and the field of illumination of the projector 500 may cover the field of view of the imaging device.
  • the projector 500 (or, in other aspects, the plurality of projectors) may be configured to emit light 502 to fully illuminate the field of view of the imaging device (illustratively, with a single emission, without a sequential scanning of the field of view).
  • the projector 500 may include a light source 504 configured to emit light, and a controller 506 configured to control the light emission by the light source 504.
  • the light source 504 may be configured to emit light having a predefined wavelength, for example in the infrared and/or near-infrared range, such as in the range from about 700 nm to about 5000 nm, for example in the range from about 860 nm to about 1600 nm.
  • the light source 504 may include an optoelectronic light source (e.g., a laser source).
  • the light source 504 may include one or more light emitting diodes.
  • the light source may include one or more laser diodes.
  • the light source 504 may include an array of emitter pixels, e.g. a one-dimensional or two-dimensional array of emitter pixels (e.g., an array of light emitting diodes or laser diodes).
  • the controller 506 may be configured to provide a control signal to the light source 504 to prompt (e.g., to trigger, or to start) an emission of light by the light source 504.
  • the controller 506 may be configured to control the light source to emit light towards the field of view of the imaging device according to a predefined dot pattern.
  • the dot pattern may be achieved by suitably controlling the light source 504 and/or by suitably configuring emitter optics of the projector 500, as generally known in the art for structured light.
  • the projector 500 may be configured to project a predefined pattern of dots on the field of view of the imaging device (e.g., on an object, or on the face of a user), which may be used for object- and/or face-recognition.
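  • As an illustration of a predefined dot pattern, the sketch below generates a reproducible pseudo-random binary mask of emitter pixels; the layout, fill factor, and function name are generic assumptions, not the pattern disclosed in the patent:

```python
import numpy as np

def make_dot_pattern(rows: int, cols: int, fill: float = 0.03, seed: int = 42) -> np.ndarray:
    """Generate a reproducible pseudo-random binary dot pattern.

    `fill` is the fraction of emitter pixels switched on; a fixed seed makes the
    pattern 'predefined', i.e. known in advance to the receiver-side processing.
    """
    rng = np.random.default_rng(seed)
    return (rng.random((rows, cols)) < fill).astype(np.uint8)

pattern = make_dot_pattern(480, 640)
print(int(pattern.sum()), "dots out of", pattern.size, "emitter pixels")
```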
  • FIG.6A, FIG.6B, FIG.6C, FIG.6D, and FIG.6E show possible relative arrangements of one or more light sensing areas 602 with respect to one or more projectors 604 for integration in an imaging device (e.g., in the imaging device 200).
  • FIG.6A to FIG.6E illustrate possible dispositions of the light sensing areas 206 discussed in relation to FIG.2A to FIG.2E with respect to one or more projectors configured as the projector 500 discussed in relation to FIG.5.
  • A preferred configuration of an imaging device 600a is illustrated in FIG.6A in a top view.
  • the projector 604 may be disposed between two light sensing areas 602.
  • a distance between the projector 604 and the first light sensing area 602 may be equal to the distance between the projector 604 and the second light sensing area 602.
  • the imaging device may have a symmetric arrangement of the light sensing areas 602 at two sides of the projector 604 along the first direction 210.
  • the symmetric arrangement provides an equal baseline distance for both light sensing areas 602 (C1, C2) when receiving the light emitted by the projector (and reflected back towards the imaging device). This ensures equal parallax values for both detections, thus reducing or preventing distortions in the measurements.
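  • The equal-parallax argument can be made explicit with the standard triangulation relation (generic symbols, not reproduced from the patent): for a target at distance z, each receiver channel i observes the projected pattern with a disparity d_i proportional to its baseline B_i to the projector, so equal baselines yield equal parallax in both channels:

```latex
d_i = \frac{f\,B_i}{z}, \qquad B_1 = B_2 \;\Rightarrow\; d_1 = d_2
```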
  • the configuration of the imaging device 600a may thus be understood as having the light source (illustratively, the patterned illuminator) (L) centered in between two separate image sensors 610 (CIS1, CIS2) including the light sensing areas 602, for example on a common board.
  • This modular system setup also provides a relatively reduced x-size of the imaging device.
  • the symmetric arrangement is also illustrated with respect to the imaging channels 606 and projecting channel 608 (illustratively, an optical channel defined by the emitter optics of the projector).
  • the receiver channels 606 (R1, R2) may be disposed at two sides of the projecting channel 608 (P).
  • FIG.6B shows another possible configuration of an imaging device 600b.
  • the projector 604 is disposed spaced apart from the light sensing areas 602 (illustratively, from the image sensors 610).
  • the projector 604 may be disposed at a distance from the light sensing areas 602 along the first direction 210.
  • the projector 604 may be disposed outside the array formed by the light sensing areas 602, e.g. at the left-hand side of the array or at the right-hand side of the array.
  • This configuration may provide a closer spacing between the light sensing areas 602 (illustratively, a small gap, g) which may be beneficial for the stitching of the partial fields of view.
  • Separating the projector 604 from the image sensors 610 also simplifies the heat management of the imaging device 600b and allows an easier system integration.
  • the imaging device 600c may be configured in a similar manner as the imaging device 600a, with a projector 604 disposed between two light sensing areas 602, but in the configuration of the imaging device 600c, the projector and the light sensing areas may be integrated on a common substrate 612.
  • the light source (e.g., the patterned illuminator) (L) may be integrated on a common substrate (e.g., a common opto-electronic chip) (CIS) together with the active areas 602 (C1, C2) in a hybrid manner.
  • the imaging device 600d may be configured in a similar manner as the imaging device 600b, with a projector 604 disposed spaced apart from the light sensing areas 602, but in the configuration of the imaging device 600d, the light sensing areas 602 may be integrated on a common substrate 612.
  • the separate active areas 602 of the CIS (C1, C2) may be integrated on a common substrate (CIS) in a linear array along the x-dimension, and the light source (L) may be placed on one side (either left or right in x-dimension) of the image sensor(s).
  • FIG.6A to FIG.6D have been described for an exemplary scenario with one projector 604, but they may be correspondingly extended to a configuration in which the imaging device includes a plurality of projectors, e.g. a configuration in which the projector 604 includes (illustratively, is divided into) a plurality of projectors.
  • FIG.6E shows possible configurations of imaging devices 600e-1, 600e-2, 600e-3 including a plurality of projectors 604.
  • an imaging device 600e-1 may include two projectors 604-1, 604-2 both spaced apart from the light sensing areas 602-1, 602-2 (C1, C2), e.g. one projector 604-1 (L1) may be spaced apart from the light sensing areas 602-1, 602-2 at one side of the array formed by the light sensing areas 602-1, 602-2 and another projector 604-2 (L2) may be spaced apart from the light sensing areas 602 at the opposite side of the array.
  • an imaging device 600e-2 may include two projectors 604-1, 604-2 each disposed between two respective light sensing areas 602.
  • a first projector 604-1 (L1) may be disposed between a first light sensing area 602-1 (C1) and a second light sensing area 602-2 (C2), and a second projector 604-2 (L2) may be disposed between the second light sensing area 602-2 (C2) and a third light sensing area 602-3 (C3).
  • the configuration of the imaging device 600e-2 may be extended for increasing number of light sensing areas 602 and projectors 604, as shown for the imaging device 600e-3, which includes three projectors 604-1, 604-2, 604-3 (L1, L2, L3) distributed across an array with four light sensing areas 602-1, 602-2, 602-3, 602-4 (C1, C2, C3, C4).
  • FIG.6E shows different options for placing the light source (e.g., the patterned illuminator) (L) with respect to the different active areas (Ci) for different numbers of segments of the field of view (N).
  • This may also contribute to the reduction of module y-size by allowing a reduction of the y-size of the projector(s).
  • an imaging device may thus include a number N (greater than or equal to 2) of light sensing areas and a number N-1 of projectors, wherein each projector is disposed between two of the light sensing areas.
  • an imaging device may include a number N (greater than or equal to 2) of light sensing areas, and one or two projectors disposed outside the array of light sensing areas.
  • a mixed configuration may be provided, in which an imaging device may include a number N (greater than or equal to 2) of light sensing areas, and at least one projector disposed outside the array of light sensing areas, and at least one further projector disposed between two of the light sensing areas.
  • FIG.7A, FIG.7B, and FIG.7C each shows an optical system 700a, 700b, 700c for use in an imaging device in a schematic representation, according to various aspects.
  • the optical systems 700a, 700b, 700c may be exemplary configurations of the optical system 204 of the imaging device 200.
  • the optical systems 700a, 700b, 700c are illustrated in a side view 702a, 702b, 702c, and in a top view 704a, 704b, 704c.
  • there may be various options to achieve the segmentation of the field of view of the imaging device, e.g. various options for angular steering and for providing selection elements for selecting partial fields of view.
  • the optical systems 700a, 700b, 700c are based, respectively, on off-axis/shifted refractive lenses, off-axis meta-surface lenses, and structured gratings (e.g., diffraction gratings or meta-surface gratings). These configurations have been found to provide an efficient implementation of the segmentation strategy, and may be readily integrated in the fabrication flow of the imaging device. Furthermore, these strategies may be free of mechanically moving parts (e.g., free of oscillating microelectromechanical systems, MEMS, mirrors). However, in principle, also other configurations of the optical system may be provided, for example using planar fold-mirrors configured to define a different tilt angle for each individual channel, or using prisms (e.g., folded or upright). It is also understood that the various possible configurations for the optical system may be combined with one another.
  • optical components described in the following in relation to the optical systems 700a, 700b, 700c may be manufactured with techniques known in the art, for example by using high-precision optical technologies for mass production (MP) such as injection-molded optics (IMO), wafer-level optics (WLO), glass molded optics (GMO), grinding and polishing, nanoimprint lithography (NIL) or deep ultraviolet (DUV) lithography for diffractive/meta-surface optics.
  • the optical elements of the imaging channels may be formed on a common carrier substrate (e.g., by WLO, NIL- and/or DUV-lithographic technologies).
  • a combination of meta-surface optics and WLO may be provided on diced pieces of wafers (for lateral, channel-to-channel alignment and thermal stability).
  • the optical system 700a may include for at least one imaging channel (e.g., for each imaging channel) a lens element (e.g., a convex lens).
  • the optical system 700a may include a first lens element 706-1 corresponding to a first light sensing area 708-1 (and accordingly to a first imaging channel), and a second lens element 706-2 corresponding to a second light sensing area 708-2 (and accordingly to a second imaging channel).
  • Each lens element 706-1, 706-2 may be configured to receive (e.g., collect) light from the (total) field of view of the imaging device and direct the received (e.g., collected) light towards the respective light sensing area 708-1, 708-2.
  • a lens element 706-1, 706-2 may be designed for infrared light (e.g., for near-infrared light).
  • the optical system 700a may further include, for at least one (e.g., each) imaging channel, an aperture stop disposed at a decentered position with respect to a symmetry center of a surface profile of the respective lens element.
  • the optical system 700a may include a first aperture stop 710-1 disposed off-center with respect to the first lens element 706-1, and a second aperture stop 710-2 disposed off-center with respect to the second lens element 706-2.
  • the aperture stop 710-1, 710-2 may be disposed at a decentered position with respect to a geometric center of the (total) field of view of the imaging device, so that the aperture stop 710-1, 710-2 may define for the respective imaging channel an optical axis tilted with respect to the third field of view direction 256 (illustratively, the direction orthogonal to a plane defined by the first field of view direction 252 and the second field of view direction 254, e.g. a plane defined by the first direction 210 and the second direction 212).
  • the aperture stop 710-1, 710-2 may be disposed at a decentered position with respect to a geometric center of the respective light sensing area 708-1, 708-2.
  • the off-center position of the aperture stop 710-1, 710-2 may provide for directing (e.g., focusing) onto a light sensing area 708-1, 708-2 the rays (e.g., infrared rays) coming from a respective “tilted position” in the field of view, illustratively a respective partial field of view at a certain vertical coordinate.
  • the segmentation of the field of view may thus be achieved using an in-plane deflection of the rays which would otherwise converge to the center of the respective active area 708-1, 708-2 of the image sensor(s).
  • the decentration of the aperture stop 710-1, 710-2 (and with it the symmetry center of the lens surface(s) profile) with respect to the geometric center of the image (in the x-y-plane) ensures that the direction from which each light sensing area 708-1, 708-2 receives the corresponding light is tilted.
  • the relative decentration may be implemented in such a way that the N imaging channels have significantly tilted (and hence non-parallel) optical axes, which are intended to result in the vertical segmentation of the full field of view, enabling the reduction of the image y-size according to Equation 4 above (e.g., accounting for an overlap of the fields of view of the individual channels).
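  • As an illustrative aside (not part of the present disclosure), the following Python sketch shows the geometric relationship implied above under a simple paraxial (thin-lens) assumption in which the tilt of a channel's optical axis relates to the in-plane decenter of its aperture stop roughly as tan(tilt) ≈ decenter / focal length; the field of view, channel count, focal length, and overlap values used are assumptions chosen only for illustration.

```python
import math

def channel_tilts_and_decenters(total_vfov_deg, n_channels, focal_length_mm, overlap_deg=0.0):
    """Estimate, for each imaging channel, the tilt of its optical axis and the
    in-plane decenter of the aperture stop that would produce it, under a simple
    paraxial (thin-lens) model: tan(tilt) ~ decenter / focal_length.
    All numerical values are illustrative assumptions, not taken from the disclosure."""
    slice_deg = total_vfov_deg / n_channels + overlap_deg  # per-channel vertical coverage
    channels = []
    for i in range(n_channels):
        # Center of the i-th vertical slice, measured from the center of the total FoV.
        tilt_deg = -total_vfov_deg / 2 + (i + 0.5) * (total_vfov_deg / n_channels)
        decenter_mm = focal_length_mm * math.tan(math.radians(tilt_deg))
        channels.append((tilt_deg, slice_deg, decenter_mm))
    return channels

# Assumed example: 60 deg vertical FoV, 2 channels, 4 mm focal length, 3 deg overlap.
for tilt_deg, slice_deg, dec_mm in channel_tilts_and_decenters(60.0, 2, 4.0, overlap_deg=3.0):
    print(f"channel tilt {tilt_deg:+5.1f} deg, covers {slice_deg:.1f} deg, aperture decenter {dec_mm:+.2f} mm")
```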
  • the lens element for at least one (e.g., for each) imaging channel may be or include a meta-surface optical element 712-1, 712-2, e.g. a meta-surface lens.
  • the meta-surface optical element 712-1, 712-2 may include an optical meta-surface 714-1, 714-2, e.g. a surface-type metamaterial.
  • the meta-surface optical element 712-1, 712-2 (e.g., the optical meta-surface 714-1, 714-2) may be patterned to direct the incoming light (e.g., infrared light) onto the respective light sensing area 708-1, 708-2.
  • the pattern of the optical meta-surface 714-1, 714-2 may be a sub-wavelength pattern, e.g. considering as wavelength the wavelength of the light of interest, e.g. infrared light, in particular near-infrared light.
  • the meta-surface optical element 712-1, 712-2 may include a nano-structured surface configured (e.g., patterned, or structured) to direct the incoming light onto the respective light sensing area 708-1, 708-2.
  • the meta-surface optical element 712-1, 712-2 may include any suitable material for forming the structures of the meta-surface optical element 712-1, 712-2.
  • the meta-surface optical element 712-1, 712-2 may include or may consist of amorphous silicon (aSi) on silicon dioxide (SiO2).
  • Other examples of materials suitable for the nano-structures may include titanium dioxide (TiO2) or gallium nitride (GaN).
  • the segmentation of the field of view may be implemented via a combination of optical elements.
  • the optical system 700c may include, for at least one (e.g., for each) imaging channel a lens element 716-1, 716-2 and a structured optical element 718-1, 718-2.
  • the structured optical element 718-1, 718-2 may be configured to receive (e.g., collect) light from the (total) field of view of the imaging device and may be configured to cause a deflection of the received light along the second field of view direction 254 (e.g., along the second direction 212).
  • the structured optical element 718-1, 718-2 may be designed for infrared light (e.g., near-infrared light).
  • the lens element 716-1, 716-2 may receive the deflected light from the optical element and direct the deflected light towards a respective light sensing area 708-1, 708-2.
  • the lens element 716-1, 716-2 may be configured to cause the deflected light to converge towards the respective light sensing area 708-1, 708-2.
  • the segmentation of the field of view may be achieved using an in-plane deflection by a micro/nano-structured optical element 718-1, 718-2 deflecting the rays along one (e.g., vertical) dimension which leads them to converge to a shifted position of the respective active area 708-1, 708-2 of the image sensor(s).
  • the deflecting optical element 718-1, 718-2 may be a structured grating (e.g., a meta-surface grating or diffraction grating).
  • the lens element 716-1, 716-2 may be configured to cause the deflected light to converge towards a position on the respective light sensing area 708-1, 708-2 decentered along the first direction 210 with respect to a geometric center of the light sensing area 708-1, 708-2.
  • the optical system 700c may include an aperture stop 710-1, 710-2 centered with respect to the structured optical element 718-1, 718-2 (and centered with respect to the lens element 716-1, 716-2), and the deflection may be achieved via the structuring of the deflecting optical element 718-1, 718-2.
  • the structured optical element 718-1, 718-2 and the lens element 716-1, 716-2 may be aligned with respect to a common axis along the third field of view direction 256, e.g. respective geometric centers of the elements may be aligned along the third field of view direction 256.
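  • As a hedged illustration of the in-plane deflection that a structured (diffraction or meta-surface) grating may provide — the disclosure does not specify a wavelength, diffraction order, or grating geometry, so the 940 nm near-infrared wavelength, the first order, and the deflection angles below are assumptions — the grating equation for normal incidence, sin(θ) = m·λ/Λ, can be used to estimate the grating period Λ required for a given deflection angle θ:

```python
import math

def grating_period_for_deflection(wavelength_nm, deflection_deg, order=1):
    """Grating equation for normal incidence: sin(theta) = m * lambda / period.
    Returns the period (nm) needed to deflect the given wavelength by
    'deflection_deg' into diffraction order 'order'."""
    return order * wavelength_nm / math.sin(math.radians(deflection_deg))

# Assumed example: 940 nm near-infrared light, per-channel deflections of 7.5 and 15 degrees.
for angle_deg in (7.5, 15.0):
    period_nm = grating_period_for_deflection(940.0, angle_deg)
    print(f"deflection {angle_deg:4.1f} deg -> grating period ~ {period_nm:.0f} nm")
```

  • The sketch only shows that modest deflection angles correspond to grating periods of a few micrometers or below, which is compatible with the lithographic fabrication routes (NIL, DUV) mentioned above; the actual design values would depend on the chosen wavelength and channel layout.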
  • Example 1 is an imaging device including: a camera, wherein the camera includes a plurality of light sensing areas configured to be sensitive for infrared light, wherein the light sensing areas are arranged along a first direction; and an optical system configured to define a plurality of imaging channels, wherein each imaging channel corresponds to a partial field of view covering a respective portion of a total field of view of the imaging device, wherein the portions of the total field of view covered by the partial fields of view are arranged along a second direction at an angle with the first direction, and wherein each imaging channel is configured to direct infrared light from the corresponding partial field of view towards a respective light sensing area of the plurality of light sensing areas.
  • Example 2 the imaging device according to example 1 may optionally further include that the angle is greater than 0° and less than 180°, for example greater than 0° and less than 90°.
  • Example 3 the imaging device according to example 1 or 2 may optionally further include that the first direction and the second direction are orthogonal to one another.
  • Example 4 the imaging device according to any one of examples 1 to 3 may optionally further include that the first direction corresponds to a horizontal direction of the total field of view of the imaging device, and that the second direction corresponds to a vertical direction of the total field of view of the imaging device.
  • Example 5 the imaging device according to any one of examples 1 to 4 may optionally further include that the light sensing areas are arranged to face the total field of view of the imaging device.
  • Example 6 the imaging device according to any one of examples 1 to 5 may optionally further include that at least one sensing area of the plurality of light sensing areas has a first lateral dimension along the first direction and a second lateral dimension along the second direction, and that the first lateral dimension is greater than the second lateral dimension.
  • Example 7 the imaging device according to example 6 may optionally further include that the second lateral dimension is in the range from 2 mm to 5 mm, for example the second lateral dimension may be equal to or less than 3 mm.
  • Example 8 the imaging device according to any one of examples 1 to 7 may optionally further include that the light sensing areas are spaced apart from one another along the first direction.
  • Example 9 the imaging device may optionally further include that the plurality of imaging channels include at least a first imaging channel corresponding to a first light sensing area and a second imaging channel corresponding to a second light sensing area, and that the first imaging channel and the second imaging channel are configured such that a first partial field of view corresponding to the first imaging channel and a second partial field of view corresponding to the second imaging channel have a partial overlap at a border between the first partial field of view and the second partial field of view.
  • Example 10 the imaging device may optionally further include that at least one light sensing area is or includes a Complementary-Metal-Oxide-Semiconductor (CMOS) sensor.
  • Example 11 the imaging device may optionally further include a processor, wherein the processor is configured to: receive light sensing data from the camera, wherein the light sensing data include infrared light from each of the partial fields of view sensed at the corresponding light sensing area; and combine the light sensing data from the plurality of partial fields of view to reconstruct three-dimensional information of the total field of view of the imaging device.
  • Example 12 the imaging device according to example 11 may optionally further include that the processor is further configured to carry out a face-authentication process using the light sensing data received from the camera.
  • Example 13 the imaging device according to example 12 may optionally further include that to carry out the face-authentication process, the processor is configured to determine a distortion of a predefined dot pattern and generate a three-dimensional map of a target in the total field of view of the imaging device according to the determined distortion.
  • Example 14 the imaging device according to any one of examples 1 to 13 may optionally further include that at least one light sensing area is configured to be sensitive for light having a wavelength in the near-infrared range.
  • Example 15 the imaging device according to any one of examples 1 to 14 may optionally further include that the light sensing areas are integrated on a common substrate.
  • Example 16 the imaging device according to example 15 may optionally further include that the common substrate is or includes a CMOS chip.
  • Example 17 the imaging device according to any one of examples 1 to 16 may optionally further include a projector configured to emit infrared light towards the total field of view of the imaging device.
  • Example 18 the imaging device according to example 17 may optionally further include that the projector is configured to emit infrared light to fully illuminate the total field of view of the imaging device.
  • Example 19 the imaging device according to example 17 or 18 may optionally further include that the projector includes a light source configured to emit infrared light, and a controller configured to control the light source to emit infrared light towards the total field of view of the imaging device according to a predefined dot pattern.
  • Example 20 the imaging device according to example 19 may optionally further include that the light source is or includes a vertical-cavity surface-emitting laser (VCSEL).
  • Example 21 the imaging device according to any one of examples 17 to 20 may optionally further include that the projector is disposed between two light sensing areas of the plurality of light sensing areas.
  • Example 22 the imaging device according to any one of examples 17 to 20 may optionally further include that the projector is disposed at a distance from the light sensing areas along the first direction.
  • Example 23 the imaging device according to any one of examples 17 to 22 may optionally further include that the projector and the light sensing areas are integrated on a common substrate.
  • Example 24 the imaging device according to any one of examples 17 to 23 may optionally further include that the projector includes a plurality of projectors, each projector being configured to emit infrared light towards a partial field of illumination covering a respective portion of the total field of view of the imaging device.
  • Example 25 the imaging device may optionally further include that the optical system includes, for at least one imaging channel: a lens element configured to receive infrared light from the total field of view of the imaging device and direct the received infrared light towards the respective light sensing area; and an aperture stop disposed at a decentered position with respect to a symmetry center of a surface profile of the lens element.
  • Example 26 the imaging device according to example 25 may optionally further include that the aperture stop is disposed at a decentered position with respect to a geometric center of the total field of view of the imaging device, in such a way that the aperture stop defines for the imaging channel an optical axis tilted with respect to a third direction, and that the third direction is perpendicular to a plane defined by the first direction and the second direction.
  • Example 27 the imaging device according to example 25 or 26 may optionally further include that the aperture stop is disposed at a decentered position with respect to a geometric center of the respective light sensing area.
  • Example 28 the imaging device according to any one of examples 25 to 27 may optionally further include that the lens element is or includes a meta-surface optical element.
  • Example 29 the imaging device may optionally further include that the optical system includes, for at least one imaging channel: a lens element and a structured optical element, wherein the structured optical element is configured to receive infrared light from the total field of view of the imaging device and cause a deflection of the received infrared light along the second field of view direction, and wherein the lens element is configured to receive the deflected infrared light from the optical element and cause the deflected infrared light to converge towards the respective light sensing area.
  • Example 30 the imaging device according to example 29 may optionally further include that the lens element is configured to cause the deflected infrared light to converge towards a position on the respective light sensing area decentered along the first direction with respect to a geometric center of the light sensing area.
  • Example 31 the imaging device according to example 29 or 30 may optionally further include that the lens element and the structured optical element are aligned with respect to a common axis along a third direction, and that the third direction is perpendicular to a plane defined by the first direction and the second direction.
  • Example 32 the imaging device according to any one of examples 29 to 31 may optionally further include that the optical element is or includes a structured grating, e.g. a structured meta-surface grating or a structured diffraction grating.
  • Example 33 the imaging device according to any one of examples 29 to 32 may optionally further include that the optical system is mechanically coupled to a substrate on which the light sensing areas are formed.
  • Example 34 is an imaging device including: a camera, wherein the camera includes a plurality of light sensing areas, wherein the light sensing areas are arranged along a first direction; and one or more meta-surface optical elements, wherein each meta-surface optical element is configured to direct light from a corresponding partial field of view covering a respective portion of a total field of view of the imaging device towards a respective light sensing area of the plurality of light sensing areas, wherein the portions of the total field of view covered by the partial fields of view are arranged along a second direction at an angle with the first direction.
  • Example 35 the imaging device of example 34 may include one or more features of any one of examples 1 to 33.
  • Example 36 is a mobile communication device including a 3D-sensor, the 3D-sensor including: a camera, wherein the camera includes a plurality of light sensing areas configured to be sensitive for infrared light, wherein the light sensing areas are arranged along a first direction; and an optical system configured to define a plurality of imaging channels, wherein each imaging channel corresponds to a partial field of view covering a respective portion of a total field of view of the imaging device, wherein the portions of the total field of view covered by the partial fields of view are arranged along a second direction at an angle with the first direction, and wherein each imaging channel is configured to direct infrared light from the corresponding partial field of view towards a respective light sensing area of the plurality of light sensing areas.
  • Example 37 the mobile communication device according to example 36 may optionally further include that the 3D-sensor is arranged such that the light sensing areas of the camera face a user-side of the mobile communication device.
  • processor as used herein may be understood as any kind of technological entity that allows handling of data. The data may be handled according to one or more specific functions that the processor may execute. Further, a processor as used herein may be understood as any kind of circuit, e.g., any kind of analog or digital circuit. A processor may thus be or include an analog circuit, digital circuit, mixed-signal circuit, logic circuit (e.g., a hard-wired logic circuit or a programmable logic circuit), microprocessor, Central Processing Unit (CPU), Graphics Processing Unit (GPU), Digital Signal Processor (DSP), Field Programmable Gate Array (FPGA), integrated circuit, Application Specific Integrated Circuit (ASIC), etc., or any combination thereof. It is understood that any two (or more) of the processors detailed herein may be realized as a single entity with equivalent functionality or the like, and conversely that any single processor detailed herein may be realized as two (or more) separate entities with equivalent functionality or the like.
  • any phrase explicitly invoking the aforementioned words (e.g. “a plurality of [objects]”, “multiple [objects]”) to refer to a quantity of objects is intended to expressly refer to more than one of the said objects.
  • the phrase “a plurality” may be understood to include a numerical quantity greater than or equal to two (e.g., two, three, four, five, etc.).
  • group refers to a quantity equal to or greater than one, i.e. one or more.
  • the phrases “at least one” and “one or more” may be understood to include a numerical quantity greater than or equal to one (e.g., one, two, three, four, etc.).
  • the phrase “at least one of” with regard to a group of elements may be used herein to mean at least one element from the group consisting of the elements.
  • the phrase “at least one of” with regard to a group of elements may be used herein to mean a selection of: one of the listed elements, a plurality of one of the listed elements, a plurality of individual listed elements, or a plurality of a multiple of individual listed elements.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present disclosure relates to an imaging device including: a camera, wherein the camera comprises a plurality of light sensing areas configured to be sensitive for infrared light, wherein the light sensing areas are arranged along a first direction; and an optical system configured to define a plurality of imaging channels, wherein each imaging channel corresponds to a partial field of view covering a respective portion of a total field of view of the imaging device, wherein the portions of the total field of view covered by the partial fields of view are arranged along a second direction at an angle with the first direction, and wherein each imaging channel is configured to direct infrared light from the corresponding partial field of view towards a respective light sensing area of the plurality of light sensing areas.

Description

MINIATURIZED 3D-SENSING CAMERA SYSTEM
Technical Field
[0001] The present disclosure relates generally to an imaging device adapted to have a reduced footprint, and to a mobile communication device including the adapted imaging device.
Background
[0002] In general, devices capable of capturing three-dimensional (3D) information within a scene are of great importance for a variety of application scenarios, both in industrial as well as in home settings. A prominent example is the use of 3D-sensors for face recognition and authentication in modern smartphones, in which a 3D-camera system is used to collect three-dimensional data representing the face of a user to allow or deny access to the functionalities of the smartphone. Another application example may include factory automation for Industry 5.0, in which 3D-sensors may be implemented to assist the movement of autonomous robots or the execution of an automated task. Other examples may include authentication systems for electronic payments, augmented reality (AR), virtual reality (VR), internet-of-things (IoT) environments, and the like. Improvements in 3D-sensors may thus be of particular relevance for the further advancement of several technologies.
Brief Description of the Drawings
[0003] In the drawings, like reference characters generally refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention. In the following description, various aspects of the invention are described with reference to the following drawings, in which:
FIG.1A shows a 3D-sensor in a schematic representation, according to various aspects;
FIG.1B shows a relationship between a field of view of the 3D-sensor and the geometry of an active area of the 3D-sensor, according to various aspects;
FIG.2A shows a top view of an imaging device in a schematic representation, according to various aspects;
FIG.2B shows a side view of the imaging device in a schematic representation, according to various aspects;
FIG.2C shows a splitting of a field of view of the imaging device in a schematic representation, according to various aspects;
FIG.2D shows possible arrangements of light sensing areas of the light sensing portion of the imaging device in a schematic representation, according to various aspects;
FIG.2E shows a comparison between a single light sensing area and a split-configuration for the light sensing areas in a schematic representation, according to various aspects;
FIG.3 shows possible configurations for the relative orientation between light sensing areas and partial fields of view in a schematic representation, according to various aspects;
FIG.4 shows a processor for use in an imaging device in a schematic representation, according to various aspects;
FIG.5 shows a projector for use in an imaging device in a schematic representation, according to various aspects;
FIG.6A, FIG.6B, FIG.6C, FIG.6D each shows a relative arrangement of a projector and light sensing areas of an imaging device in a schematic representation, according to various aspects;
FIG.6E shows possible configurations of projectors in relation to light sensing areas of an imaging device in a schematic representation, according to various aspects; and
FIG.7A, FIG.7B, and FIG.7C each shows an optical system for use in an imaging device in a schematic representation, according to various aspects.
Description
[0004] The following detailed description refers to the accompanying drawings that show, by way of illustration, specific details and aspects in which the invention may be practiced. These aspects are described in sufficient detail to enable those skilled in the art to practice the invention. Other aspects may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the invention. The various aspects are not necessarily mutually exclusive, as some aspects may be combined with one or more other aspects to form new aspects.
[0005] Various strategies exist to implement active 3D-sensing, such as via structured light, active stereo vision, or time-of-flight systems. In general, each of these techniques allows generating or reconstructing three-dimensional information about a scene, e.g., as a three-dimensional image, a depth-map, or a three-dimensional point cloud. For example, 3D-sensing allows determining information about objects present in the scene, such as their position in the three-dimensional space, their shape, their orientation, and the like. Exemplary applications of active 3D-sensing include their use in automotive, e.g., to assist autonomous driving, and in portable devices (e.g., smartphones, tablets, and the like) to implement various functionalities such as face or object recognition, autofocusing, gaming activities, etc.
[0006] With the advancements of new generations of portable devices, there is a constant demand for a miniaturization of their mechanical, optical, and electrical components. In particular, in a smartphone various components are disposed in a notch that is visible on the user-side of the smartphone. Such components may include an ambient light sensor, a proximity sensor, a flood illuminator, a 3D-sensor (e.g., including a projector and a camera), a microphone, a front camera, and the like. The notch takes away space for the screen of the smartphone, so that it may be desirable to reduce its area as much as possible. However, the miniaturization of the overall dimension of the notch may be limited by the minimum dimensions of the components, in particular by the dimensions of the 3D-sensor, as discussed in further detail below.
[0007] FIG.1A shows a 3D-sensor 100 in a schematic view, according to various aspects. The 3D-sensor 100 may be configured to carry out 3D-sensing via structured light, and may include a projector module (P) 102 and a camera module 104 (e.g., a Near-Infrared camera module). The 3D-sensor 100 may have a conventional configuration, and is shown to illustrate some limitations of such arrangement.
[0008] The camera module 104 may include a receiver module (R) 108 (e.g., one or more optical components to collect light from the scene), and an image sensor active area (C) 106, illustratively a light sensing area 106, whose dimensions may be limiting for the overall size of the 3D-sensor 100. Illustratively, the lateral dimension of the active area 106 may be limiting for the module footprint (MFP) on a substrate, e.g. on a printed circuit board. In particular, considering the integration in the user-side of a smartphone, the limiting dimension may be the vertical dimension of the active area 106 as seen from the front of the smartphone, illustratively, the y-size of the active area 106, Iy. The y-size of the active area 106 may thus be or define a critical dimension (CD) of the 3D-sensor 100, which may be the limiting factor for the miniaturization of the 3D-sensor 100.
[0009] Various possibilities may exist for reducing the critical dimension (CD), but they suffer from limitations in terms of image quality, resolution, or signal-to-noise ratio (SNR), as discussed in relation to FIG.1B.
[0010] FIG.1B shows a relationship between the field of view of the 3D-sensor 100 and the geometry of the active area 106 in a schematic view, according to various aspects.
[0011] The angular size of the field of view (FoV) may be expressed in terms of the field of view angle 110, α, and may be defined by application requirements. The field of view angle 110 may be dependent on the focal length 112, f, and on the image size 114, r_i, on the active area 106. The image size 114 may be expressed as a radius, r_i, and may be given by the number of pixels 116 of the active area 106, e.g. the number of pixels Nx, Ny in the x-direction and y-direction, as well as by the pixel pitch ppx (illustratively, a pixel-to-pixel distance, which may be equal along the x-direction and y-direction, as an example).
[0012] A relationship between the field of view angle 110, α, the focal length 112, f, and the image size 114, r_i, may be expressed via the following equation,

$\alpha = 2 \arctan\left( \frac{r_i}{f} \right)$  (Equation 1)
[0013] The angular sampling requirement, φ, then defines the effective focal length as described by the following equation,

$f = \frac{p_{px}}{\tan(\varphi)}$  (Equation 2)
[0014] The radius, r_i, may be expressed in terms of x-size, Ix, and y-size, Iy, of the active area 106, as $I_x^2 + I_y^2 = (2 r_i)^2$, where $I_x = N_x \cdot p_{px}$ and $I_y = N_y \cdot p_{px}$. Considering for example a squared active area 106, then Ix = Iy. Thus, the y-size of the active area 106, which then defines the critical dimension (CD), may be related to the radius according to the following equation in this scenario,

$I_y = \sqrt{2} \, r_i$  (Equation 3)
[0015] The critical dimension (CD) of the 3D-sensor 100 may thus be reduced, for example, by reducing the image radius, r_i, according to equation 3, which however causes a loss of image resolution. A further option would be to reduce the pixel pitch and/or the pixel size, which however leads to a loss in signal-to-noise ratio (illustratively, a reduced SNR, and thus a noisier detection). Yet another option would be to increase the angular sampling, φ, which however causes a loss of angular resolution.
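For illustration only, the following sketch evaluates equations 1 to 3 for an assumed set of parameters (the field of view angle, pixel pitch, and angular sampling values below are not taken from the disclosure) to show how these quantities set the image radius and hence the critical y-size of the active area:

```python
import math

def effective_focal_length(pixel_pitch_um, angular_sampling_deg):
    """Equation 2: f = p_px / tan(phi), returned in mm."""
    return pixel_pitch_um * 1e-3 / math.tan(math.radians(angular_sampling_deg))

def image_radius(focal_length_mm, fov_deg):
    """Equation 1 rearranged: r_i = f * tan(alpha / 2)."""
    return focal_length_mm * math.tan(math.radians(fov_deg / 2.0))

# Assumed example parameters (not from the disclosure).
fov_deg = 78.0                 # field of view angle alpha
pixel_pitch_um = 2.0           # pixel pitch p_px
angular_sampling_deg = 0.065   # angular sampling requirement phi per pixel

f_mm = effective_focal_length(pixel_pitch_um, angular_sampling_deg)
r_i_mm = image_radius(f_mm, fov_deg)
i_y_mm = math.sqrt(2.0) * r_i_mm  # Equation 3 for a square active area (Ix = Iy)

print(f"effective focal length f ~ {f_mm:.2f} mm")
print(f"image radius r_i         ~ {r_i_mm:.2f} mm")
print(f"critical y-size Iy       ~ {i_y_mm:.2f} mm")
```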
[0016] The present disclosure may thus be based on the realization that a reduction in the critical dimension of a 3D-sensor may be of great importance for its integration in miniaturized devices, and may be based on a strategy that allows reducing such critical dimension without having to compromise other performance parameters such as the resolution or the SNR. The approach described herein may be based on splitting the active area of a camera module of a 3D-sensor and contextually the imaged field of view into a plurality of portions. Such sub-division of the active area allows an additional degree of freedom in the geometrical arrangement of the camera, by enabling a disposition of the active areas in a manner that allows reducing the critical dimension while maintaining a same resolution (e.g., image resolution and angular resolution) and SNR. Illustratively, the present disclosure is related to a 3D-sensor with a segmentation of the active area (and a corresponding segmentation of the field of view), which allows a y-size reduction.
[0017] The most relevant use case for a 3D-sensor as described herein may be for integration in a mobile communication device, e.g. a smartphone, a tablet, a laptop, and the like. When implemented at the user-side of a smartphone, for example, a 3D-sensor as described herein may allow reducing the y-dimension of a notch at the user-side, thus increasing the available screen area. It is however understood that the applications of the 3D-sensor are not limited to the context of mobile communication devices, but the 3D-sensor may in general be applied in any device or system in which the overall reduced dimension of the 3D-sensor may be beneficial for the arrangement.
[0018] It is also understood that the configuration with a segmentation of imaging areas and field of view is not limited to 3D-sensing applications, but may in general be applied to any imaging device which may benefit from the reduced critical dimension of the sensing area(s). Stated in a different fashion, 3D-sensing may be the most relevant use case for an imaging device configured as described herein, but such configuration may in general be applied also for other applications, such as distance measurements, two-dimensional imaging, etc.
[0019] According to various aspects, an imaging device may include: a camera, wherein the camera includes a plurality of light sensing areas, wherein the light sensing areas are arranged along a first direction; and an optical system configured to define a plurality of imaging channels, wherein each imaging channel corresponds to a partial field of view covering a respective portion of a total field of view of the imaging device, wherein the portions of the total field of view covered by the partial fields of view are arranged along a second direction at an angle with the first direction, and wherein each imaging channel is configured to direct light from the corresponding partial field of view towards a respective light sensing area of the plurality of light sensing areas.
[0020] In a preferred configuration, the imaging device may be configured for imaging of infrared light (e.g., near-infrared light). In this scenario, the light sensing areas may be configured to be sensitive for infrared light, and the optical system may be designed for light with wavelength in the infrared range. Infrared light may provide imaging in real-world scenarios with less influence from ambient light (e.g., from sunlight, or other sources of visible light in a scene, such as the headlights of a car, a street lamp, etc.), and may be the preferred choice for 3D-sensing applications such as for face recognition and authentication.
[0021] As will be discussed in further detail in relation to FIG.7A to FIG.7C, there may be various options for the optical components that allow a segmentation of the field of view of the imaging device. In a preferred configuration the segmentation is achieved by means of one or more meta-surface optical elements, e.g. one or more meta-surface lenses. A lens including a meta-surface allows a flexible control of the properties of light passing through the meta-surface, and thus provides a convenient and easily scalable manner of implementing the segmentation of the field of view.
[0022] According to various aspects, an imaging device may include: a camera, wherein the camera includes a plurality of light sensing areas, wherein the light sensing areas are arranged along a first direction; and one or more meta-surface optical elements, wherein each metasurface optical element is configured to direct light from a corresponding partial field of view covering a respective portion of a total field of view of the imaging device towards one or more of the plurality of light sensing areas, wherein the portions of the total field of view covered by the partial fields of view are arranged along a second direction at an angle with the first direction.
[0023] It is however understood that the optical system may alternatively include other types of optical and/or mechanical components to manipulate the incoming light and achieve the segmentation of the field of view, as discussed in further detail below.
[0024] In the following, various aspects refer to specific orientations and/or relative orientations of various entities (e.g., light sensing areas, lateral dimensions of a light sensing area, partial fields of view, etc.). In general, the orientations described in the present disclosure refer to the scenario in which a light sensing area is arranged to face the field of view of the corresponding imaging device. In this exemplary scenario, an x-dimension (or x-size) of the light sensing area may be the lateral dimension along the horizontal field of view direction, and a y-dimension (or y-size) of the light sensing area may be the lateral dimension along the vertical field of view direction. It is however understood that the definition of horizontal or vertical orientation, as well as the definition of x-dimension or y-dimension, may be arbitrary.
[0025] FIG.2A and FIG.2B show a top view and a side view of an imaging device 200, respectively, in a schematic representation according to various aspects. In various aspects, the imaging device 200 may be or may be configured as a 3D-sensor, e.g. may be a sensor configured to capture three-dimensional information from the field of view, e.g. three-dimensional information of objects present in the field of view. As other examples, the imaging device 200 may be or may be configured as a time-of-flight sensor, a depth-measuring device, a proximity sensor, and the like.
[0026] In general, the imaging device 200 may include a light sensing portion 202 and an optical system 204 (illustrated with a dotted line in FIG.2A to allow visualizing the underlying light sensing portion 202). The light sensing portion 202 and the optical system 204 may be configured to provide a segmentation of a field of view 220 of the imaging device 200 (see also FIG.2C) and to enable sensing light from different segments of the field of view 220. It is understood that the representation in FIG.2A and FIG.2B is simplified for the purpose of illustration, and that the imaging device 200 may include additional components with respect to those shown, e.g. one or more amplifiers to amplify a signal representing the received light, one or more noise filters, one or more processors, and the like.
[0027] The field of view 220 may extend along three directions 252, 254, 256, referred to herein as field of view directions. Illustratively, the first field of view direction 252 may be a first direction along which the field of view 220 has a first lateral extension (e.g., a width, or a first angular extension), and the second field of view direction 254 may be a second direction along which the field of view 220 has a second lateral extension (e.g., a height, or a second angular extension). In some aspects, the first field of view direction 252 may be a horizontal direction, and the second field of view direction 254 may be a vertical direction. It is however understood that the definition of first field of view direction 252 and second field of view direction 254 may be arbitrary. The first field of view direction 252 and the second field of view direction 254 may be orthogonal to one another, and may be orthogonal to a direction along which an optical axis of the optical system 204 is aligned (e.g., the optical axis may be aligned along a third field of view direction 256). The third field of view direction 256 may illustratively represent a depth of the field of view 220.
[0028] The field of view 220 may be a field of view of the light sensing portion 202 as defined by the optical system 204. In general, the field of view 220 may be understood as the angular range within which the imaging device 200 may operate, e.g. the angular range within which the imaging device 200 may carry out an imaging operation. In some aspects, the field of view 220 may be or correspond to a field of illumination of the imaging device 200, e.g. a field of illumination of a projector of the imaging device (see also FIG.5A). The field of view 220 may illustratively be a detection area of the imaging device 200.
[0029] The light sensing portion 202 may include a plurality of light sensing areas 206 configured to be sensitive for light. A light sensing area 206 may be an active area of the light sensing portion 202 configured to convert light energy (illustratively, photons) of light impinging onto the light sensing area 206 into electrical energy (e.g., into a current, illustratively a photo current). Illustratively, a light sensing area 206 may be an optoelectronic component with a spectral sensitivity in a predefined wavelength range, for example in the visible range (e.g., from about 380 nm to about 700 nm), infrared range and/or near infrared range (e.g., in the range from about 700 nm to about 5000 nm, for example in the range from about 860 nm to about 1600 nm), or ultraviolet range (e.g., from about 100 nm to about 400 nm). In a preferred configuration, the light sensing areas 206 may be configured to be sensitive for infrared light, in particular for near-infrared light. A light sensing area 206 may be an image sensor, or may be part of an image sensor. Illustratively, the plurality of light sensing areas 206 may be understood as a plurality of image sensors.
[0030] The geometry (e.g., the shape and lateral dimensions) of a light sensing area 206 may be adapted according to the system requirements, e.g. according to an overall dimension of the imaging device, according to fabrication constraints, etc. In the exemplary configuration in FIG.2A and FIG.2B, the light sensing areas 206 are shown as having a rectangular shape, but it is understood that a light sensing area 206 may have any suitable shape (e.g., a square shape, or even asymmetric shapes). In general, a light sensing area 206 may include a plurality of pixels, e.g. a first plurality of pixels Nx defining a first dimension (e.g., a dimension along the first field of view direction 252), and a second plurality of pixels Ny defining a second dimension (e.g., a dimension along the second field of view direction 254). In various aspects, the light sensing area 206 may include a two-dimensional array of pixels.
[0031] A number of pixels Nx, Ny in each direction, as well as a pixel pitch (e.g., a pixel-to-pixel distance in the first and/or second direction) may be adapted depending on the desired dimension of the light sensing areas 206. According to the strategy described herein, a light sensing area 206 may have a reduced dimension with respect to a conventional configuration, e.g. a reduced lateral extension for at least one dimension, as discussed in further detail below (see FIG.2E). As an example, the light sensing areas 206 may be integrated in a substrate, e.g. a printed circuit board. In this configuration, the first dimension of a light sensing area 206 may be a dimension along a direction defined by a main dimension of the substrate, and the second dimension of a light sensing area 206 may be a dimension along a direction defined by a secondary dimension of the substrate. According to various aspects, the light sensing areas 206 may all be integrated on a common substrate, e.g. on a CMOS chip.
[0032] According to various aspects, the light sensing areas 206 may all have a same area, e.g. may all have a same lateral extension in the x-dimension and y-dimension. In other aspects, the light sensing areas 206 may have different areas, and correspondingly different x-dimensions and/or y-dimensions, to provide a further degree of control over the geometrical arrangement of the imaging device 200. In an exemplary configuration, a light sensing area 206 may have a longer lateral extension along the first field of view direction 252 (e.g., a lateral dimension along the first field of view direction 252 greater than a lateral dimension along the second field of view direction 254). It is however understood that also other configurations may be provided, e.g. with a same lateral extension along the first and second field of view directions 252, 254, or with a longer lateral extension along the second field of view direction 254. Only as a numerical example, the y-dimension of a light sensing area 206 may be in the range from 2 mm to 5 mm, for example 3 mm.
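As a minimal arithmetic sketch (the pixel counts and pixel pitch below are assumed, illustrative values, not taken from the disclosure), the lateral dimensions of a light sensing area follow directly from the pixel counts and the pixel pitch:

```python
# Assumed example: 1600 x 1200 pixels at 2.0 um pitch (illustrative values only).
n_x, n_y = 1600, 1200
pixel_pitch_um = 2.0

i_x_mm = n_x * pixel_pitch_um / 1000.0  # Ix = Nx * p_px
i_y_mm = n_y * pixel_pitch_um / 1000.0  # Iy = Ny * p_px
print(f"Ix = {i_x_mm:.2f} mm, Iy = {i_y_mm:.2f} mm")  # -> Ix = 3.20 mm, Iy = 2.40 mm
```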
[0033] As an exemplary configuration, a light sensing area 206 may be configured according to Complementary-Metal-Oxide-Semiconductor (CMOS) technology, e.g. a light sensing area 206 may be a CMOS image sensor. In this configuration, a light sensing area 206 may include a plurality of CMOS-pixels, each including a photodetector (e.g., a photodetector configured to be sensitive for infrared light) that accumulates an electrical charge based on the amount of light impinging onto the photodetector. As another exemplary configuration, a light sensing area 206 may be configured according to Charge-Coupled Device (CCD) technology, e.g. a light sensing area 206 may be a CCD image sensor. In this configuration, the light sensing area 206 may include a plurality of CCD-pixels with a photoactive region and a transmission region.

[0034] In general, the light sensing portion 202 may be or include an image sensor, wherein a light sensing area of the image sensor is segmented into a plurality of light sensing areas 206 (also referred to as photo-sensitive areas, or photo-sensitive regions). According to various aspects, the light sensing portion 202 may be or include a camera (e.g., an infrared camera, in particular a near-infrared camera).
[0035] In various aspects, the imaging device 200 (e.g., as part of the light sensing portion 202) may include one or more processors configured to receive the signals generated by the light sensing areas 206. The one or more processors may be configured to convert the electrical signals from the light sensing areas 206 into corresponding digital signals to allow further processing, as discussed in further detail below (see also FIG.4).
[0036] The light sensing areas 206 may be arranged along a first direction 210. The splitting of the sensing areas 206 and their disposition one next to the other allows reducing the critical dimension of the imaging device 200 (illustratively, allows reducing at least one lateral dimension of the light sensing areas 206 compared to a conventional configuration), while maintaining a same resolution and SNR in view of the splitting of the field of view 220 discussed in further detail in relation to FIG.2C. In various aspects, the light sensing areas 206 may form an array along the first direction 210, e.g. a one-dimensional array or a substantially one-dimensional array (see also FIG.3). In various aspects, the light sensing areas 206 may be arranged to face the field of view 220 of the imaging device 200.
[0037] In general, the light sensing areas 206 may be spaced apart from one another along the first direction 210, e.g., there may be a distance g between adjacent light sensing areas 206 (for example measured edge-to-edge). In an exemplary configuration, the distance g may be a regular distance, e.g. each light sensing area 206 may be spaced by the distance g from the adjacent light sensing area 206. In another exemplary configuration, however, the distance between light sensing areas 206 may vary, so that a first light sensing area may be spaced by a first distance from a second light sensing area, the second light sensing area may be spaced by a second distance from a third light sensing area, etc. A varying distance may allow a flexible adaptation of the arrangement of the light sensing areas to requirements of the device, e.g. to fabrication constraints.
[0038] The optical system 204 may be configured to provide a splitting of the field of view 220 of the imaging device 200 into a plurality of partial fields of view 220-1, 220-2, each imaged via a corresponding light sensing area 206. Illustratively, the optical system 204 may be configured to define a plurality of imaging channels 222, each corresponding to a partial field of view 220-1, 220-2 that covers a respective portion of a total field of view 220 of the imaging device 200. In the exemplary scenario illustrated in FIG.2A to FIG.2C, the light sensing portion 202 may include two light sensing areas 206, so that the optical system 204 may define two imaging channels (e.g., a first imaging channel 222-1 corresponding to a first partial field of view 220-1, and a second imaging channel 222-2 corresponding to a second partial field of view 220-2). Considering the splitting, the field of view 220 may also be referred to herein as total field of view 220 of the imaging device 200.
[0039] An imaging channel 222 may be understood as a light collection area (e.g., a light collection cone) from which the optical system 204 collects light to direct it onto a corresponding light sensing area 206. Illustratively, each imaging channel 222 may be configured to direct light (e.g., infrared light, in particular near-infrared light) from the corresponding partial field of view towards a respective light sensing area 206. Stated in a different fashion, the optical system 204 may be configured to image different portions of the field of view 220 (illustratively, different areas, or regions, of the field of view 220) onto different light sensing areas 206. An imaging channel 222 may illustratively be an optical construct defined by the optical system 204 (e.g., by the optical components of the optical system 204, as discussed in relation to FIG.7A to FIG.7C) to guide light from a corresponding partial field of view onto one of the light sensing areas 206. According to various aspects, to improve the stability of the arrangement (e.g., its resistance to mechanical shocks), the optical system 204 may be mechanically coupled to a substrate on which the light sensing areas 206 are formed.
[0040] According to various aspects, a sum of the areas of the partial fields of view 220-1, 220-2 may be less than or equal to a field of illumination of a projector of the imaging device 200 (described in further detail in FIG.5). Illustratively, a full bounding box of the field of view of the imaging channels (also referred to herein as receiver channels R1 and R2) may be smaller than or equal to the field of illumination of the projector. This configuration may ensure an efficient utilization of the light sensing areas 206 to capture only light relevant for the operation of the imaging device 200, and may reduce the overall noise of a measurement.
[0041] The portions of the field of view 220 covered by the partial fields of view 220-1, 220-2 may be arranged along a second direction 212 at an angle with the first direction 210. In a preferred configuration, which may be the most relevant use case for the arrangement described herein, the first direction 210 may correspond to a horizontal direction 252 of the field of view 220 of the imaging device 200, and the second direction 212 may correspond to a vertical direction 254 of the field of view 220 of the imaging device 200. It is however understood that other arrangements may be provided, as shown in FIG.3, as long as a splitting of the sensing area(s) and detection area along different directions is provided. Illustratively, the splitting of the light sensing areas 206 along the horizontal direction 252 allows reducing the critical y-dimension, whereas the contextual splitting of the field of view 220 along the vertical direction 254 allows maintaining an overall resolution at a same level with respect to a conventional configuration with a single active area.
[0042] According to various aspects, the optical system 204 may be configured to define partial fields of view 220-1, 220-2 having the same area, e.g. having the same lateral (and angular) extension in the first field of view direction 252 and second field of view direction 254. In general, according to the splitting, the lateral (or angular) extension of a partial field of view 220-1, 220-2 along the second (e.g., vertical) field of view direction 254 may be less than the lateral (or angular) extension of a partial field of view 220-1, 220-2 along the first (e.g., horizontal) field of view direction 252, e.g. a partial field of view 220-1, 220-2 may have a rectangular shape elongated in the horizontal direction. However, it is understood that also other geometries may be provided, e.g. a partial field of view 220-1, 220-2 may have a square shape, or a rectangular shape elongated in the vertical direction, as other examples.
[0043] It is also understood that, in other aspects, the optical system 204 may be configured to define partial fields of view 220-1, 220-2 having different areas, e.g. having a different lateral (and angular) extension in the first field of view direction 252 and/or a different lateral (and angular) extension in the second field of view direction 254. For example, the first partial field of view 220-1 may be longer and/or higher than the second partial field of view 220-2, or vice versa. This configuration may provide an additional degree of freedom in the arrangement of the light sensing areas 206, allowing the use of light sensing areas 206 with different dimensions (e.g., the area and/or lateral dimensions of the light sensing areas 206 may vary correspondingly with the respective partial fields of view 220-1, 220-2). This arrangement may allow disposing the light sensing areas 206 in combination with other components of the imaging device in a more flexible and adaptable manner.
[0044] According to various aspects, adjacent partial fields of view 220-1, 220-2 may have an overlapping region 224. Illustratively, in the configuration in FIG.2A to FIG.2C, the first imaging channel 222-1 and the second imaging channel 222-2 may be configured such that the first partial field of view 220-1 and the second partial field of view 220-2 have partial overlap 224. The partial overlap 224 (illustratively, the overlapping region) may be at a border, e.g. an interface, between the partial fields of view 220-1, 220-2. The presence of an overlapping region 224 ensures a continuous coverage of the field of view 220, without leaving “blind” areas that could otherwise be caused by the splitting of the field of view 220.

[0045] FIG.2D shows various options for the segmentation of the plurality of light sensing areas 206a, 206b, 206c. Illustratively, the plurality of light sensing areas 206 may include any suitable number N of light sensing areas, e.g. two, three, four, five, or more than five light sensing areas. As shown in FIG.2D, in various aspects the number of areas in which the light sensing portion 202 is split (and the corresponding number of partial fields of view in which the field of view 220 is split) may also define the overlapping regions for each light sensing area 206a, 206b, 206c and partial field of view.
[0046] In the case that the light sensing portion 202 includes two light sensing areas 206a, a region 226a-1 of a first light sensing area 206a-1 and a region 226a-2 of a second light sensing area 206a-2 may receive light from the same part of the field of view 220, illustratively from the overlapping region 224 between the corresponding partial fields of view 220-1, 220-2.
[0047] In case the light sensing portion 202 includes more than two light sensing areas 206, at least one partial field of view (or more than one, as shown for the configuration 206c) may have two overlapping regions, illustratively one with the partial field of view disposed “above” along the second direction 212, and another one with the partial field of view disposed “below” along the second direction 212.
[0048] For example, in the case that the light sensing portion 202 includes three light sensing areas 206b, a region 226b-1 of a first light sensing area 206b-1 and a region 226b-2 of a second light sensing area 206b-2 may receive light from the same part of the field of view 220, and further another region 226b-3 of the second light sensing area 206b-2 and a region 226b-4 of a third light sensing area 206b-3 may receive light from the same part of the field of view 220.

[0049] In a corresponding manner, in the case that the light sensing portion 202 includes four light sensing areas 206c, at least two of the light sensing areas 206c-2, 206c-3 may have two regions 226c-2, 226c-3, 226c-4, 226c-5 receiving light from the same parts of the field of view as regions 226c-1, 226c-6 of other light sensing areas 206c-1, 206c-4, and so on for an increasing number of light sensing areas 206.

[0050] The advantages of the “split-configuration” are shown in FIG.2E with respect to a conventional configuration for a light sensing area 228. In an exemplary scenario, the light sensing areas 206 and the light sensing area 228 may have a same x-dimension, e.g. a same x-size, Ix' = Ix. The segmentation of the field of view 220 into N partial areas, and the corresponding segmentation of the light sensing area into N light sensing areas 206, allows reducing the lateral extension of the light sensing areas 206 along the y-dimension, e.g. the y-size, Iy', according to the following equation,
Iy' = Iy · (1/N + c)    (equation 4)
where c represents the relative amount of the full image y-size taking into account an overlap between the partial fields of view 220-1, 220-2 (as shown with the overlapping region 224 in FIG.2C). In the exemplary configuration with N=2, equation 4 may give,
Iy' = Iy · (1/2 + c)
[0051] A difference between the y-size of the single light sensing area 228 and the y-size of the light sensing areas 206 in the split-configuration may be expressed as
ΔIy = Iy − Iy' = Iy · ((N − 1)/N − c)    (equation 5)
[0052] Equation 5 may thus represent the amount of reduction in the critical dimension of the imaging device 200 which may be achieved with respect to a conventional configuration while maintaining the same resolution and SNR. Illustratively, a footprint (MFP') of the imaging device 200 may be reduced in the critical dimension with respect to the footprint of a conventional configuration.
[0053] By way of illustration, the configuration described herein may provide a segmentation of the full (e.g., vertical) field of view by splitting the receiver into N (at least two) channels (R1 and R2), each having a smaller (vertical) y-size, in a way which is favorable for reducing the size of the module along the critical dimension (CD). The segmentation of the full (e.g., vertical) field of view by splitting the receiver's active area of the image sensor into multiple pixel arrays leads to a reduction of the image y-size according to equation 4, accounting for an overlap of the partial fields of view of the individual channels (c). The overlap (c) may be a small fraction of the original image y-size (Iy) in order to efficiently use the active area (~a few percent, e.g. 0.05), e.g. the area of the CMOS image sensor. The y-size reduction in the critical dimension (CD) according to the proposed segmentation into N parts is expressed by equation 5, and it is significant when assuming that the additional margins and mechanical adders (e) between the edge of the active area and that of the module are fixed and small compared to the image y-size (Iy).
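As a purely numerical illustration, and assuming that equation 4 has the reconstructed form Iy' = Iy · (1/N + c), the following sketch evaluates equations 4 and 5 for assumed values of Iy and c (not values taken from the disclosure):

```python
def split_y_size(iy: float, n: int, c: float) -> float:
    """Per-channel image y-size after splitting into n channels (equation 4)."""
    return iy * (1.0 / n + c)

def y_size_reduction(iy: float, n: int, c: float) -> float:
    """Reduction of the critical y-dimension versus a single channel (equation 5)."""
    return iy - split_y_size(iy, n, c)   # = iy * ((n - 1) / n - c)

iy = 4.0   # assumed full image y-size in mm
c = 0.05   # assumed relative overlap (a few percent of Iy)
for n in (2, 3):
    print(n, round(split_y_size(iy, n, c), 2), round(y_size_reduction(iy, n, c), 2))
# n=2: Iy' = 2.2 mm, reduction 1.8 mm; n=3: Iy' ~ 1.53 mm, reduction ~ 2.47 mm
```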
[0054] FIG.3 shows possible configurations for the relative orientation between light sensing areas 206 and partial fields of view 220-1, 220-2 in a schematic representation, according to various aspects.
[0055] As mentioned in relation to FIG.2A to FIG.2C, in a preferred configuration 300a, the first direction 210 and the second direction 212 may be orthogonal to one another, e.g. may be aligned with the horizontal field of view direction and with the vertical field of view direction, respectively. This configuration may provide a compact and precise splitting, with a simple configuration for the optical system 204. However, slight variations from this configuration 300a may also be provided, in which the light sensing areas 206 remain substantially disposed along a horizontal direction and the partial fields of view 220-1, 220-2 remain substantially disposed along a vertical direction. In general, the first direction 210 and the second direction 212 may be at an angle with respect to one another that is greater than 0° and less than 180°, for example an angle in the range from 70° to 110°, for example an angle in the range from 80° to 100°, for example an angle of 90°. The alignment of the segments may be adapted according to a desired configuration of the imaging device, e.g. according to space constraints, or according to a relative arrangement with other components, as examples.
[0056] For example, in a second configuration 300b, the first direction 210 may be rotated by a positive angle with respect to the horizontal field of view direction, so that the angle between the first direction 210 and the second direction 212 may be less than 90° by a corresponding amount, e.g. by 5°, 10°, or 15° as numerical examples. As another example, in a third configuration 300c, the first direction 210 may be rotated by a negative angle with respect to the horizontal field of view direction, so that the angle between the first direction 210 and the second direction 212 may be more than 90° by a corresponding amount, e.g. by 5°, 10°, or 15° as numerical examples.
[0057] As a further example, in a fourth configuration 300d, the second direction 212 may be rotated by a negative angle with respect to the vertical field of view direction, so that the angle between the first direction 210 and the second direction 212 may be less than 90° by a corresponding amount, e.g. by 5°, 10°, or 15° as numerical examples. As a further example, in a fifth configuration 300e, the second direction 212 may be rotated by a positive angle with respect to the vertical field of view direction, so that the angle between the first direction 210 and the second direction 212 may be more than 90° by a corresponding amount, e.g. by 5°, 10°, or 15° as numerical examples.
[0058] As further examples, both the first direction 210 and the second direction 212 may be rotated with respect to the horizontal field of view direction and vertical field of view direction. For example, the first direction 210 may be rotated by a positive angle with respect to the horizontal field of view direction, and the second direction 212 may be rotated by a negative angle with respect to the vertical field of view direction, as shown in a sixth configuration 300f. As another example, the first direction 210 may be rotated by a negative angle with respect to the horizontal field of view direction, and the second direction 212 may be rotated by a positive angle with respect to the vertical field of view direction, as shown in a seventh configuration 300g. As further examples, not shown, both the first direction 210 and the second direction 212 may be rotated by a positive angle, or both by a negative angle, with respect to the horizontal field of view direction and vertical field of view direction.

[0059] FIG.4 shows a processor 400 for use in an imaging device (e.g., in the imaging device 200) in a schematic representation, according to various aspects. As mentioned in relation to FIG.2A, the imaging device 200 may include one or more processors 400 to carry out a processing of the sensing signals generated by the light sensing areas.
[0060] Considering the split-configuration discussed herein, the processor 400 may be illustratively configured to stitch together the separately recorded segments 220-1, 220-2 to yield an image of the field of view 220 by means of post-processing using an image stitching procedure. Any suitable stitching procedure known in the art may be implemented for such purpose.
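A minimal sketch of such a stitching step is given below, assuming two already-rectified sub-images whose last and first rows cover the same overlapping region; the array names and the linear blend are illustrative assumptions and not the specific procedure used by the device.

```python
import numpy as np

def stitch_vertical(top: np.ndarray, bottom: np.ndarray, overlap_rows: int) -> np.ndarray:
    """Stitch two sub-images along the vertical direction.

    The last `overlap_rows` rows of `top` and the first `overlap_rows`
    rows of `bottom` image the same overlapping region and are blended
    with a linear ramp; the remaining rows are simply concatenated.
    """
    w = np.linspace(0.0, 1.0, overlap_rows)[:, None]          # blend weights per row
    blended = (1.0 - w) * top[-overlap_rows:] + w * bottom[:overlap_rows]
    return np.vstack([top[:-overlap_rows], blended, bottom[overlap_rows:]])

# Example: two assumed 240x640 sub-images with a 24-row overlap -> 456x640 image.
img = stitch_vertical(np.zeros((240, 640)), np.ones((240, 640)), 24)
print(img.shape)   # (456, 640)
```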
[0061] Illustratively, the processor 400 may be configured to receive light sensing data 404 from the light sensing portion of the imaging device (e.g., from the light sensing portion 202). The light sensing data 404 may include or represent the light from each partial field of view 220-1, 220-2 sensed at a corresponding light sensing area. For example, the light sensing data 404 may represent, for each light sensing area, an intensity of the light received at the light sensing area (e.g., at each pixel) from the corresponding partial field of view.
[0062] The processor 400 may be configured to combine the light sensing data 404 from the plurality of partial fields of view to reconstruct three-dimensional information (or other types of information for different types of applications) about the field of view of the imaging device. For example, the processor 400 may be configured to reconstruct a three-dimensional image of the field of view or of a portion of the field of view. As another example, the processor 400 may be configured to generate a depth-map of the field of view or of a portion of the field of view. As a further example, the processor 400 may be configured to generate a cloud of data points representing the field of view or a portion of the field of view.
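As one hedged example of such a reconstruction step, a depth map covering the stitched field of view could be back-projected into a point cloud using a standard pinhole-camera model; the intrinsic parameters below are assumed values for illustration only and are not parameters of the disclosed device.

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project a depth map (in meters) into an (H*W, 3) array of XYZ points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Example with assumed intrinsics for a 456x640 depth map at 0.4 m.
cloud = depth_to_point_cloud(np.full((456, 640), 0.4), fx=500.0, fy=500.0, cx=320.0, cy=228.0)
print(cloud.shape)   # (291840, 3)
```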
[0063] As an exemplary configuration, which may represent the most relevant use case for an imaging device configured as described herein (e.g., for use in a mobile communication device), the processor 400 may be configured to carry out a face recognition process, e.g. a face-authentication process, using the light sensing data 404 received from the light sensing areas. Illustratively, the field of view 220 may include a face 402, e.g. the face of a user of a smartphone. Different portions of the face 402 may belong to different partial fields of view 220-1, 220-2, so that the different portions of the face 402 may be imaged onto different light sensing areas. In the exemplary configuration in FIG.4, with N=2, a first portion of the face 402-1 may correspond to a first partial field of view 220-1, and a second portion of the face 402-2 may correspond to a second partial field of view 220-2, but it is understood that the aspects described in relation to FIG.4 may be extended in a corresponding manner to a configuration with N>2.
[0064] To carry out the face recognition process, the processor 400 may be configured to reconstruct the face 402 from the portions of the face 402-1, 402-2 imaged by the segmented light sensing areas, and may be configured to compare the reconstructed face with predefined data (e.g., stored in a memory of the imaging device, or in a memory of the smartphone), e.g. with a predefined facial profile of known (authorized) users. In this regard it is worth noting that the processor 400 may be part of the imaging device, or may be communicatively coupled with the imaging device. For example, the processor 400, in some aspects, may be a processor of the mobile communication device receiving the light sensing data 404.
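A purely illustrative toy sketch of such a comparison step is shown below; the RMSE metric, threshold, and array shapes are assumptions and do not represent the actual face-authentication algorithm used in any embodiment.

```python
import numpy as np

def authenticate(reconstructed: np.ndarray, enrolled: np.ndarray, max_rmse: float = 0.01) -> bool:
    """Toy comparison of a reconstructed face depth map with an enrolled template.

    Both maps are assumed to be aligned and expressed in meters; authentication
    succeeds when the root-mean-square difference stays below a threshold.
    """
    rmse = float(np.sqrt(np.mean((reconstructed - enrolled) ** 2)))
    return rmse <= max_rmse

enrolled = np.random.default_rng(0).normal(0.40, 0.02, size=(128, 128))
print(authenticate(enrolled + 0.002, enrolled))   # True: small deviation from the template
print(authenticate(enrolled + 0.050, enrolled))   # False: large deviation from the template
```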
[0065] In a preferred configuration, which may provide the most efficient approach for face-recognition applications, the processor 400 may be configured to determine a distortion of a predefined dot pattern and generate a three-dimensional map of a target (e.g., of the face 402) in the field of view of the imaging device according to the determined distortion. Illustratively, in various aspects the imaging device may be configured to emit/detect structured light, and derive three-dimensional information based on the distortion of the emitted pattern as received at the imaging device. The general principles of this technique are known in the art, so a detailed discussion is not included herein.

[0066] FIG.5 shows a projector 500 for use in an imaging device (e.g., in the imaging device 200) in a schematic representation, according to various aspects. In various aspects, an imaging device (e.g., the imaging device 200) may include, in addition to the receiver side described in relation to FIG.2A to FIG.4, an emitter side for emitting light towards the field of view. At the emitter side, the imaging device may include one or more projectors 500 (see also FIG.6E), illustratively one or more light emitting circuits. A projector may also be referred to herein as an illuminator module.
[0067] In general, the projector 500 may be configured to emit light 502 (e.g., infrared light, in particular near-infrared light) towards the field of view of the imaging device. The projector 500 may have a field of illumination, e.g. defined by emitter optics of the projector (not shown), and the field of illumination of the projector 500 may cover the field of view of the imaging device. According to various aspects, the projector 500 (or, in other aspects, the plurality of projectors) may be configured to emit light 502 to fully illuminate the field of view of the imaging device (illustratively, with a single emission, without a sequential scanning of the field of view).
[0068] The projector 500 may include a light source 504 configured to emit light, and a controller 506 configured to control the light emission by the light source 504. The light source 504 may be configured to emit light having a predefined wavelength, for example in the infrared and/or near-infrared range, such as in the range from about 700 nm to about 5000 nm, for example in the range from about 860 nm to about 1600 nm. In some aspects, the light source 504 may include an optoelectronic light source (e.g., a laser source). As an example, the light source 504 may include one or more light emitting diodes. As another example, the light source may include one or more laser diodes, e.g. one or more edge emitting laser diodes or one or more vertical cavity surface emitting laser (VCSEL) diodes. In various aspects, the light source 504 may include an array of emitter pixels, e.g. a one-dimensional or two-dimensional array of emitter pixels (e.g., an array of light emitting diodes or laser diodes).

[0069] The controller 506 may be configured to provide a control signal to the light source 504 to prompt (e.g., to trigger, or to start) an emission of light by the light source 504. According to a preferred configuration, the controller 506 may be configured to control the light source to emit light towards the field of view of the imaging device according to a predefined dot pattern. The dot pattern may be achieved by suitably controlling the light source 504 and/or by suitably configuring emitter optics of the projector 500, as generally known in the art for structured light. Illustratively, the projector 500 may be configured to project a predefined pattern of dots on the field of view of the imaging device (e.g., on an object, or on the face of a user), which may be used for object- and/or face-recognition.
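The principle of recovering depth from the distortion (lateral shift) of a projected dot can be illustrated with the standard triangulation relation z = f·B/d (focal length f, baseline B between projector and sensor, observed dot shift d); the focal length, baseline, and shift values below are assumptions chosen only for illustration.

```python
def depth_from_dot_shift(f_px: float, baseline_m: float, shift_px: float) -> float:
    """Triangulated depth of a projected dot from its observed lateral shift."""
    return f_px * baseline_m / shift_px

# Assumed example: 500 px focal length, 8 mm baseline, dots shifted by 10 px.
print(round(depth_from_dot_shift(500.0, 0.008, 10.0), 3))   # 0.4 m
```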
[0070] FIG.6A, FIG.6B, FIG.6C, FIG.6D, and FIG.6E show possible relative arrangements of one or more light sensing areas 602 with respect to one or more projectors 604 for integration in an imaging device (e.g., in the imaging device 200). Illustratively, FIG.6A to FIG.6E illustrate possible dispositions of the light sensing areas 206 discussed in relation to FIG.2A to FIG.2E with respect to one or more projectors configured as the projector 500 discussed in relation to FIG.5.
[0071] A preferred configuration of an imaging device 600a is illustrated in FIG.6A in a top view. In this configuration the projector 604 may be disposed between two light sensing areas 602. A distance between the projector 604 and the first light sensing area 602 may be equal to the distance between the projector 604 and the second light sensing area 602. Illustratively, the imaging device may have a symmetric arrangement of the light sensing areas 602 at two sides of the projector 604 along the first direction 210. The symmetric arrangement provides an equal baseline distance for both light sensing areas 602 (C1, C2) when receiving the light emitted by the projector (and reflected back towards the imaging device). This ensures equal parallax values for both detections, thus reducing or preventing distortions in the measurements.
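A small numerical sketch (assumed focal length, baselines, and depth only) illustrating why equal baselines yield equal parallax values for the two light sensing areas:

```python
def parallax_px(f_px: float, baseline_m: float, depth_m: float) -> float:
    """Parallax (in pixels) seen by a sensor located baseline_m from the projector."""
    return f_px * baseline_m / depth_m

# Asymmetric layout (assumed 4 mm / 8 mm baselines): unequal parallax values.
print(parallax_px(500.0, 0.004, 0.4), parallax_px(500.0, 0.008, 0.4))   # 5.0 10.0
# Symmetric layout (assumed 6 mm on both sides of the projector): equal parallax values.
print(parallax_px(500.0, 0.006, 0.4), parallax_px(500.0, 0.006, 0.4))   # 7.5 7.5
```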
[0072] The configuration of the imaging device 600a may thus be understood as having the light source (illustratively, the patterned illuminator) (L) centered between two separate image sensors 610 (CIS1, CIS2) including the light sensing areas 602, for example on a common board. This modular system setup also provides a relatively reduced x-size of the imaging device. The symmetric arrangement is also illustrated with respect to the imaging channels 606 and the projecting channel 608 (illustratively, an optical channel defined by the emitter optics of the projector). Illustratively, the receiver channels 606 (R1, R2) may be disposed at two sides of the projecting channel 608 (P).
[0073] FIG.6B shows another possible configuration of an imaging device 600b. In this configuration, the projector 604 is disposed spaced apart from the light sensing areas 602 (illustratively, from the image sensors 610). Illustratively, in this configuration the projector 604 may be disposed at a distance from the light sensing areas 602 along the first direction 210. Stated in a different fashion, the projector 604 may be disposed outside the array formed by the light sensing areas 602, e.g. at the left-hand side of the array or at the right-hand side of the array. This configuration may provide a closer spacing between the light sensing areas 602 (illustratively, a small gap, g), which may be beneficial for the stitching of the partial fields of view. Separating the projector 604 from the image sensors 610 also simplifies the heat management of the imaging device 600b and allows an easier system integration.
[0074] In the configurations shown in FIG.6A and FIG.6B the image sensors 610 are formed on separate substrates, which may allow a more flexible disposition in the final configuration of the imaging device 600a, 600b. In other aspects, as shown in FIG.6C and FIG.6D, the image sensors 610 of the imaging device 600c, 600d may be integrated on a common substrate, e.g. on a common opto-electronic chip (CIS), e.g. a CMOS chip, which may provide an increased stability of the arrangement.
[0075] The imaging device 600c may be configured in a similar manner as the imaging device 600a, with a projector 604 disposed between two light sensing areas 602, but in the configuration of the imaging device 600c, the projector and the light sensing areas may be integrated on a common substrate 612. Illustratively, in the imaging device 600c, the light source (e.g., the patterned illuminator) (L) may be integrated on a common substrate (e.g., a common opto-electronic chip) (CIS) together with the active areas 602 (C1, C2) in a hybrid manner. This configuration may provide the advantages discussed in relation to the imaging device 600a, and may further partially benefit from wafer-level packaging.
[0076] In a corresponding manner, the imaging device 600d may be configured in a similar manner as the imaging device 600b, with a projector 604 disposed spaced apart from the light sensing areas 602, but in the configuration of the imaging device 600d, the light sensing areas 602 may be integrated on a common substrate 612. Illustratively, in the imaging device 600d, the separate active areas 602 of the CIS (C1, C2) may be integrated on a common substrate (CIS) in a linear array along the x-dimension, and the light source (L) may be placed on one side (either left or right in x-dimension) of the image sensor(s).
[0077] The aspects in FIG.6A to FIG.6D have been described for an exemplary scenario with one projector 604, but they may be correspondingly extended to a configuration in which the imaging device includes a plurality of projectors, e.g. a configuration in which the projector 604 includes (illustratively, is divided into) a plurality of projectors. FIG.6E shows possible configurations of imaging devices 600e-l, 600e-2, 600e-3 including a plurality of projectors 604.
[0078] As an example, an imaging device 600e-1 may include two projectors 604-1, 604-2 both spaced apart from the light sensing areas 602-1, 602-2 (C1, C2), e.g. one projector 604-1 (L1) may be spaced apart from the light sensing areas 602-1, 602-2 at one side of the array formed by the light sensing areas 602-1, 602-2 and another projector 604-2 (L2) may be spaced apart from the light sensing areas 602 at the opposite side of the array.
[0079] As another example, an imaging device 600e-2 may include two projectors 604-1, 604-2 each disposed between two respective light sensing areas 602. A first projector 604-1 (L1) may be disposed between a first light sensing area 602-1 (C1) and a second light sensing area 602-2 (C2), and a second projector 604-2 (L2) may be disposed between the second light sensing area 602-2 (C2) and a third light sensing area 602-3 (C3).
[0080] The configuration of the imaging device 600e-2 may be extended for an increasing number of light sensing areas 602 and projectors 604, as shown for the imaging device 600e-3, which includes three projectors 604-1, 604-2, 604-3 (L1, L2, L3) distributed across an array with four light sensing areas 602-1, 602-2, 602-3, 602-4 (C1, C2, C3, C4).
[0081] Illustratively, FIG.6E shows different options for placing the light source (e.g., the patterned illuminator) (L) with respect to the different active areas (Ci) for different numbers of segments of the field of view (N). This includes the possibility of also segmenting the field of illumination of the light source into respective parts (thus providing substantially equal baseline distances). This may also contribute to the reduction of the module y-size by allowing a reduction of the y-size of the projector(s).
[0082] In various aspects, an imaging device may thus include a number N (greater than or equal to 2) of light sensing areas and a number N-l of projectors, wherein each projector is disposed between two of the light sensing areas. In other aspects, an imaging device may include a number N (greater than or equal to 2) of light sensing areas, and one or two projectors disposed outside the array of light sensing areas. In other aspects, a mixed configuration may be provided, in which an imaging device may include a number N (greater than or equal to 2) of light sensing areas, and at least one projector disposed outside the array of light sensing areas, and at least one further projector disposed between two of the light sensing areas.
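For illustration, the interleaved arrangement with N light sensing areas and N−1 projectors can be enumerated as follows; the labels C1…CN and L1…LN−1 follow the figure annotations, and the helper function is an assumption introduced only for this sketch.

```python
def interleaved_layout(n_sensors: int) -> list:
    """Layout with a projector between each pair of adjacent sensing areas
    (N sensing areas, N-1 projectors), as in imaging devices 600e-2 and 600e-3."""
    layout = []
    for i in range(1, n_sensors + 1):
        layout.append(f"C{i}")          # light sensing area
        if i < n_sensors:
            layout.append(f"L{i}")      # projector between adjacent sensing areas
    return layout

print(interleaved_layout(4))   # ['C1', 'L1', 'C2', 'L2', 'C3', 'L3', 'C4']
```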
[0083] FIG.7A, FIG.7B, and FIG.7C each shows an optical system 700a, 700b, 700c for use in an imaging device in a schematic representation, according to various aspects. Illustratively, the optical systems 700a, 700b, 700c may be exemplary configurations of the optical system 204 of the imaging device 200. The optical systems 700a, 700b, 700c are illustrated in a side view 702a, 702b, 702c, and in a top view 704a, 704b, 704c.

[0084] In general, there may be various options to achieve the segmentation of the field of view of the imaging device, e.g. various options for angular steering and for providing selection elements for selecting partial fields of view. The optical systems 700a, 700b, 700c are based, respectively, on off-axis/shifted refractive lenses, off-axis meta-surface lenses, and structured gratings (e.g., diffraction gratings or meta-surface gratings). These configurations have been found to provide an efficient implementation of the segmentation strategy, and may be readily integrated in the fabrication flow of the imaging device. Furthermore, these strategies may be free of mechanically moving parts (e.g., free of oscillating microelectromechanical systems, MEMS, mirrors). However, in principle, other configurations of the optical system may also be provided, for example using planar fold-mirrors configured to define a different tilt angle for each individual channel, or using prisms (e.g., folded or upright). It is also understood that the various possible configurations for the optical system may be combined with one another.
[0085] The optical components described in the following in relation to the optical systems 700a, 700b, 700c may be manufactured with techniques known in the art, for example by using high-precision optical technologies for mass production (MP) such as injection-molded optics (IMO), wafer-level optics (WLO), glass molded optics (GMO), grinding and polishing, nanoimprint lithography (NIL) or deep ultraviolet (DUV) lithography for diffractive/meta-surface optics.
[0086] In various aspects, the optical elements of the imaging channels (illustratively, of the receiver channels) and/or the optical elements of the projector may be formed on a common carrier substrate (e.g., by WLO, NIL- and/or DUV-lithographic technologies). For example, a combination of meta-surface optics and WLO may be provided on diced pieces of wafers (for lateral, channel-to-channel alignment and thermal stability).
[0087] According to various aspects, as shown in FIG.7A, the optical system 700a may include for at least one imaging channel (e.g., for each imaging channel) a lens element (e.g., a convex lens). In the exemplary configuration in FIG.7A, the optical system 700a may include a first lens element 706-1 corresponding to a first light sensing area 708-1 (and accordingly to a first imaging channel), and a second lens element 706-2 corresponding to a second light sensing area 708-2 (and accordingly to a second imaging channel). Each lens element 706-1, 706-2 may be configured to receive (e.g., collect) light from the (total) field of view of the imaging device and direct the received (e.g., collected) light towards the respective light sensing area 708-1, 708-2. In a preferred configuration, a lens element 706-1, 706-2 may be designed for infrared light (e.g., for near-infrared light). The optical system 700a may further include, for at least one (e.g., each) imaging channel, an aperture stop disposed at a decentered position with respect to a symmetry center of a surface profile of the respective lens element. For example, in the configuration in FIG.7A, the optical system 700a may include a first aperture stop 710-1 disposed off-center with respect to the first lens element 706-1, and a second aperture stop 710-2 disposed off-center with respect to the second lens element 706-2.
[0088] The aperture stop 710-1, 710-2 may be disposed at a decentered position with respect to a geometric center of the (total) field of view of the imaging device, so that the aperture stop 710-1, 710-2 may define for the respective imaging channel an optical axis tilted with respect to the third field of view direction 256 (illustratively, the direction orthogonal to a plane defined by the first field of view direction 252 and the second field of view direction 254, e.g. a plane defined by the first direction 210 and the second direction 212). In general, as shown in FIG.7A, the aperture stop 710-1, 710-2 may be disposed at a decentered position with respect to a geometric center of the respective light sensing area 708-1, 708-2. The off-center position of the aperture stop 710-1, 710-2 may provide directing (e.g., focusing) on a light sensing area 708-1, 708-2 the rays (e.g., infrared rays) coming from a respective “tilted position” in the field of view, illustratively a respective partial field of view at a certain vertical coordinate.
[0089] In this configuration, the segmentation of the field of view may thus be achieved using an in-plane deflection of the rays which would otherwise converge to the center of the respective active area 708-1, 708-2 of the image sensor(s). The decentration of the aperture stop 710-1, 710-2 (and with it the symmetry center of the lens surface(s) profile) with respect to the geometric center of the image (in the x-y plane) ensures that the direction from which each light sensing area 708-1, 708-2 receives the corresponding light is tilted. The relative decentration may be implemented in such a way that the N imaging channels have significantly tilted (and hence non-parallel) optical axes, which are intended to result in the vertical segmentation of the full field of view, which enables the reduction of the image y-size according to equation 4 above (e.g., accounting for an overlap of the fields of view of the individual channels).
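Under a simplified thin-lens, distortion-free assumption (a generic approximation, not a statement about the actual lens design disclosed here), the tilt of a channel's optical axis obtained from a given decentration may be estimated as arctan(decenter / focal length):

```python
import math

def channel_tilt_deg(decenter_mm: float, focal_length_mm: float) -> float:
    """Approximate tilt of a channel's optical axis for a given decentration.

    Thin-lens, distortion-free approximation: a lateral offset of the aperture
    stop / image center by decenter_mm corresponds to a field tilt of
    arctan(decenter_mm / focal_length_mm).
    """
    return math.degrees(math.atan2(decenter_mm, focal_length_mm))

# Assumed example: 2 mm focal length and 0.6 mm decentration give ~16.7 degrees;
# two channels tilted by about +/-16.7 degrees could each point at the center of
# one half of a roughly 67 degree vertical field of view.
print(round(channel_tilt_deg(0.6, 2.0), 1))   # 16.7
```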
[0090] In a preferred configuration, as shown in FIG.7B, the lens element for at least one (e.g., for each) imaging channel may be or include a meta-surface optical element 712-1, 712-2, e.g. a meta-surface lens. Illustratively, the meta-surface optical element 712-1, 712-2 may include an optical meta-surface 714-1, 714-2, e.g. a surface-type metamaterial. The meta-surface optical element 712-1, 712-2 (e.g., the optical meta-surface 714-1, 714-2) may be patterned to direct the incoming light (e.g., infrared light) onto the respective light sensing area 708-1, 708-2. The pattern of the optical meta-surface 714-1, 714-2 may be a sub-wavelength pattern, e.g. relative to the wavelength of the light of interest, e.g. infrared light, in particular near-infrared light. For example, the meta-surface optical element 712-1, 712-2 may include a nano-structure configured (e.g., patterned, or structured) to direct the incoming light onto the respective light sensing area 708-1, 708-2.
[0091] In general, the meta-surface optical element 712-1, 712-2 may include any suitable material for forming the structures of the meta-surface optical element 712-1, 712-2. In an exemplary configuration, which is particularly suitable for meta-surface optics in the infrared or near-infrared wavelength domain, the meta-surface optical element 712-1, 712-2 may include or may consist of amorphous silicon (aSi) on silicon dioxide (SiO2). Other examples of materials suitable for the nano-structures may include titanium dioxide (TiO2) or gallium nitride (GaN).

[0092] In other aspects, as shown in FIG.7C, the segmentation of the field of view may be implemented via a combination of optical elements. The optical system 700c may include, for at least one (e.g., for each) imaging channel, a lens element 716-1, 716-2 and a structured optical element 718-1, 718-2.
[0093] The structured optical element 718-1, 718-2 may be configured to receive (e.g., collect) light from the (total) field of view of the imaging device and may be configured to cause a deflection of the received light along the second field of view direction 254 (e.g., along the second direction 212). In a preferred configuration, the structured optical element 718-1, 718-2 may be designed for infrared light (e.g., near-infrared light). The lens element 716-1, 716-2 may receive the deflected light from the optical element and direct the deflected light towards a respective light sensing area 708-1, 708-2. Illustratively, the lens element 716-1, 716-2 may be configured to cause the deflected light to converge towards the respective light sensing area 708-1, 708-2.
[0094] In this configuration, the segmentation of the field of view may be achieved using an in-plane deflection by a micro/nano-structured optical element 718-1, 718-2 deflecting the rays along one (e.g., vertical) dimension which leads them to converge to a shifted position of the respective active area 708-1, 708-2 of the image sensor(s). As an exemplary configuration, the deflecting optical element 718-1, 718-2 may be a structured grating (e.g., a meta-surface grating or diffraction grating).
[0095] The lens element 716-1, 716-2 (e.g., a convex lens, or a meta-surface lens) may be configured to cause the deflected light to converge towards a position on the respective light sensing area 708-1, 708-2 decentered along the first direction 210 with respect to a geometric center of the light sensing area 708-1, 708-2. Illustratively, in this configuration the optical system 700c may include an aperture stop 710-1, 710-2 centered with respect to the structured optical element 718-1, 718-2 (and centered with respect to the lens element 716-1, 716-2), and the deflection may be achieved via the structuring of the deflecting optical element 718-1, 718-2. In this scenario, the structured optical element 718-1, 718-2 and the lens element 716-1, 716-2 (as well as the aperture stop 710-1, 710-2) may be aligned with respect to a common axis along the third field of view direction 256, e.g. respective geometric centers of the elements may be aligned along the third field of view direction 256.
[0096] The use of a separate deflection element 718-1, 718-2 enables the beam deflection while the aperture stops (ASi) as well as the stacked optical elements Li' and Li remain aligned on a common axis along the z-dimension, thus making the module even smaller in the critical dimension CD.
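As a hedged example for the grating-based variant, the grating period needed for a given deflection angle can be estimated from the standard grating equation sin θ = m·λ/Λ (normal incidence and first diffraction order assumed; the wavelength and deflection angle below are illustrative values only, not design parameters of the disclosed device):

```python
import math

def grating_period_um(wavelength_nm: float, deflection_deg: float, order: int = 1) -> float:
    """Grating period that deflects normally incident light by deflection_deg.

    Uses the grating equation sin(theta) = m * lambda / period for the
    given diffraction order m.
    """
    period_nm = order * wavelength_nm / math.sin(math.radians(deflection_deg))
    return period_nm / 1000.0

# Assumed example: 940 nm illumination deflected by 17 degrees in first order.
print(round(grating_period_um(940.0, 17.0), 2))   # ~3.21 um grating period
```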
[0097] In the following, various examples are provided that refer to aspects of the present disclosure (e.g., to the imaging device 200, 600a, 600b, 600c, 600d, 600e, to the processor 400, to the projector 500, to the optical system 700a, 700b, 700c).
[0098] Example 1 is an imaging device including: a camera, wherein the camera includes a plurality of light sensing areas configured to be sensitive for infrared light, wherein the light sensing areas are arranged along a first direction; and an optical system configured to define a plurality of imaging channels, wherein each imaging channel corresponds to a partial field of view covering a respective portion of a total field of view of the imaging device, wherein the portions of the total field of view covered by the partial fields of view are arranged along a second direction at an angle with the first direction, and wherein each imaging channel is configured to direct infrared light from the corresponding partial field of view towards a respective light sensing area of the plurality of light sensing areas.
[0099] In Example 2, the imaging device according to example 1 may optionally further include that the angle is greater than 0° and less than 180°, for example greater than 0° and less than 90°.
[00100] In Example 3, the imaging device according to example 1 or 2 may optionally further include that the first direction and the second direction are orthogonal to one another.

[00101] In Example 4, the imaging device according to any one of examples 1 to 3 may optionally further include that the first direction corresponds to a horizontal direction of the total field of view of the imaging device, and that the second direction corresponds to a vertical direction of the total field of view of the imaging device.
[00102] In Example 5, the imaging device according to any one of examples 1 to 4 may optionally further include that the light sensing areas are arranged to face the total field of view of the imaging device.
[00103] In Example 6, the imaging device according to any one of examples 1 to 5 may optionally further include that at least one sensing area of the plurality of light sensing areas has a first lateral dimension along the first direction and a second lateral dimension along the second direction, and that the first lateral dimension is greater than the second lateral dimension.

[00104] In Example 7, the imaging device according to example 6 may optionally further include that the second lateral dimension is in the range from 2 mm to 5 mm, for example the second lateral dimension may be equal to or less than 3 mm.
[00105] In Example 8, the imaging device according to any one of examples 1 to 7 may optionally further include that the light sensing areas are spaced apart from one another along the first direction.
[00106] In Example 9, the imaging device according to any one of examples 1 to 8 may optionally further include that the plurality of imaging channels include at least a first imaging channel corresponding to a first light sensing area and a second imaging channel corresponding to a second light sensing area, and that the first imaging channel and the second imaging channel are configured such that a first partial field of view corresponding to the first imaging channel and a second partial field of view corresponding to the second imaging channel have a partial overlap at a border between the first partial field of view and the second partial field of view.

[00107] In Example 10, the imaging device according to any one of examples 1 to 9 may optionally further include that at least one light sensing area is or includes a Complementary-Metal-Oxide-Semiconductor (CMOS) sensor.
[00108] In Example 11, the imaging device according to any one of examples 1 to 10 may optionally further include a processor, wherein the processor is configured to: receive light sensing data from the camera, wherein the light sensing data include infrared light from each of the partial fields of view sensed at the corresponding light sensing area; and combine the light sensing data from the plurality of partial fields of view to reconstruct three-dimensional information of the total field of view of the imaging device.
[00109] In Example 12, the imaging device according to example 11 may optionally further include that the processor is further configured to carry out a face-authentication process using the light sensing data received from the camera.
[00110] In Example 13, the imaging device according to example 12 may optionally further include that to carry out the face-authentication process, the processor is configured to determine a distortion of a predefined dot pattern and generate a three-dimensional map of a target in the total field of view of the imaging device according to the determined distortion.
[00111] In Example 14, the imaging device according to any one of examples 1 to 13 may optionally further include that at least one light sensing area is configured to be sensitive for light having a wavelength in the near-infrared range.
[00112] In Example 15, the imaging device according to any one of examples 1 to 14 may optionally further include that the light sensing areas are integrated on a common substrate.
[00113] In Example 16, the imaging device according to example 15 may optionally further include that the common substrate is or includes a CMOS chip.
[00114] In Example 17, the imaging device according to any one of examples 1 to 16 may optionally further include a projector configured to emit infrared light towards the total field of view of the imaging device.

[00115] In Example 18, the imaging device according to example 17 may optionally further include that the projector is configured to emit infrared light to fully illuminate the total field of view of the imaging device.
[00116] In Example 19, the imaging device according to example 17 or 18 may optionally further include that the projector includes a light source configured to emit infrared light, and a controller configured to control the light source to emit infrared light towards the total field of view of the imaging device according to a predefined dot pattern.
[00117] In Example 20, the imaging device according to example 19 may optionally further include that the light source is or includes a vertical cavity surface emission laser (VCSEL).
[00118] In Example 21, the imaging device according to any one of examples 17 to 20 may optionally further include that the projector is disposed between two light sensing areas of the plurality of light sensing areas.
[00119] In Example 22, the imaging device according to any one of examples 17 to 20 may optionally further include that the projector is disposed at a distance from the light sensing areas along the first direction.
[00120] In Example 23, the imaging device according to any one of examples 17 to 22 may optionally further include that the projector and the light sensing areas are integrated on a common substrate.
[00121] In Example 24, the imaging device according to any one of examples 17 to 23 may optionally further include that the projector includes a plurality of projectors, each projector being configured to emit infrared light towards a partial field of illumination covering a respective portion of the total field of view of the imaging device.
[00122] In Example 25, the imaging device according to any one of examples 1 to 24 may optionally further include that the optical system includes, for at least one imaging channel: a lens element configured to receive infrared light from the total field of view of the imaging device and direct the received infrared light towards the respective light sensing area; and an aperture stop disposed at a decentered position with respect to a symmetry center of a surface profile of the lens element.
[00123] In Example 26, the imaging device according to example 25 may optionally further include that the aperture stop is disposed at a decentered position with respect to a geometric center of the total field of view of the imaging device, in such a way that the aperture stop defines for the imaging channel an optical axis tilted with respect to a third direction, and that the third direction is perpendicular to a plane defined by the first direction and the second direction.
[00124] In Example 27, the imaging device according to example 25 or 26 may optionally further include that the aperture stop is disposed at a decentered position with respect to a geometric center of the respective light sensing area.
[00125] In Example 28, the imaging device according to any one of examples 25 to 27 may optionally further include that the lens element is or includes a meta-surface optical element.
[00126] In Example 29, the imaging device according to any one of examples 1 to 24 may optionally further include that the optical system includes, for at least one imaging channel: a lens element and a structured optical element, wherein the structured optical element is configured to receive infrared light from the total field of view of the imaging device and cause a deflection of the received infrared light along the second field of view direction, and wherein the lens element is configured to receive the deflected infrared light from the optical element and cause the deflected infrared light to converge towards the respective light sensing area.
[00127] In Example 30, the imaging device according to example 29 may optionally further include that the lens element is configured to cause the deflected infrared light to converge towards a position on the respective light sensing area decentered along the first direction with respect to a geometric center of the light sensing area.
[00128] In Example 31, the imaging device according to example 29 or 30 may optionally further include that the lens element and the structured optical element are aligned with respect to a common axis along a third direction, and that the third direction is perpendicular to a plane defined by the first direction and the second direction.
[00129] In Example 32, the imaging device according to any one of examples 29 to 31 may optionally further include that the optical element is or includes a structured grating, e.g. a structured meta-surface grating or a structured diffraction grating.
[00130] In Example 33, the imaging device according to any one of examples 29 to 32 may optionally further include that the optical system is mechanically coupled to a substrate on which the light sensing areas are formed.
[00131] Example 34 is an imaging device including: a camera, wherein the camera includes a plurality of light sensing areas, wherein the light sensing areas are arranged along a first direction; and one or more meta-surface optical elements, wherein each meta-surface optical element is configured to direct light from a corresponding partial field of view covering a respective portion of a total field of view of the imaging device towards a respective light sensing area of the plurality of light sensing areas, wherein the portions of the total field of view covered by the partial fields of view are arranged along a second direction at an angle with the first direction.
[00132] In Example 35, the imaging device of example 34 may include one or more features of any one of examples 1 to 33.
[00133] Example 36 is a mobile communication device including a 3D-sensor, the 3D-sensor including: a camera, wherein the camera includes a plurality of light sensing areas configured to be sensitive for infrared light, wherein the light sensing areas are arranged along a first direction; and an optical system configured to define a plurality of imaging channels, wherein each imaging channel corresponds to a partial field of view covering a respective portion of a total field of view of the imaging device, wherein the portions of the total field of view covered by the partial fields of view are arranged along a second direction at an angle with the first direction, and wherein each imaging channel is configured to direct infrared light from the corresponding partial field of view towards a respective light sensing area of the plurality of light sensing areas.
[00134] In Example 37, the mobile communication device according to example 36 may optionally further include that the 3D-sensor is arranged such that the light sensing areas of the camera face a user-side of the mobile communication device.
[00135] The term “processor” as used herein may be understood as any kind of technological entity that allows handling of data. The data may be handled according to one or more specific functions that the processor may execute. Further, a processor as used herein may be understood as any kind of circuit, e.g., any kind of analog or digital circuit. A processor may thus be or include an analog circuit, digital circuit, mixed-signal circuit, logic circuit (e.g., a hard-wired logic circuit or a programmable logic circuit), microprocessor, Central Processing Unit (CPU), Graphics Processing Unit (GPU), Digital Signal Processor (DSP), Field Programmable Gate Array (FPGA), integrated circuit, Application Specific Integrated Circuit (ASIC), etc., or any combination thereof. It is understood that any two (or more) of the processors detailed herein may be realized as a single entity with equivalent functionality or the like, and conversely that any single processor detailed herein may be realized as two (or more) separate entities with equivalent functionality or the like.
[00136] The word “exemplary” is used herein to mean “serving as an example, instance, or illustration”. Any embodiment or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or designs.
[00137] The words "plural" and "multiple" in the description and the claims, if any, are used to expressly refer to a quantity greater than one. Accordingly, any phrases explicitly invoking the aforementioned words (e.g. "a plurality of [objects]", "multiple [objects]") referring to a quantity of objects are intended to expressly refer to more than one of the said objects. For instance, the phrase "a plurality" may be understood to include a numerical quantity greater than or equal to two (e.g., two, three, four, five, [...], etc.). The terms "group", "set", "collection", "series", "sequence", "grouping", "selection", etc., and the like in the description and in the claims, if any, are used to refer to a quantity equal to or greater than one, i.e. one or more. Accordingly, the phrases "a group of [objects]", "a set of [objects]", "a collection of [objects]", "a series of [objects]", "a sequence of [objects]", "a grouping of [objects]", "a selection of [objects]", "[object] group", "[object] set", "[object] collection", "[object] series", "[object] sequence", "[object] grouping", "[object] selection", etc., used herein in relation to a quantity of objects is intended to refer to a quantity of one or more of said objects. It is appreciated that unless directly referred to with an explicitly stated plural quantity (e.g. "two [objects]", "three of the [objects]", "ten or more [objects]", "at least four [objects]", etc.) or express use of the words "plural", "multiple", or similar phrases, references to quantities of objects are intended to refer to one or more of said objects.
[00138] Throughout the drawings, it should be noted that like reference numbers are used to depict the same or similar elements, features, and structures, unless otherwise noted.
[00139] The phrases "at least one" and "one or more" may be understood to include a numerical quantity greater than or equal to one (e.g., one, two, three, four, [...], etc.). The phrase "at least one of" with regard to a group of elements may be used herein to mean at least one element from the group consisting of the elements. For example, the phrase "at least one of" with regard to a group of elements may be used herein to mean a selection of: one of the listed elements, a plurality of one of the listed elements, a plurality of individual listed elements, or a plurality of a multiple of individual listed elements.
[00140] While the above descriptions and connected figures may depict electronic device components as separate elements, skilled persons will appreciate the various possibilities to combine or integrate discrete elements into a single element. Such may include combining two or more circuits to form a single circuit, mounting two or more circuits onto a common chip or chassis to form an integrated element, executing discrete software components on a common processor core, etc. Conversely, skilled persons will recognize the possibility to separate a single element into two or more discrete elements, such as splitting a single circuit into two or more separate circuits, separating a chip or chassis into discrete elements originally provided thereon, separating a software component into two or more sections and executing each on a separate processor core, etc.
[00141] It is appreciated that implementations of methods detailed herein are demonstrative in nature, and are thus understood as capable of being implemented in a corresponding device. Likewise, it is appreciated that implementations of devices detailed herein are understood as capable of being implemented as a corresponding method. It is thus understood that a device corresponding to a method detailed herein may include one or more components configured to perform each aspect of the related method.
[00142] All acronyms defined in the above description additionally hold in all claims included herein.
[00143] While the invention has been particularly shown and described with reference to specific aspects, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. The scope of the invention is thus indicated by the appended claims and all changes, which come within the meaning and range of equivalency of the claims, are therefore intended to be embraced.

Claims

What is claimed is:
1. An imaging device comprising: a camera, wherein the camera comprises a plurality of light sensing areas configured to be sensitive for infrared light, wherein the light sensing areas are arranged along a first direction; and an optical system configured to define a plurality of imaging channels, wherein each imaging channel corresponds to a partial field of view covering a respective portion of a total field of view of the imaging device, wherein the portions of the total field of view covered by the partial fields of view are arranged along a second direction at an angle with the first direction, and wherein each imaging channel is configured to direct infrared light from the corresponding partial field of view towards a respective light sensing area of the plurality of light sensing areas.
2. The imaging device according to claim 1, wherein the first direction and the second direction are orthogonal to one another.
3. The imaging device according to claim 1 or 2, wherein the first direction corresponds to a horizontal direction of the total field of view of the imaging device, and wherein the second direction corresponds to a vertical direction of the total field of view of the imaging device.

4. The imaging device according to any one of claims 1 to 3, wherein the plurality of imaging channels comprise at least a first imaging channel corresponding to a first light sensing area and a second imaging channel corresponding to a second light sensing area, and wherein the first imaging channel and the second imaging channel are configured such that a first partial field of view corresponding to the first imaging channel and a second partial field of view corresponding to the second imaging channel have a partial overlap at a border between the first partial field of view and the second partial field of view.

5. The imaging device according to any one of claims 1 to 4, further comprising: a processor, wherein the processor is configured to: receive light sensing data from the camera, wherein the light sensing data comprise infrared light from each of the partial fields of view sensed at the corresponding light sensing area; and combine the light sensing data from the plurality of partial fields of view to reconstruct three-dimensional information of the total field of view of the imaging device.

6. The imaging device according to claim 5, wherein the processor is further configured to carry out a face authentication process using the light sensing data received from the camera.

7. The imaging device according to claim 6, wherein to carry out the face authentication process, the processor is configured to determine a distortion of a predefined dot pattern and generate a three-dimensional map of a target in the total field of view of the imaging device according to the determined distortion.

8. The imaging device according to any one of claims 1 to 7, wherein at least one light sensing area of the plurality of light sensing areas is configured to be sensitive for light having a wavelength in the near-infrared range.

9. The imaging device according to any one of claims 1 to 8, further comprising: a projector configured to emit infrared light towards the total field of view of the imaging device.

10. The imaging device according to claim 9, wherein the projector comprises a light source configured to emit infrared light, and a controller configured to control the light source to emit infrared light towards the total field of view of the imaging device according to a predefined dot pattern.

11. The imaging device according to claim 9 or 10, wherein the projector is disposed between two light sensing areas of the plurality of light sensing areas.

12. The imaging device according to claim 9 or 10, wherein the projector is disposed at a distance from the light sensing areas along the first direction.

13. The imaging device according to any one of claims 1 to 12, wherein the optical system comprises, for at least one imaging channel: a lens element configured to receive infrared light from the total field of view of the imaging device and direct the received infrared light towards the respective light sensing area; and an aperture stop disposed at a decentered position with respect to a symmetry center of a surface profile of the lens element.

14. The imaging device according to claim 13, wherein the aperture stop is disposed at a decentered position with respect to a geometric center of the total field of view of the imaging device, in such a way that the aperture stop defines for the imaging channel an optical axis tilted with respect to a third direction, wherein the third direction is perpendicular to a plane defined by the first direction and the second direction.

15. The imaging device according to claim 14, wherein the lens element is or comprises a meta-surface optical element.

16. The imaging device according to any one of claims 1 to 15, wherein the optical system comprises, for at least one imaging channel: a lens element and a structured optical element, wherein the structured optical element is configured to receive infrared light from the total field of view of the imaging device and cause a deflection of the received infrared light along the second field of view direction, and wherein the lens element is configured to receive the deflected infrared light from the optical element and cause the deflected infrared light to converge towards the respective light sensing area.

17. An imaging device comprising: a camera, wherein the camera includes a plurality of light sensing areas, wherein the light sensing areas are arranged along a first direction; and one or more meta-surface optical elements, wherein each meta-surface optical element is configured to direct light from a corresponding partial field of view covering a respective portion of a total field of view of the imaging device towards a respective light sensing area of the plurality of light sensing areas, wherein the portions of the total field of view covered by the partial fields of view are arranged along a second direction at an angle with the first direction.

18. The imaging device according to claim 17, wherein the first direction corresponds to a horizontal direction of the total field of view of the imaging device, and wherein the second direction corresponds to a vertical direction of the total field of view of the imaging device.

19. A mobile communication device comprising a 3D sensor, the 3D sensor comprising: a camera, wherein the camera comprises a plurality of light sensing areas configured to be sensitive for infrared light, wherein the light sensing areas are arranged along a first direction; and an optical system configured to define a plurality of imaging channels, wherein each imaging channel corresponds to a partial field of view covering a respective portion of a total field of view of the imaging device, wherein the portions of the total field of view covered by the partial fields of view are arranged along a second direction at an angle with the first direction, and wherein each imaging channel is configured to direct infrared light from the corresponding partial field of view towards a respective light sensing area of the plurality of light sensing areas.

20. The mobile communication device according to claim 19, wherein the 3D sensor is arranged such that the light sensing areas of the camera face a user side of the mobile communication device.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263420129P 2022-10-28 2022-10-28
US63/420,129 2022-10-28

Publications (1)

Publication Number Publication Date
WO2024088640A1 true WO2024088640A1 (en) 2024-05-02

Family

ID=88068430

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2023/075127 WO2024088640A1 (en) 2022-10-28 2023-09-13 Miniaturized 3d-sensing camera system

Country Status (1)

Country Link
WO (1) WO2024088640A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140111650A1 (en) * 2012-10-19 2014-04-24 Qualcomm Incorporated Multi-camera system using folded optics
US20220011661A1 (en) * 2019-03-25 2022-01-13 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Device comprising a multi-aperture imaging device for generating a depth map
US20220094902A1 (en) * 2019-06-06 2022-03-24 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Multi-channel imaging device and device having a multi-aperture imaging device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ANDREAS BRÜCKNER ET AL: "Multi-aperture optics for wafer-level cameras", JOURNAL OF MICRO NANOLITHOGRAPHY MEMS AND MOEMS, 21 November 2011 (2011-11-21), pages 43010, XP055128787, Retrieved from the Internet <URL:http://dx.doi.org/10.1117/1.3659144> DOI: 10.1117/1.3659144 *

Similar Documents

Publication Publication Date Title
US9967547B2 (en) Wafer level optics for folded optic passive depth sensing system
US9986223B2 (en) Folded optic passive depth sensing system
US7920339B2 (en) Method and apparatus providing singlet wafer lens system with field flattener
CN108718376B (en) Thin multi-aperture imaging system with auto-focus and method of use thereof
US9237281B2 (en) Image sensor comprising plural pixels including a microlens and plural photoelectric conversion portions and image pickup apparatus comprising same
JP5375010B2 (en) Imaging device
JP5379241B2 (en) Optical image apparatus, optical image processing apparatus, and optical image forming method
JP2020526754A (en) Distance measuring device with electron scanning emitter array and synchronous sensor array
JP5923755B2 (en) Depth estimation imaging device and imaging device
TWI606309B (en) Optical imaging apparatus, in particular for computational imaging, having further functionality
JP2009524263A (en) Image detection system and method of manufacturing the same
JP2010114758A (en) Device for image capture and method therefor
KR101974578B1 (en) Imaging optical system for 3D image acquisition apparatus, and 3D image acquisition apparatus including the imaging optical system
JP2012520557A (en) Method for manufacturing multiple micro optoelectronic devices and microoptoelectronic devices
CN111866387B (en) Depth image imaging system and method
KR20150068778A (en) Catadioptric light-field lens and image pickup apparatus including the same
JP2012065021A (en) Solid state imaging device
WO2024088640A1 (en) Miniaturized 3d-sensing camera system
KR20170015108A (en) Image sensor
CN112335049B (en) Imaging assembly, touch screen, camera module, intelligent terminal, camera and distance measurement method
JP6916627B2 (en) Imaging device and its control method
JP2004336228A (en) Lens array system
US20180023943A1 (en) Optical apparatuses and method of collecting three dimensional information of an object
WO2017212616A1 (en) Optical device and imaging device provided with same
JP2006019918A (en) Imaging device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23771831

Country of ref document: EP

Kind code of ref document: A1