WO2020227980A1 - Image sensor, light intensity sensing system and method - Google Patents

Image sensor, light intensity sensing system and method

Info

Publication number
WO2020227980A1
Authority
WO
WIPO (PCT)
Prior art keywords
sensing pixel
pixel block
target
image sensor
sensing
Prior art date
Application number
PCT/CN2019/087074
Other languages
English (en)
French (fr)
Inventor
王星泽
赖嘉炜
Original Assignee
合刃科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 合刃科技(深圳)有限公司 filed Critical 合刃科技(深圳)有限公司
Priority to CN201980005548.6A priority Critical patent/CN111345032B/zh
Priority to PCT/CN2019/087074 priority patent/WO2020227980A1/zh
Publication of WO2020227980A1 publication Critical patent/WO2020227980A1/zh

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 — Circuitry for compensating brightness variation in the scene
    • H04N 23/71 — Circuitry for evaluating the brightness variation
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/50 — Constructional details
    • H04N 23/54 — Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 — Circuitry for compensating brightness variation in the scene
    • H04N 23/73 — Circuitry for compensating brightness variation in the scene by influencing the exposure time

Definitions

  • the invention relates to the field of image processing, in particular to an image sensor, a light intensity sensing system and method.
  • the image sensor in the traditional technology can usually only effectively detect low dynamic range (LDR: Low Dynamic Range) images.
  • the limited dynamic range must be used within a suitable brightness range. Therefore, it is necessary to meter the shot scene to determine a reasonable exposure time for automatic exposure.
  • at present, most commercial imaging devices use TTL (Through The Lens) metering, in which the sensor responsible for metering is integrated behind the imaging lens to reduce the error between the imaging sensor and the metering sensor; there are two types, separate and combined.
  • the separate type includes two sensors: one sensor is responsible for imaging and the other for detecting light intensity. Although it can provide better metering results, it adds extra electronic components and optical paths, increasing cost and volume, and it is mainly used in mid-to-high-end imaging equipment such as SLR cameras.
  • the combined type means that the imaging sensor is responsible for both the imaging and the light metering functions, and it is mainly used in devices that require high integration, such as mobile phone lenses.
  • when the traditional image sensor used for imaging performs metering, the exposure conditions can only be approached step by step according to the current ratio of overexposure to underexposure, which takes a long time and has a certain lag; in continuous shooting, when the brightness of the scene changes greatly, it is not suitable for occasions where the lighting conditions switch frequently.
  • an image sensor is proposed.
  • the image sensor can be used to sense the intensity of the ambient light to adjust the exposure time.
  • An image sensor includes a photosensitive surface on which one or more sensing pixel areas are arranged, and the sensing pixel area includes at least two sets of sensing pixel blocks;
  • an optical film array is arranged on the surface of the sensing pixel blocks; the optical film arrays on the surfaces of sensing pixel blocks of the same group have the same thickness, and the optical film arrays on the surfaces of sensing pixel blocks of different groups have different thicknesses;
  • the thickness of the optical film array on the surface of a sensing pixel block corresponds to the exposure delay of the sensing pixel block.
  • the thickness distribution of the optical film array over the groups of sensing pixel blocks satisfies a logarithmic distribution whose base is the basic attenuation rate of the optical film array.
  • the groups of sensing pixel blocks in the same sensing pixel area are uniformly and sparsely arranged within the sensing pixel area; the thickness distribution of the sensing pixel blocks within the sensing pixel area is a non-monotonic or staggered arrangement.
  • the sensing pixel area is arranged at the edge and/or the center of the photosensitive surface.
  • the sensing pixel area is uniformly and sparsely arranged on the entire photosensitive surface.
  • the manner in which the optical film array is disposed on the surface of the sensing pixel area includes at least one of deposition, patterning, or etching.
  • a light intensity sensing method based on the aforementioned image sensor includes: obtaining the imaging brightness of the image sensor in the sensing pixel area under a preset exposure duration; determining a target brightness and obtaining the target sensing pixel block corresponding to the target brightness; obtaining the target exposure delay corresponding to the target sensing pixel block; and adjusting the exposure time according to the preset exposure duration and the target exposure delay.
  • the acquiring of the target exposure delay corresponding to the target sensing pixel block includes: calculating, in combination with the relative positions of the target sensing pixel blocks on the photosensitive surface, a weighted value of the exposure delays of two or more target sensing pixel blocks as the target exposure delay.
  • the method further includes: receiving a photographing instruction and obtaining the value of each pixel block on the photosensitive surface of the image sensor, the pixel blocks including sensing pixel blocks;
  • for the value of a sensing pixel block, obtaining a brightness correction value corresponding to the thickness value of the optical film array on the surface of the sensing pixel block, and correcting the value of the sensing pixel block according to the brightness correction value;
  • for the value of a sensing pixel block beyond the brightness range, performing interpolation restoration on the sensing pixel block according to the values of the neighboring pixel blocks.
  • a light intensity sensing system comprising the aforementioned image sensor and a processor connected to the image sensor, the processor being used to obtain the imaging brightness of the image sensor in the sensing pixel area under a preset exposure duration; determine a target brightness and obtain the target sensing pixel block corresponding to the target brightness; obtain the target exposure delay corresponding to the target sensing pixel block, the target exposure delay corresponding to the thickness value of the optical film array on the surface of the target sensing pixel block; and adjust the exposure time according to the preset exposure duration and the target exposure delay.
  • multiple levels of light-transmission energy attenuation can be produced by varying the thickness of the optical film array disposed on the surface of the sensing pixel area, which in turn produces multiple levels of exposure delay corresponding to the sensing pixel blocks with optical film arrays of different thicknesses on their surfaces; as a result, the imaging brightness levels corresponding to multiple exposure durations can be obtained with fewer exposures, the optimal exposure duration can be determined quickly by selecting the appropriate imaging brightness, detecting the light intensity takes less time than in the traditional technology, and the scheme adapts better to scenes where the ambient light changes rapidly.
  • Figure 1 is a schematic diagram of the relationship between the brightness range of the LDR image imaging and the exposure time
  • Figure 2 is an effect diagram of LDR images under different exposure durations
  • Figure 3 is a schematic diagram of the step-by-step detection of the optimal exposure time in the traditional technology
  • Figure 4 is a schematic diagram of a light intensity sensing system in an embodiment
  • Fig. 5 is a schematic diagram of an image sensor in an embodiment
  • FIG. 6 is a schematic diagram of a sensing pixel block with an optical film array disposed on its surface in an embodiment
  • FIG. 7 is a distribution diagram of the thickness values of the optical film array on the surfaces of the sensing pixel blocks in a 4×4 sensing pixel area in an embodiment
  • FIG. 8 is a distribution diagram of the thickness values of the optical film array on the surfaces of the sensing pixel blocks in a 4×2 sensing pixel area in an embodiment
  • FIG. 9 is a flowchart of a light intensity sensing method in an embodiment
  • FIG. 10 is a schematic diagram of a light intensity sensing method for detecting the optimal exposure time in an embodiment
  • Figure 11 is a schematic diagram of an imaging method in an embodiment
  • Fig. 12 is a schematic diagram of the composition of a computer system running the aforementioned light intensity sensing method in an embodiment.
  • because of its limited dynamic range, a picture obtained by taking photos usually has only 256 brightness levels and is therefore usually an LDR (Low Dynamic Range) image.
  • Figure 1 shows the brightness range of the LDR image and the change in the brightness range of the LDR image that is generated with the increase of exposure time.
  • the brightness range of the LDR image is a range window, and changes in the exposure time will not change the LDR brightness range, but only affect the lower limit of the brightness range.
  • for example, comparing a shorter exposure time T0 with a longer exposure time T1, for the LDR images collected by the traditional image sensor it can be seen that the lower limit of the brightness range of the LDR image corresponding to T0 is lower than that corresponding to T1, but the brightness range is the same.
  • as shown in FIG. 2, an ordinary camera shooting the same scene in the same lighting environment with a shorter, a reasonable, and a longer exposure time obtains LDR image 1, LDR image 2 and LDR image 3 respectively. It can be seen that, for an ordinary camera, a short exposure time makes the image too dark and a long exposure time over-exposes the image; only a reasonable exposure time can capture a relatively good image. Therefore, the automatic exposure technology in the traditional technology needs to obtain a reasonable exposure time corresponding to the scene by sensing the light intensity, and then perform imaging.
  • the light intensity sensing method in the traditional technology can be seen in FIG. 3; its principle is a step-by-step detection method.
  • the image sensor first collects an image under the exposure time T0 and then analyzes whether the exposure of the image corresponding to T0 is reasonable; if not, the exposure time is increased or decreased by one step ΔT and an image is collected again under T1 = T0 + ΔT, and so on, until a reasonable exposure time T_best is found.
  • the time required to find T_best is therefore T_cost = T0 + T1 + .... + T_best, i.e., at least the sum of the exposure times of all the images collected for analysis, which makes the traditional scheme time-consuming and unable to adapt to scenes where the light intensity changes rapidly.
  • a light intensity sensing system includes:
  • the image sensor 10 may be a flat-panel structure including a photosensitive surface on which a plurality of pixels are arranged in an orderly manner, that is, sampling points that collect light signals, convert them into electrical signals, and encode the electrical signals into the pixels of an image.
  • physically, the photosensitive surface is a CCD (Charge-Coupled Device, a detection element that represents the signal magnitude by the amount of charge and transfers the signal by coupling) or a CMOS (Complementary Metal Oxide Semiconductor) photosensitive element.
  • one or more sensing pixel areas 12 are provided on the photosensitive surface, and the sensing pixel area 12 includes at least two groups of sensing pixel blocks (the pixel blocks represented by different gray values in the sensing pixel area 12 in FIG. 4).
  • the sensing pixel area may be set at the edge and/or the center of the photosensitive surface. In another embodiment, the sensing pixel area is uniformly and sparsely arranged on the entire photosensitive surface.
  • the image sensor 10 further includes an optical film array 14 arranged on the surface of the sensing pixel block.
  • the optical film arrays 14 on the surfaces of sensing pixel blocks of the same group have the same thickness, while the optical film arrays on the surfaces of sensing pixel blocks of different groups have different thicknesses.
  • the thickness of the optical film array on the surface of a sensing pixel block corresponds to the exposure delay of the sensing pixel block.
  • FIG. 4 shows the photosensitive surface of the image sensor 10, one of the squares corresponds to a pixel block, and a pixel block can contain at least one pixel.
  • one pixel can be regarded as a pixel block, or 2 ⁇ 2, or 3 ⁇ 3 pixels as a pixel block.
  • the pixel blocks with gray scales in the sensing pixel area 12 in FIG. 4 are sensing pixel blocks provided with optical film arrays on the surface, and the gray scale represents the thickness of the optical film array on the surface of the sensing pixel blocks.
  • the manner in which the optical film array is disposed on the surface of the sensing pixel area includes at least one of deposition, patterning, or etching.
  • the optical film arrays on the surface of different sensing pixel blocks can be the same or different.
  • the sensing pixel blocks with the same thickness of the surface optical film array belong to the same group (the pixel blocks with the same gray scale in the sensing pixel area 12 in Figure 4 are the same group).
  • the sensing pixel blocks with different thickness of the surface optical film array belong to different groups (the pixel blocks with different gray levels in the sensing pixel area 12 in Figure 4 belong to different groups)
  • Figures 5 and 6 show the shape and thickness distribution of the optical film arrays on the surfaces of different groups of sensing pixel blocks, including 4 sensing pixel blocks belonging to different groups with thicknesses d1, d2, d3 and d4, where d1<d2<d3<d4; in the following, these four sensing pixel blocks are referred to by their thickness values to distinguish them.
  • after the optical film array is set on the surface of the sensing pixel block, during the imaging process, when light passes through the optical film array into the sensing pixel block, energy loss occurs due to the transmittance of the optical film array, and the thicker the optical film array, the lower its transmittance and the greater the energy loss; the exposure time corresponds to the energy of the light signal entering the sensing pixel block: the longer the exposure time, the greater the energy of the light signal entering the sensing pixel block, the higher the corresponding brightness and the brighter the picture; the shorter the exposure time, the smaller the energy of the light signal entering the sensing pixel block, the lower the corresponding brightness and the darker the picture.
  • the energy loss caused by the transmittance of the optical film array can therefore be treated as equivalent to the reduction in acquired energy caused by a reduction in exposure time, so the thickness of the optical film array on the surface of a sensing pixel block can correspond to an exposure delay of that sensing pixel block.
  • for the same ambient light, the energy of the light signal entering the sensing pixel block d1 is the largest, so the brightness detected through the block d1 is the highest, while the light-signal energy entering the sensing pixel block d4 is the smallest and the brightness detected through the sensing pixel block d4 is the lowest.
  • conversely, to reach the same detected brightness, the light-signal energy that needs to enter the sensing pixel blocks d1, d2, d3 and d4 is different: the sensing pixel block d1 requires the least light-signal energy, so only a shorter exposure time is needed, while the sensing pixel block d4 requires the most light-signal energy, so a longer exposure time is needed to ensure that enough light-signal energy enters it; therefore, the change of the thickness value of the optical film array on the surface of a sensing pixel block from d1 to d4 is equivalent to an exposure delay.
  • a pixel block without an optical film array on the surface is equivalent to a perceptual pixel block with a thickness of 0, that is, the corresponding exposure delay is 0 (real exposure time).
  • the correspondence between the thickness value of a sensing pixel block and its exposure delay can be recorded in advance.
  • for example, the thickness value di of the optical film array on the surface of a sensing pixel block can be recorded as corresponding to the exposure delay Ti, and the relative position of each sensing pixel block on the photosensitive surface can then be recorded, thereby establishing a mapping between each sensing pixel block and its corresponding exposure delay Ti.
  • the thickness distribution of the optical film array over the groups of sensing pixel blocks satisfies a logarithmic distribution whose base is the basic attenuation rate of the optical film array, that is, di = log_n(i) × d1; this is because the transmittance of the optical film array satisfies ti = n^(−i) × t1, where i indexes the thickness values, t1 is the highest transmittance and n is the basic attenuation rate.
  • the groups of sensing pixel blocks in the same sensing pixel area are uniformly and sparsely arranged within the sensing pixel area; the thickness distribution of the sensing pixel blocks within the sensing pixel area is a non-monotonic or staggered arrangement.
  • Figure 7 shows the thickness distribution of the optical film array on the surfaces of the sensing pixel blocks in a 4×4 sensing pixel area, where the depth of the color gray level represents the thickness value; pixel blocks with the same gray level belong to the same group and have optical film arrays of the same thickness on their surfaces. It can be seen from Fig. 7 that each group of sensing pixel blocks is evenly and sparsely distributed over the entire 4×4 sensing pixel area.
  • Figure 8 shows the thickness distribution of the optical film array on the surfaces of the sensing pixel blocks in a 4×2 sensing pixel area; it can be seen that the thickness distribution of the sensing pixel blocks in the sensing pixel area is non-monotonic or staggered, i.e., it does not exhibit a monotonic increase or decrease from d1 and d4 toward d2 and d3.
  • the processor 20 performs light intensity sensing based on the above-mentioned image sensor 10 provided with a sensing pixel area, and can be used to obtain the imaging brightness of the image sensor in the sensing pixel area under a preset exposure duration; determine a target brightness and acquire the target sensing pixel block corresponding to the target brightness; acquire the target exposure delay corresponding to the target sensing pixel block, where the target exposure delay corresponds to the thickness value of the optical film array on the surface of the target sensing pixel block; and adjust the exposure time according to the preset exposure duration and the target exposure delay.
  • the processor 20 may be a computer device, which may rely on a computer program to execute a light intensity sensing method.
  • the computer program may run on any computer system based on the von Neumann system.
  • the device can be a smart phone, a personal computer, or other digital products with an image processing chip, such as a digital camera, a SLR camera, etc.
  • the light intensity sensing method is shown in Figure 9 and includes:
  • Step S102 Obtain the imaging brightness of the image sensor in the sensing pixel area under the preset exposure time.
  • if the preset exposure duration is T0, the image sensor 10 is exposed for the time T0; each pixel block on the image sensor 10 then receives the corresponding light signal, and the magnitude of the received light-signal energy is reflected in the brightness of that pixel block. As shown in FIG. 10, for an ordinary pixel block without an optical film array on its surface, the energy it receives is not attenuated, and it receives the energy of the entire exposure time T0.
  • for the sensing pixel block d1, the energy it receives is attenuated by the optical film array of thickness d1, and the attenuated energy is equivalent to the energy missing due to the exposure delay T1, i.e., it is equivalent to the sensing pixel block d1 receiving only the energy of the exposure time T0−T1; the imaging brightness of the sensing pixel block d1 is therefore lower than the brightness of an ordinary pixel block without an optical film array on its surface.
  • similarly, for the sensing pixel blocks d2 and d3, the received energy is attenuated by the optical film arrays with thicknesses d2 and d3, and the attenuated energy is equivalent to the energy missing due to the exposure delays T2 and T3, i.e., it is equivalent to the sensing pixel blocks d2 and d3 receiving only the energy of the exposure times T0−T2 and T0−T3; since d1<d2<d3, the imaging brightness of the sensing pixel block d1 > the imaging brightness of the sensing pixel block d2 > the imaging brightness of the sensing pixel block d3.
  • Step S104 Determine the target brightness, and obtain the target perception pixel block corresponding to the target brightness.
  • for example, if detection finds that the imaging brightness of d2 is appropriate, the sensing pixel block whose surface optical film array has the thickness d2 is taken as the target sensing pixel block.
  • Step S106 Obtain a target exposure delay corresponding to the target sensing pixel block, where the target exposure delay corresponds to the thickness value of the optical film array on the surface of the target sensing pixel block.
  • the value of the exposure delay corresponding to the sensing pixel block whose surface optical film array has the thickness d2 is pre-recorded for the image sensor 10, namely the exposure delay T2 shown in FIG. 10; T2 is then set as the target exposure delay.
  • Step S108 Adjust the exposure time according to the preset exposure duration and the target exposure delay.
  • that is, the exposure time is adjusted to T_best = T0−T2, so that ordinary pixel blocks receive the light-signal energy over the exposure time T0−T2; the imaging brightness of an ordinary pixel block is then equivalent to the imaging brightness of the sensing pixel block d2 measured during detection under the exposure time T0, so that the exposure degree becomes consistent.
  • sensing pixel areas can be set in both the center area and edge area of the image sensor, and sensing pixel blocks with the same thickness can be set in different sensing pixel areas.
  • in this case, there are two or more target sensing pixel blocks corresponding to the target imaging brightness.
  • obtaining the target exposure delay corresponding to the target sensing pixel block may include:
  • the pixel blocks in the central area of the photosensitive surface usually correspond to the photographed subject; therefore, a sensing pixel block located in the central area of the photosensitive surface should have a higher weight value, while a sensing pixel block located at the edge of the photosensitive surface should have a lower weight value, and the target exposure delay is obtained by weighted averaging.
  • the imaging method further includes:
  • Step S202 Obtain the value of each pixel block on the photosensitive surface of the image sensor, where the pixel block includes a sensing pixel block.
  • Step S204 For the value of the sensing pixel block, obtain a brightness correction value corresponding to the thickness value of the optical film array on the surface of the sensing pixel block, and correct the value of the sensing pixel block according to the brightness correction value.
  • for an ordinary pixel block without an optical film array on its surface, its pixel value is the actual imaging information; for a sensing pixel block with an optical film array on its surface, the brightness correction is applied in reverse.
  • for example, if the transmittance of the sensing pixel block d1 is attenuated by half, then during the imaging process, for the sensing pixel block d1, the brightness of the imaging information it acquires needs to be corrected to be equivalent to the brightness value of the imaging information captured when the transmittance is not attenuated.
  • Step S206 For the value of the perceptual pixel block that exceeds the brightness range, perform interpolation restoration on the perceptual pixel block according to the value of the neighboring pixel block.
  • the imaging information of the perceptual pixel block that is too bright or too dark even after correction can be interpolated and restored based on the imaging information of its neighboring pixel blocks.
  • the value of the adjacent 3 ⁇ 3 pixel block can be selected, and the average value is calculated as the value of the perceptual pixel block. This will not lead to the loss of information in part of the sensing pixel block.
  • multiple levels of light-transmission energy attenuation can be produced by varying the thickness of the optical film array disposed on the surface of the sensing pixel area, which in turn produces multiple levels of exposure delay corresponding to the sensing pixel blocks with optical film arrays of different thicknesses on their surfaces; as a result, the imaging brightness levels corresponding to multiple exposure durations can be obtained with fewer exposures, the optimal exposure duration can be determined quickly by selecting the appropriate imaging brightness, detecting the light intensity takes less time than in the traditional technology, and the scheme adapts better to scenes where the ambient light changes rapidly.
  • the “consistent” mentioned above means identical, or identical within a certain error range: owing to the semiconductor manufacturing process, the actual thickness of some optical film media will inevitably deviate from the theoretically designed thickness value, but as long as the deviation is within a certain error range, it only affects the range of the transmittance of the optical film medium; when the overlap of the transmittance ranges corresponding to two photosensitive points is greater than a certain preset ratio, the transmittances of the two can be considered the same, or the spatial distribution or thickness distribution of the optical film media corresponding to the photosensitive points can be considered the same.
  • FIG. 12 shows a computer system based on the von Neumann system that runs the above-mentioned light intensity sensing method. Specifically, it may include an external input interface 1001, a processor 1002, a memory 1003, and an output interface 1004 connected through a system bus.
  • the external input interface 1001 may optionally include at least a network interface 10012 and a USB interface 10014.
  • the memory 1003 may include an external memory 10032 (such as a hard disk, an optical disk, or a floppy disk, etc.) and an internal memory 10034.
  • the output interface 1004 may at least include a display screen 10042 and other devices.
  • the operation of this method is based on a computer program.
  • the program file of the computer program is stored in the external memory 10032 of the aforementioned von Neumann system-based computer system 10, and is loaded into the internal memory 10034 during operation. It is then compiled into machine code and then transferred to the processor 1002 for execution, so that logically virtual modules are formed in the computer system 10 based on the von Neumann system.
  • the input parameters are received through the external input interface 1001, and transferred to the memory 1003 for buffering, and then input to the processor 1002 for processing, and the processed result data may be buffered in the memory 1003 It is processed later or passed to the output interface 1004 for output.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)

Abstract

An embodiment of the present invention discloses an image sensor comprising a photosensitive surface on which one or more sensing pixel areas are provided, the sensing pixel area comprising at least two groups of sensing pixel blocks; and an optical film array arranged on the surface of the sensing pixel blocks, wherein the optical film arrays on the surfaces of sensing pixel blocks of the same group have the same thickness, and the optical film arrays on the surfaces of sensing pixel blocks of different groups have different thicknesses; the thickness of the optical film array on the surface of a sensing pixel block corresponds to the exposure delay of that sensing pixel block. In addition, an embodiment of the present invention discloses a light intensity sensing method and system based on the above image sensor, which can sense the ambient light intensity more quickly, thereby reducing the response time of automatic exposure.

Description

Image sensor, light intensity sensing system and method
Technical Field
The present invention relates to the field of image processing, and in particular to an image sensor and a light intensity sensing system and method.
Background
The image sensor in the traditional technology can usually only effectively detect low dynamic range (LDR: Low Dynamic Range) images. In order for the image sensor to capture as much useful information as possible, the limited dynamic range must be used within a suitable brightness range; it is therefore necessary to meter the photographed scene so as to determine a reasonable exposure time for automatic exposure.
At present, most commercial imaging devices use the TTL (Through The Lens) approach, integrating the sensor responsible for metering behind the imaging lens to reduce the error between the imaging sensor and the metering sensor; there are two types, separate and combined.
The separate type includes two sensors: one sensor is responsible for imaging and the other for detecting light intensity. Although it can provide better metering results, it adds extra electronic components and optical paths, increasing cost and volume, and it is mainly used in mid-to-high-end imaging equipment such as SLR cameras.
The combined type means that the imaging sensor is responsible for both imaging and metering, and it is mainly used in devices that require high integration, such as mobile phone lenses. However, when a traditional imaging image sensor is used for metering, the exposure conditions can only be approached step by step according to the current ratio of overexposure to underexposure, which takes a long time and has a certain lag. In continuous shooting, when the brightness of the scene changes greatly, it is not suitable for occasions where lighting conditions switch frequently.
Summary of the Invention
Based on this, in order to solve the problem in the prior art that metering with a traditional imaging image sensor adjusts the exposure time slowly, an image sensor is proposed. The image sensor can be used to sense the ambient light intensity so as to adjust the exposure time. Specifically:
An image sensor includes a photosensitive surface on which one or more sensing pixel areas are provided, the sensing pixel area including at least two groups of sensing pixel blocks;
an optical film array arranged on the surface of the sensing pixel blocks, wherein the optical film arrays on the surfaces of sensing pixel blocks of the same group have the same thickness, and the optical film arrays on the surfaces of sensing pixel blocks of different groups have different thicknesses;
the thickness of the optical film array on the surface of a sensing pixel block corresponds to the exposure delay of the sensing pixel block.
In one embodiment, the thickness distribution of the optical film array over the groups of sensing pixel blocks satisfies a logarithmic distribution whose base is the basic attenuation rate of the optical film array.
In one embodiment, the groups of sensing pixel blocks in the same sensing pixel area are uniformly and sparsely arranged within the sensing pixel area; the thickness distribution of the sensing pixel blocks within the sensing pixel area is a non-monotonic or staggered arrangement.
In one embodiment, the sensing pixel area is arranged at the edge and/or the center of the photosensitive surface.
In one embodiment, the sensing pixel areas are uniformly and sparsely arranged over the entire photosensitive surface.
In one embodiment, the manner in which the optical film array is disposed on the surface of the sensing pixel area includes at least one of deposition, patterning, or etching.
In addition, in order to solve the problem in the prior art that metering with a traditional imaging image sensor adjusts the exposure time slowly, a light intensity sensing method based on the aforementioned image sensor is also proposed.
A light intensity sensing method, based on the aforementioned image sensor, includes:
obtaining the imaging brightness of the image sensor in the sensing pixel area under a preset exposure duration;
determining a target brightness and obtaining the target sensing pixel block corresponding to the target brightness;
obtaining the target exposure delay corresponding to the target sensing pixel block, the target exposure delay corresponding to the thickness value of the optical film array on the surface of the target sensing pixel block;
adjusting the exposure time according to the preset exposure duration and the target exposure delay.
In one embodiment, there are two or more target sensing pixel blocks;
obtaining the target exposure delay corresponding to the target sensing pixel blocks includes:
calculating, in combination with the relative positions of the target sensing pixel blocks on the photosensitive surface, a weighted value of the exposure delays of the two or more target sensing pixel blocks as the target exposure delay.
In one embodiment, the method further includes:
receiving a photographing instruction and obtaining the value of each pixel block on the photosensitive surface of the image sensor, the pixel blocks including sensing pixel blocks;
for the value of a sensing pixel block, obtaining a brightness correction value corresponding to the thickness value of the optical film array on the surface of the sensing pixel block, and correcting the value of the sensing pixel block according to the brightness correction value;
for the value of a sensing pixel block that exceeds the brightness range, performing interpolation restoration on the sensing pixel block according to the values of neighboring pixel blocks.
In addition, in order to solve the problem in the prior art that metering with a traditional imaging image sensor adjusts the exposure time slowly, a light intensity sensing system based on the aforementioned image sensor is also proposed.
A light intensity sensing system includes the aforementioned image sensor and a processor connected to the image sensor, the processor being configured to obtain the imaging brightness of the image sensor in the sensing pixel area under a preset exposure duration; determine a target brightness and obtain the target sensing pixel block corresponding to the target brightness; obtain the target exposure delay corresponding to the target sensing pixel block, the target exposure delay corresponding to the thickness value of the optical film array on the surface of the target sensing pixel block; and adjust the exposure time according to the preset exposure duration and the target exposure delay.
Implementing the embodiments of the present invention has the following beneficial effects:
With the above image sensor and the light intensity sensing system and method based on it, multiple levels of light-transmission energy attenuation can be produced by varying the thickness of the optical film array disposed on the surface of the sensing pixel area, which in turn produces multiple levels of exposure delay corresponding to the sensing pixel blocks with optical film arrays of different thicknesses on their surfaces. As a result, only a small number of exposures are needed to obtain the imaging brightness levels corresponding to multiple exposure durations, and the optimal exposure duration can be determined quickly by selecting the appropriate imaging brightness. Compared with the traditional technology, detecting the light intensity takes less time and adapts better to scenes where the ambient light changes rapidly.
Brief Description of the Drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Figure 1 is a schematic diagram of the relationship between the brightness range of LDR imaging and the exposure time;
Figure 2 shows the effect of LDR images under different exposure durations;
Figure 3 is a schematic diagram of the step-by-step detection of the optimal exposure duration in the traditional technology;
Figure 4 is a schematic diagram of a light intensity sensing system in an embodiment;
Figure 5 is a schematic diagram of an image sensor in an embodiment;
Figure 6 is a schematic diagram of a sensing pixel block with an optical film array disposed on its surface in an embodiment;
Figure 7 is a distribution diagram of the thickness values of the optical film array on the surfaces of the sensing pixel blocks in a 4×4 sensing pixel area in an embodiment;
Figure 8 is a distribution diagram of the thickness values of the optical film array on the surfaces of the sensing pixel blocks in a 4×2 sensing pixel area in an embodiment;
Figure 9 is a flowchart of a light intensity sensing method in an embodiment;
Figure 10 is a schematic diagram of how a light intensity sensing method detects the optimal exposure duration in an embodiment;
Figure 11 is a schematic diagram of an imaging method in an embodiment;
Figure 12 is a schematic diagram of the composition of a computer system running the aforementioned light intensity sensing method in an embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Because of its limited dynamic range, a picture taken by an ordinary camera usually has only 256 brightness levels, so it is usually an LDR (Low Dynamic Range) image. When taking pictures, different external lighting environments will cause the generated LDR image to show different exposure effects, and under the same external lighting environment, different exposure durations will also produce different image effects.
Referring to Figure 1, Figure 1 shows the brightness range of an LDR image and how that brightness range changes as the exposure time increases. It can be seen from Figure 1 that the brightness range of an LDR image is a range window: a change in exposure time does not change the LDR brightness range itself, but only affects the lower limit of the brightness range. For example, comparing a shorter exposure time T0 with a longer exposure time T1, for the LDR images collected by a traditional image sensor, the lower limit of the brightness range of the LDR image corresponding to T0 is lower than that corresponding to T1, but the brightness ranges are the same.
Referring again to Figure 2, when an ordinary camera shoots the same scene in the same lighting environment with a shorter, a reasonable, and a longer exposure time respectively, LDR image 1, LDR image 2 and LDR image 3 are obtained. It can be seen that, for an ordinary camera, a short exposure time makes the image too dark and a long exposure time over-exposes the image; only a reasonable exposure time can capture a relatively good image. Therefore, the automatic exposure technology in the traditional technology needs to obtain a reasonable exposure time corresponding to the scene by sensing the light intensity before imaging.
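A small numeric sketch (not part of the original disclosure) can illustrate the Figure 2 effect. It assumes an idealized pixel whose raw value is proportional to scene radiance multiplied by exposure time and is clipped to an 8-bit range; the scene radiances and the gain are made-up numbers. A short exposure pushes most of the scene toward black, a long exposure clips the highlights, and only an intermediate exposure keeps the whole scene inside the LDR window.

```python
import numpy as np

def expose(radiance, exposure_s, gain=100.0, full_scale=255):
    """Hypothetical ideal sensor: value = clip(gain * radiance * T, 0, 255)."""
    raw = gain * radiance * exposure_s
    return np.clip(np.round(raw), 0, full_scale).astype(int)

scene = np.array([0.1, 0.5, 2.0, 8.0, 30.0])   # made-up scene radiances, dark to bright

for t, label in ((0.005, "short"), (0.05, "reasonable"), (0.5, "long")):
    print(f"{label:10s} T={t:5.3f}s ->", expose(scene, t))
```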
The light intensity sensing method in the traditional technology can be seen in Figure 3; its principle is a step-by-step detection method. The image sensor first collects an image under the exposure time T0, and then analyzes whether the exposure of the image corresponding to T0 is reasonable. If not, the exposure time is increased/decreased by one step ΔT, that is, the image sensor collects an image again under the exposure time T1, where T1 = T0 + ΔT, and then analyzes whether the exposure of the image corresponding to T1 is reasonable. If not, it continues to increase the exposure time by another step ΔT and collect images. If the reasonable exposure time is T_best, then, as can be seen from Figure 3, the time required to find T_best is:
T_cost = T0 + T1 + .... + T_best
That is, however many images are collected for analysis, at least the sum of the exposure times needed for those acquisitions must be spent. As a result, the scheme in the traditional technology of using the imaging image sensor for light intensity sensing takes a long time and cannot adapt to scenes where the light intensity changes rapidly.
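For contrast with the single-exposure scheme introduced below, the following is a minimal sketch of this traditional step-by-step loop. `capture` and `exposure_is_reasonable` are placeholder callables standing in for the camera pipeline and the over/under-exposure analysis; they are assumptions for illustration, not part of the patent.

```python
def step_search(capture, exposure_is_reasonable, t0, step, max_iters=50):
    """Traditional metering: adjust exposure by one step per captured frame.

    Returns (t_best, t_cost): the accepted exposure time and the total time
    spent exposing, i.e. T_cost = T0 + T1 + ... + T_best.
    """
    t, t_cost = t0, 0.0
    for _ in range(max_iters):
        frame = capture(t)
        t_cost += t                               # every trial frame costs its full exposure
        verdict = exposure_is_reasonable(frame)   # e.g. "ok", "under", or "over"
        if verdict == "ok":
            return t, t_cost
        t = t + step if verdict == "under" else t - step
    return t, t_cost
```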
为此,为解决上述传统技术中使用用来成像的图像传感器进行光强感知的方案耗时较长,不能适应光强快速变化的场景的技术问题,在本发明实施例中,特提出了一种光强感知***,如图4所示,包括:
图像传感器10,以及与图像传感器连接的处理器20,其中:
如图4和图5所示,图像传感器10可以是平板型结构,包括一感光面,在感光面上有序排列有多个像素点,即采集光信号转化为电信号,并将电信号编码成为图像的像素的采样点。物理上,感光面为CCD(英文:Charge-coupled Device,中文:电荷耦合器件,一种用电荷量表示信号大小,用耦合方式传输信号的探测元件)和CMOS(英文:Complementary Metal Oxide Semiconductor,中文:互补金属氧化物半导体,一种感光元件)。
在本实施例中,感光面上设置有一个或一个以上的感知像素区域12,感知像素区域12包括至少两组感知像素块(如图1中感知像素区域12中用不同灰度值表示的像素块)。在本实施例中,如图4所示,感知像素区域可设置于所述感光面的边缘和/或中心位置。在另一个实施例中,感知像素区域均匀稀疏设置在整个所述感光面。
再参考图5和图6所示,图像传感器10还包括设置在感知像素块表面的光学薄膜阵列14,同一组感知像素块表面的光学薄膜阵列14厚度相同,不同组感知像素块表面的光学薄膜阵列厚度不同。光学薄膜阵列在一感知像素块表面的厚度对应感知像素块的曝光延时。
也就是说,图4展示了图像传感器10的感光面,其中的一个方格对应一个 像素块,一个像素块可包含至少一个像素点,例如,可一个像素点作为一个像素块,或者以2×2,或3×3的像素点作为一个像素块。而图4中感知像素区域12内的具有灰度的像素块即为表面设置了光学薄膜阵列的感知像素块,且灰度的深浅代表了感知像素块表面的光学薄膜阵列的厚度的大小。
在本实施例中,光学薄膜阵列设置在所述感知像素区域表面的方式包括沉积、图形化或刻蚀工艺中的至少一种。
不同感知像素块表面的光学薄膜阵列可相同也可不同,表面光学薄膜阵列厚度相同的感知像素块为同一组(如图4中感知像素区域12内灰度相同的像素块即为同一组),而表面光学薄膜阵列厚度不同的感知像素块为不同组(如图4中感知像素区域12内灰度不同的像素块分别属于不同组)
图5和图6则展示了不同组的感知像素块表面设置的光学薄膜阵列的形态及其厚度分布情况,其中包括4个属于不同组的感知像素块,厚度分布为d1、d2、d3和d4,且d1<d2<d3<d4,以下以厚度值代称这4个感知像素块以示区分。
感知像素块在表面设置了光学薄膜阵列之后,在进行成像过程中,光透射光学薄膜阵列进入感知像素块时,会由于光学薄膜阵列的透光性而产生能量损耗,且光学薄膜阵列越厚,其透光性越低,产生的能量损耗越多;而曝光时间的大小对应了进入感知像素块的光信号的能量大小,曝光时长越长,则进入感知像素块的光信号的能量越大,对应亮度越高,画面越亮;曝光时长越小,则进入感知像素块的光信号的能量越小,对应亮度越低,画面越暗。因此,光学薄膜阵列的透光性导致的能量损耗可等效为曝光时间的减少所导致的能量获取量减少,因此,光学薄膜阵列在一感知像素块表面的厚度可对应该感知像素块的曝光延时。
例如,对于厚度分布为d1、d2、d3和d4的4个感知像素块,由于d1<d2<d3<d4,那么,对于同一环境光,进入感知像素块d1的光信号能量最多,通过感知像素块d1检测到的亮度最高,而进入感知像素块d4的光信号能量最少,通过感知像素块d4检测到的亮度最低。反言之,对于达到同一检测亮度,需要进入感知像素块d1、d2、d3和d4的光信号能量各不相同,且需要进入感知像素块d1的光信号能量要求最低,则只需要较短的曝光时间即可;需要进入感知像素块d4的光信号能量要求最高,需要较长的曝光时间以保证足够的光信号能量进入感知像素块d4,因此,感知像素块表面的光学薄膜阵列的厚度值由 d1至d4的变化等效于产生了曝光延时。
表面未设置光学薄膜阵列的像素块相当于厚度值为0的感知像素块,亦即对应的曝光延时为0(真实曝光时长)。可预先记录感知像素块的厚度值与曝光延时的对应关系,例如,可记录感知像素块表面的光学薄膜阵列的厚度值di对应曝光延时T i,再记录各个感知像素块在感光面上的相对位置,即建立了各个感光像素块与其对应的曝光延时T i的映射关系。
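The per-block calibration described here can be represented as a simple lookup table. The sketch below is one possible representation only: block positions as (row, column) indices, thicknesses in micrometres and delays in milliseconds are all made-up illustrative values, and the dictionary layout is an assumption rather than the patent's data structure.

```python
# Illustrative calibration record for one sensing pixel area (all numbers are made up).
# thickness_of[(r, c)]  : optical-film thickness (um) on the block at (r, c); 0.0 = ordinary block
# delay_of_thickness[d] : exposure delay T_i (ms) recorded for thickness d

thickness_of = {
    (0, 0): 0.0,  (0, 2): 0.10,
    (1, 1): 0.20, (1, 3): 0.35,
    (2, 0): 0.55,
}
delay_of_thickness = {0.0: 0.0, 0.10: 2.0, 0.20: 4.5, 0.35: 7.5, 0.55: 11.0}

def exposure_delay(block):
    """Map a block position to its recorded exposure delay via its film thickness."""
    return delay_of_thickness[thickness_of[block]]

print(exposure_delay((1, 1)))   # -> 4.5 (ms)
```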
In one embodiment, the thickness distribution of the optical film array over the groups of sensing pixel blocks satisfies a logarithmic distribution whose base is the basic attenuation rate of the optical film array, that is:
di = log_n(i) × d1
This is because the transmittance of the optical film array satisfies ti = n^(−i) × t1, where i is the index over the thickness-value distribution, t1 is the highest transmittance, ti is the transmittance of the i-th thickness value, and n is the basic attenuation rate. When the thickness distribution of the optical film array over the groups of sensing pixel blocks satisfies a logarithmic distribution whose base is the basic attenuation rate of the optical film array, it can be seen that the transmittance ti corresponding to each group of sensing pixel blocks varies linearly, which makes it more convenient to map the exposure delays.
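As a hedged numerical reading of the transmittance relation (assuming, as the description's equivalence between transmittance loss and reduced exposure time implies, that pixel response is linear in collected light energy, so a transmittance ti over the preset exposure T0 behaves like an un-attenuated exposure of ti·T0 and hence an exposure delay Ti = (1 − ti)·T0), a small worked example might look as follows; n, t1 and T0 are made-up values.

```python
# Made-up numbers: base attenuation rate n, top transmittance t1, preset exposure T0 (ms).
n, t1, T0 = 2.0, 0.8, 20.0

for i in range(1, 5):
    t_i = t1 * n ** (-i)            # patent's transmittance relation: t_i = n^(-i) * t_1
    delay_i = (1.0 - t_i) * T0      # equivalent exposure delay under the linear-response assumption
    equiv_exposure = T0 - delay_i   # what the sensing block effectively "sees": t_i * T0
    print(f"i={i}: t_i={t_i:.3f}  T_i={delay_i:.2f} ms  equivalent exposure={equiv_exposure:.2f} ms")
```

With these made-up numbers each successive group effectively sees about half the previous group's exposure (8, 4, 2, 1 ms), which is one way to picture why this distribution makes the exposure-delay mapping convenient; note that under this simple reading the delay associated with a given film scales with the preset exposure T0.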
Further, the groups of sensing pixel blocks in the same sensing pixel area are uniformly and sparsely arranged within the sensing pixel area; the thickness distribution of the sensing pixel blocks within the sensing pixel area is a non-monotonic or staggered arrangement.
Referring to Figures 7 and 8, Figure 7 shows the thickness distribution of the optical film array on the surfaces of the sensing pixel blocks within a 4×4 sensing pixel area, where the depth of the color gray level represents the magnitude of the thickness value, and pixel blocks with the same gray level belong to the same group and have optical film arrays of the same thickness on their surfaces. It can be seen from Figure 7 that each group of sensing pixel blocks is uniformly and sparsely distributed over the entire 4×4 sensing pixel area. Figure 8 shows the thickness distribution of the optical film array on the surfaces of the sensing pixel blocks within a 4×2 sensing pixel area; it can be seen that the thickness distribution of the sensing pixel blocks within the sensing pixel area is a non-monotonic or staggered arrangement, that is, it does not exhibit a monotonic increase or decrease from d1 and d4 toward d2 and d3.
Setting the thickness distribution of the groups of sensing pixel blocks in the same sensing pixel area in the above way prevents image samples over a continuous large brightness range from being disturbed by noise, so the noise resistance is stronger.
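One way to picture the "uniform, sparse, non-monotonic" placement is as a Latin-square-like assignment of the thickness groups inside the area. The 4×4 pattern below is illustrative only; it is not the pattern of Figure 7.

```python
import numpy as np

# Illustrative 4x4 sensing pixel area: entries are thickness-group indices (1..4).
# Each group appears once per row and column (uniform, sparse), and no row or
# column orders the groups monotonically (staggered, non-monotonic).
layout = np.array([
    [1, 3, 2, 4],
    [4, 2, 3, 1],
    [2, 4, 1, 3],
    [3, 1, 4, 2],
])

def is_monotonic(seq):
    diffs = np.diff(seq)
    return np.all(diffs > 0) or np.all(diffs < 0)

assert all(sorted(row) == [1, 2, 3, 4] for row in layout.tolist())
assert not any(is_monotonic(row) for row in layout)       # rows are staggered
assert not any(is_monotonic(col) for col in layout.T)     # columns are staggered
```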
In this embodiment, the processor 20 performs light intensity sensing based on the above image sensor 10 provided with sensing pixel areas, and can be used to obtain the imaging brightness of the image sensor in the sensing pixel area under a preset exposure duration; determine a target brightness and obtain the target sensing pixel block corresponding to the target brightness; obtain the target exposure delay corresponding to the target sensing pixel block, the target exposure delay corresponding to the thickness value of the optical film array on the surface of the target sensing pixel block; and adjust the exposure time according to the preset exposure duration and the target exposure delay.
Specifically, in one embodiment, the processor 20 may be a computer device that executes a light intensity sensing method by relying on a computer program; the computer program may run on any computer system based on the von Neumann architecture, and the computer device may be a smartphone, a personal computer, or another digital product with an image processing chip, such as a digital camera or an SLR camera.
Specifically, the light intensity sensing method is shown in Figure 9 and includes:
Step S102: obtaining the imaging brightness of the image sensor in the sensing pixel area under a preset exposure duration.
If the preset exposure duration is T0, the image sensor 10 is exposed for the time T0, and each pixel block on the image sensor 10 can then receive the corresponding light signal; the magnitude of the received light-signal energy is reflected in the brightness of that pixel block. Referring to Figure 10, for an ordinary pixel block whose surface is not provided with an optical film array, the energy it receives is not attenuated, and it receives the energy of the entire exposure time T0.
For the sensing pixel block d1, the energy it receives is attenuated by the optical film array of thickness d1, and the attenuated energy is equivalent to the energy missing due to the exposure delay T1; that is, it is equivalent to the sensing pixel block d1 receiving only the energy of the exposure time T0 − T1. The imaging brightness of the sensing pixel block d1 is therefore lower than that of an ordinary pixel block whose surface is not provided with an optical film array.
Similarly, for the sensing pixel blocks d2 and d3, the energy they receive is attenuated by the optical film arrays of thicknesses d2 and d3, and the attenuated energy is equivalent to the energy missing due to the exposure delays T2 and T3; that is, it is equivalent to the sensing pixel blocks d2 and d3 receiving only the energy of the exposure times T0 − T2 and T0 − T3 respectively. Since d1 < d2 < d3, the imaging brightness of sensing pixel block d1 > the imaging brightness of sensing pixel block d2 > the imaging brightness of sensing pixel block d3.
Thus, imaging brightness levels corresponding to d1, d2 and d3 are produced.
Step S104: determining a target brightness and obtaining the target sensing pixel block corresponding to the target brightness.
For example, referring again to Figure 10, if detection finds that the imaging brightness of d2 is appropriate, the sensing pixel block whose surface optical film array has the thickness d2 is taken as the target sensing pixel block.
Step S106: obtaining the target exposure delay corresponding to the target sensing pixel block, the target exposure delay corresponding to the thickness value of the optical film array on the surface of the target sensing pixel block.
As described above, the value of the exposure delay corresponding to the sensing pixel block in the image sensor 10 whose surface optical film array has the thickness d2 is recorded in advance, namely the exposure delay T2 shown in Figure 10; T2 can then be taken as the target exposure delay.
Step S108: adjusting the exposure time according to the preset exposure duration and the target exposure delay.
That is, the exposure time is adjusted to T_best = T0 − T2, so that ordinary pixel blocks receive the light-signal energy over the exposure time T0 − T2; the imaging brightness of an ordinary pixel block is then equivalent to the imaging brightness of the sensing pixel block d2 measured during detection under the exposure time T0, so that the exposure degree becomes consistent.
A careful analysis of the above process shows that, in the above adjustment of the exposure duration, the time spent on the adjustment is only one preset exposure duration T0; that is, with a single exposure of the image sensor 10 lasting T_cost = T0, the imaging brightness effects under multiple exposure durations T0 − T1, T0 − T2, ..., up to T0 − Ti can be obtained equivalently by relying on the transmittance of the optical film arrays on the surfaces of the sensing pixel blocks, and the exposure duration corresponding to a suitable imaging brightness can then be determined quickly by a single selection. Compared with the traditional technology, which requires T_cost = T0 + T1 + .... + T_best, fewer exposures are needed and less time is spent, so the efficiency is higher.
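Putting steps S102–S108 together, a minimal sketch of the single-exposure flow might look as follows. The helper names (`capture`, `block_brightness`, `delay_of_block`) are placeholders for whatever the camera pipeline provides, and the selection criterion "closest to a mid-scale target brightness" is one simple choice assumed here, not mandated by the patent.

```python
def sense_exposure(capture, block_brightness, delay_of_block, t0, target=128.0):
    """One exposure at T0, then pick the sensing block whose brightness is most suitable.

    capture(t)             -> raw frame exposed for t
    block_brightness(f, b) -> imaging brightness of sensing block b in frame f
    delay_of_block[b]      -> pre-recorded exposure delay T_i of sensing block b
    Returns the adjusted exposure time T_best = T0 - T_target.
    """
    frame = capture(t0)                                    # step S102: a single exposure of T0
    best_block = min(
        delay_of_block,
        key=lambda b: abs(block_brightness(frame, b) - target),
    )                                                      # step S104: target sensing pixel block
    t_target = delay_of_block[best_block]                  # step S106: its recorded exposure delay
    return t0 - t_target                                   # step S108: T_best = T0 - T_target
```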
In this embodiment, further, there are two or more target sensing pixel blocks. For example, sensing pixel areas may be provided in both the central area and the edge area of the image sensor, and sensing pixel blocks with the same thickness may be provided in different sensing pixel areas; in this case, there are two or more target sensing pixel blocks corresponding to the target imaging brightness.
In this case, obtaining the target exposure delay corresponding to the target sensing pixel blocks may include:
calculating, in combination with the relative positions of the target sensing pixel blocks on the photosensitive surface, a weighted value of the exposure delays of the two or more target sensing pixel blocks as the target exposure delay.
For example, the pixel blocks in the central area of the photosensitive surface usually correspond to the photographed subject; therefore, a sensing pixel block located in the central area of the photosensitive surface should have a higher weight value, while a sensing pixel block located at the edge of the photosensitive surface should have a lower weight value, and the target exposure delay can be obtained by weighted averaging.
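A sketch of this weighting step, assuming a simple center-distance weighting that blends linearly between a higher center weight and a lower edge weight (the exact weights are a design choice the patent leaves open):

```python
import math

def weighted_target_delay(blocks, delays, surface_shape, center_weight=3.0, edge_weight=1.0):
    """Weighted average of the exposure delays of several target sensing pixel blocks.

    blocks        : list of (row, col) positions of the target sensing blocks
    delays        : matching list of their recorded exposure delays
    surface_shape : (rows, cols) of the photosensitive surface
    """
    rows, cols = surface_shape
    cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
    max_dist = math.hypot(cy, cx)
    weights = []
    for r, c in blocks:
        closeness = 1.0 - math.hypot(r - cy, c - cx) / max_dist   # 1 at center, 0 at a corner
        weights.append(edge_weight + (center_weight - edge_weight) * closeness)
    return sum(w * d for w, d in zip(weights, delays)) / sum(weights)
```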
In one embodiment, since the sensing pixel blocks are also located on the photosensitive surface, the imaging information of the sensing pixel blocks should also be considered during imaging. As shown in Figure 11, the imaging method further includes:
Step S202: obtaining the value of each pixel block on the photosensitive surface of the image sensor, the pixel blocks including sensing pixel blocks.
Step S204: for the value of a sensing pixel block, obtaining a brightness correction value corresponding to the thickness value of the optical film array on the surface of the sensing pixel block, and correcting the value of the sensing pixel block according to the brightness correction value.
That is, for an ordinary pixel block whose surface is not provided with an optical film array, its pixel value is the actual imaging information; for a sensing pixel block whose surface is provided with an optical film array, its brightness is corrected in the reverse direction. For example, if the light-transmission attenuation of the sensing pixel block d1 is a halving, then during imaging, for the sensing pixel block d1, the brightness of the imaging information it collects needs to be corrected to be equivalent to the brightness value of the imaging information that would be collected without transmission attenuation.
Step S206: for the value of a sensing pixel block that exceeds the brightness range, performing interpolation restoration on the sensing pixel block according to the values of neighboring pixel blocks.
That is, the imaging information of a sensing pixel block that is still too bright or too dark even after correction can be restored by interpolation based on the imaging information of its neighboring pixel blocks. For example, for an over-exposed or over-dark sensing pixel block, the values of the pixel blocks in its neighboring 3×3 range can be selected and averaged to serve as the value of the sensing pixel block, so that the information of some sensing pixel blocks is not lost.
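Steps S204–S206 can be sketched as below. Two assumptions are made for illustration: the brightness correction is simply division by the block's transmittance (consistent with the halving example above), and "exceeds the brightness range" means the corrected value falls outside the sensor's 0..full-scale range. Neither is spelled out in the patent.

```python
import numpy as np

def restore_sensing_blocks(image, sensing_blocks, full_scale=255.0):
    """Correct sensing-block values, then interpolate blocks that still clip.

    image          : 2-D float array of per-block values
    sensing_blocks : dict {(row, col): transmittance} for blocks under an optical film
    """
    out = image.astype(float).copy()

    # Step S204: reverse the film attenuation (brightness correction value = 1 / transmittance).
    for (r, c), t in sensing_blocks.items():
        out[r, c] = image[r, c] / t

    # Step S206: blocks still outside the brightness range get the mean of their 3x3 neighbours.
    rows, cols = out.shape
    for (r, c) in sensing_blocks:
        if not (0.0 <= out[r, c] <= full_scale):
            r0, r1 = max(r - 1, 0), min(r + 2, rows)
            c0, c1 = max(c - 1, 0), min(c + 2, cols)
            patch = np.delete(out[r0:r1, c0:c1].ravel(),
                              (r - r0) * (c1 - c0) + (c - c0))   # drop the centre block itself
            out[r, c] = np.clip(patch.mean(), 0.0, full_scale)
    return out
```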
Implementing the embodiments of the present invention has the following beneficial effects:
With the above image sensor and the light intensity sensing system and method based on it, multiple levels of light-transmission energy attenuation can be produced by varying the thickness of the optical film array disposed on the surface of the sensing pixel area, which in turn produces multiple levels of exposure delay corresponding to the sensing pixel blocks with optical film arrays of different thicknesses on their surfaces. As a result, only a small number of exposures are needed to obtain the imaging brightness levels corresponding to multiple exposure durations, and the optimal exposure duration can be determined quickly by selecting the appropriate imaging brightness. Compared with the traditional technology, detecting the light intensity takes less time and adapts better to scenes where the ambient light changes rapidly.
It should be noted that the "consistent" mentioned above means identical, or identical within a certain error range. Because of the semiconductor manufacturing process, the actual thickness of some optical film media will inevitably deviate from the theoretically designed thickness value; as long as the deviation is within a certain error range, it only affects the range of the transmittance of the optical film medium. When the overlap of the transmittance ranges corresponding to two photosensitive points is greater than a certain preset ratio, the transmittances of the two can be considered consistent, or the spatial distribution or thickness distribution of the optical film media corresponding to the photosensitive points can be considered consistent.
In one embodiment, as shown in Figure 12, Figure 12 shows a computer system based on the von Neumann architecture that runs the above light intensity sensing method. Specifically, it may include an external input interface 1001, a processor 1002, a memory 1003 and an output interface 1004 connected through a system bus. The external input interface 1001 may optionally include at least a network interface 10012 and a USB interface 10014; the memory 1003 may include an external memory 10032 (such as a hard disk, an optical disk, or a floppy disk) and an internal memory 10034; the output interface 1004 may include at least a display screen 10042 and other devices.
In this embodiment, the operation of this method is based on a computer program. The program file of the computer program is stored in the external memory 10032 of the aforementioned von Neumann-based computer system 10, is loaded into the internal memory 10034 at runtime, is then compiled into machine code and passed to the processor 1002 for execution, so that logical virtual modules are formed in the von Neumann-based computer system 10. During the execution of the above light intensity sensing method, the input parameters are all received through the external input interface 1001, transferred to the memory 1003 for buffering, and then input to the processor 1002 for processing; the processed result data is either buffered in the memory 1003 for subsequent processing or passed to the output interface 1004 for output.
The above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements for some of the technical features therein, and such modifications or replacements do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. An image sensor, comprising a photosensitive surface, characterized in that one or more sensing pixel areas are provided on the photosensitive surface, the sensing pixel area comprising at least two groups of sensing pixel blocks;
    an optical film array arranged on the surfaces of the sensing pixel blocks, wherein the optical film arrays on the surfaces of sensing pixel blocks of the same group have the same thickness, and the optical film arrays on the surfaces of sensing pixel blocks of different groups have different thicknesses;
    the thickness of the optical film array on the surface of a sensing pixel block corresponding to the exposure delay of the sensing pixel block.
  2. The image sensor according to claim 1, characterized in that the thickness distribution of the optical film array over the groups of sensing pixel blocks satisfies a logarithmic distribution whose base is the basic attenuation rate of the optical film array.
  3. The image sensor according to claim 1, characterized in that the groups of sensing pixel blocks in the same sensing pixel area are uniformly and sparsely arranged within the sensing pixel area; the thickness distribution of the sensing pixel blocks within the sensing pixel area is a non-monotonic or staggered arrangement.
  4. The image sensor according to claim 1, characterized in that the sensing pixel area is arranged at the edge and/or the center of the photosensitive surface.
  5. The image sensor according to claim 1, characterized in that the sensing pixel areas are uniformly and sparsely arranged over the entire photosensitive surface.
  6. The image sensor according to any one of claims 1 to 5, characterized in that the manner in which the optical film array is disposed on the surface of the sensing pixel area includes at least one of deposition, patterning, or etching.
  7. A light intensity sensing method based on the image sensor according to any one of claims 1 to 6, characterized by comprising:
    obtaining the imaging brightness of the image sensor in the sensing pixel area under a preset exposure duration;
    determining a target brightness and obtaining the target sensing pixel block corresponding to the target brightness;
    obtaining the target exposure delay corresponding to the target sensing pixel block, the target exposure delay corresponding to the thickness value of the optical film array on the surface of the target sensing pixel block;
    adjusting the exposure time according to the preset exposure duration and the target exposure delay.
  8. The light intensity sensing method according to claim 7, characterized in that there are two or more target sensing pixel blocks;
    the obtaining of the target exposure delay corresponding to the target sensing pixel blocks comprises:
    calculating, in combination with the relative positions of the target sensing pixel blocks on the photosensitive surface, a weighted value of the exposure delays of the two or more target sensing pixel blocks as the target exposure delay.
  9. The light intensity sensing method according to claim 7, characterized in that the method further comprises:
    receiving a photographing instruction and obtaining the value of each pixel block on the photosensitive surface of the image sensor, the pixel blocks including sensing pixel blocks;
    for the value of a sensing pixel block, obtaining a brightness correction value corresponding to the thickness value of the optical film array on the surface of the sensing pixel block, and correcting the value of the sensing pixel block according to the brightness correction value;
    for the value of a sensing pixel block that exceeds the brightness range, performing interpolation restoration on the sensing pixel block according to the values of neighboring pixel blocks.
  10. A light intensity sensing system, comprising the image sensor according to any one of claims 1 to 6 and a processor connected to the image sensor, characterized in that the processor is configured to: obtain the imaging brightness of the image sensor in the sensing pixel area under a preset exposure duration; determine a target brightness and obtain the target sensing pixel block corresponding to the target brightness; obtain the target exposure delay corresponding to the target sensing pixel block, the target exposure delay corresponding to the thickness value of the optical film array on the surface of the target sensing pixel block; and adjust the exposure time according to the preset exposure duration and the target exposure delay.
PCT/CN2019/087074 2019-05-15 2019-05-15 Image sensor, light intensity sensing system and method WO2020227980A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980005548.6A CN111345032B (zh) 2019-05-15 2019-05-15 Image sensor, light intensity sensing system and method
PCT/CN2019/087074 WO2020227980A1 (zh) 2019-05-15 2019-05-15 Image sensor, light intensity sensing system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/087074 WO2020227980A1 (zh) 2019-05-15 2019-05-15 Image sensor, light intensity sensing system and method

Publications (1)

Publication Number Publication Date
WO2020227980A1 true WO2020227980A1 (zh) 2020-11-19

Family

ID=71187722

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/087074 WO2020227980A1 (zh) 2019-05-15 2019-05-15 图像传感器、光强感知***及方法

Country Status (2)

Country Link
CN (1) CN111345032B (zh)
WO (1) WO2020227980A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101185165A (zh) * 2005-07-06 2008-05-21 松下电器产业株式会社 Method for manufacturing solid-state imaging device, solid-state imaging device, and camera
JP2013118295A (ja) * 2011-12-02 2013-06-13 Sharp Corp Method for manufacturing solid-state imaging element, solid-state imaging element, and electronic information device
CN106210532A (zh) * 2016-07-29 2016-12-07 宇龙计算机通信科技(深圳)有限公司 Photographing processing method and terminal device
CN108269811A (zh) * 2016-12-30 2018-07-10 豪威科技股份有限公司 High dynamic range color image sensor and associated method
CN109510949A (zh) * 2018-10-24 2019-03-22 浙江大学 Camera automatic exposure method based on effective brightness of image feature points

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100341315C (zh) * 2000-04-14 2007-10-03 业程科技股份有限公司 Electronic camera and exposure method thereof
CN101454901B (zh) * 2005-09-21 2011-04-27 Rjs科技公司 具有增益控制的宽动态范围感光元件或阵列***和方法
JP2007329721A (ja) * 2006-06-08 2007-12-20 Matsushita Electric Ind Co Ltd Solid-state imaging device
EP2511681B1 (en) * 2009-11-30 2024-05-22 IMEC vzw Integrated circuit for spectral imaging system
JP2011176542A (ja) * 2010-02-24 2011-09-08 Nikon Corp Camera and image composition program
JP2013005017A (ja) * 2011-06-13 2013-01-07 Sony Corp Imaging apparatus, imaging apparatus control method, and program
CN103037173B (zh) * 2011-09-28 2015-07-08 原相科技股份有限公司 Image system
DE102012217093A1 (de) * 2012-09-21 2014-04-17 Robert Bosch Gmbh Camera system, in particular for a vehicle, and method for determining image information of a detection area
KR102039464B1 (ko) * 2013-05-21 2019-11-01 삼성전자주식会사 Electronic sensor and control method thereof
US9147704B2 (en) * 2013-11-11 2015-09-29 Omnivision Technologies, Inc. Dual pixel-sized color image sensors and methods for manufacturing the same
KR102149187B1 (ko) * 2014-02-21 2020-08-28 삼성전자주식회사 Electronic device and control method thereof
KR102547655B1 (ko) * 2015-11-18 2023-06-23 삼성전자주식회사 Image sensor and electronic device including the same
DE102015225797B3 (de) * 2015-12-17 2017-05-04 Robert Bosch Gmbh Optical detector
KR101852258B1 (ko) * 2016-06-29 2018-04-26 한국과학기술원 Image generation method based on differences in per-pixel filters and apparatuses performing the same
JP6661506B2 (ja) * 2016-09-23 2020-03-11 サムスン エレクトロニクス カンパニー リミテッド Solid-state imaging device
CN107888904B (zh) * 2016-09-30 2021-08-10 三星电子株式会社 Method for processing image and electronic device supporting the same
CN107018339A (zh) * 2017-03-09 2017-08-04 广东欧珀移动通信有限公司 Image sensor, image processing method, image processing apparatus, and electronic apparatus

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101185165A (zh) * 2005-07-06 2008-05-21 松下电器产业株式会社 Method for manufacturing solid-state imaging device, solid-state imaging device, and camera
JP2013118295A (ja) * 2011-12-02 2013-06-13 Sharp Corp Method for manufacturing solid-state imaging element, solid-state imaging element, and electronic information device
CN106210532A (zh) * 2016-07-29 2016-12-07 宇龙计算机通信科技(深圳)有限公司 Photographing processing method and terminal device
CN108269811A (zh) * 2016-12-30 2018-07-10 豪威科技股份有限公司 High dynamic range color image sensor and associated method
CN109510949A (zh) * 2018-10-24 2019-03-22 浙江大学 Camera automatic exposure method based on effective brightness of image feature points

Also Published As

Publication number Publication date
CN111345032A (zh) 2020-06-26
CN111345032B (zh) 2021-12-31

Similar Documents

Publication Publication Date Title
US9224782B2 (en) Imaging systems with reference pixels for image flare mitigation
US8737755B2 (en) Method for creating high dynamic range image
US7151560B2 (en) Method and apparatus for producing calibration data for a digital camera
CN107786785B (zh) 光照处理方法及装置
AU2011320937B2 (en) Automatic white balance processing with flexible color space selection
TWI496463B (zh) 形成全彩色影像之方法
US10477106B2 (en) Control system, imaging device, and computer-readable medium
CN111028189A (zh) 图像处理方法、装置、存储介质及电子设备
CN107948538B (zh) 成像方法、装置、移动终端和存储介质
EP1583356A1 (en) Image processing device and image processing program
CN105611185B (zh) 图像生成方法、装置及终端设备
CN103905731B (zh) 一种宽动态图像采集方法及***
CN107613216B (zh) 对焦方法、装置、计算机可读存储介质和电子设备
CN110290325B (zh) 图像处理方法、装置、存储介质及电子设备
JP2004222231A (ja) 画像処理装置および画像処理プログラム
WO2020034739A1 (zh) 控制方法、装置、电子设备和计算机可读存储介质
CN111405185B (zh) 一种摄像机变倍控制方法、装置、电子设备及存储介质
US7400355B2 (en) Image pickup apparatus and photometer
CN110266965B (zh) 图像处理方法、装置、存储介质及电子设备
WO2023124611A1 (zh) Focus control method and apparatus, image sensor, electronic device, and computer-readable storage medium
WO2020227980A1 (zh) Image sensor, light intensity sensing system and method
WO2022073364A1 (zh) Image acquisition method and apparatus, terminal, and computer-readable storage medium
KR20060047034A (ko) 디지털 카메라에서 비네팅 현상 제거 장치 및 방법
JP2004222233A (ja) 画像処理装置および画像処理プログラム
JP4466016B2 (ja) 画像処理装置および画像処理プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19928975

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19928975

Country of ref document: EP

Kind code of ref document: A1