CN112788313A - Image sensor, imaging system and terminal - Google Patents

Image sensor, imaging system and terminal

Info

Publication number
CN112788313A
CN112788313A (application No. CN202011566288.0A; granted publication CN112788313B)
Authority
CN
China
Prior art keywords
pixel array
image sensor
pixel
pixels
spectral response
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011566288.0A
Other languages
Chinese (zh)
Other versions
CN112788313B (en)
Inventor
傅琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Realme Chongqing Mobile Communications Co Ltd
Original Assignee
Realme Chongqing Mobile Communications Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Realme Chongqing Mobile Communications Co Ltd
Priority to CN202011566288.0A
Publication of CN112788313A
Application granted
Publication of CN112788313B
Legal status: Active (granted)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/84 Camera processing pipelines; Components thereof for processing colour signals
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70 SSIS architectures; Circuits associated therewith
    • H04N25/71 Charge-coupled device [CCD] sensors; Charge-transfer registers specially adapted for CCD sensors
    • H04N25/75 Circuitry for providing, modifying or processing image signals from the pixel array

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image sensor, a camera module, and a terminal. The image sensor includes a first pixel array and a second pixel array whose spectral response does not intersect that of the first pixel array; the first pixel array includes at least a portion of the visible light spectral response, and the pixels of the second pixel array include quantum dots. By disposing the first pixel array and the second pixel array with non-intersecting spectral responses in the image sensor, a wider spectral response can be obtained than from the at least partial visible light spectral response of the first pixel array alone, so that more types of optical signals are acquired and the image sensor can realize richer functions.

Description

Image sensor, imaging system and terminal
Technical Field
The application relates to the technical field of consumer electronics, in particular to an image sensor, a camera module and a terminal.
Background
At present, a camera can operate in an always-on mode and output grayscale images in real time for functions such as face detection and gesture recognition. However, such a camera is generally a black-and-white camera that responds to only part of the visible spectrum, so the functions it can realize are limited.
Disclosure of Invention
The embodiments of the application provide an image sensor, a camera module, and a terminal.
The image sensor of the embodiments of the application includes a first pixel array and a second pixel array. The spectral response of the second pixel array does not intersect the spectral response of the first pixel array, the first pixel array includes at least a portion of the visible light spectral response, and the pixels of the second pixel array include quantum dots.
The camera module of the embodiment of the application comprises a lens and an image sensor. The image sensor is arranged on the image side of the lens. The image sensor includes a first pixel array and a second pixel array, the spectral response of the second pixel array being non-intersecting with the spectral response of the first pixel array, the first pixel array including at least a portion of a visible light spectral response, the pixels of the second pixel array including quantum dots.
The terminal of the embodiments of the application includes a housing and a camera module. The camera module is disposed on the housing. The camera module includes a lens and an image sensor. The image sensor is disposed on the image side of the lens. The image sensor includes a first pixel array and a second pixel array, the spectral response of the second pixel array being non-intersecting with the spectral response of the first pixel array, the first pixel array including at least a portion of a visible light spectral response, the pixels of the second pixel array including quantum dots.
In the image sensor, the camera module, and the terminal of the embodiments of the application, the first pixel array and the second pixel array with non-intersecting spectral responses are disposed in the image sensor, so that a wider spectral response can be obtained than from the at least partial visible light spectral response of the first pixel array alone; more types of optical signals are thus acquired, and the image sensor can realize richer functions.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the present application.
Drawings
The above and/or additional aspects and advantages of embodiments of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
Fig. 1 is a schematic plan view of a terminal according to some embodiments of the present application.
Fig. 2 is a schematic cross-sectional view of a camera module according to some embodiments of the present application.
Fig. 3 is a schematic cross-sectional view of an image sensor according to some embodiments of the present application.
Fig. 4 is a schematic layout of a pixel array according to some embodiments of the present application.
Fig. 5 is a schematic layout of a pixel array according to some embodiments of the present application.
Fig. 6 is a schematic layout of a pixel array according to some embodiments of the present application.
Fig. 7 is a schematic layout of a pixel array according to some embodiments of the present application.
Detailed Description
Embodiments of the present application will be further described below with reference to the accompanying drawings. The same or similar reference numbers in the drawings identify the same or similar elements or elements having the same or similar functionality throughout.
In addition, the embodiments of the present application described below in conjunction with the accompanying drawings are exemplary and are only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the present application.
In this application, unless expressly stated or limited otherwise, a first feature "on" or "under" a second feature may mean that the first and second features are in direct contact, or in indirect contact through an intervening medium. Also, a first feature "on," "over," or "above" a second feature may mean that the first feature is directly or obliquely above the second feature, or may simply indicate that the first feature is at a higher level than the second feature. A first feature "under," "below," or "beneath" a second feature may mean that the first feature is directly or obliquely below the second feature, or may simply indicate that the first feature is at a lower level than the second feature.
Referring to fig. 3 and fig. 4, the image sensor 10 of the embodiments of the present application includes a first pixel array 141 and a second pixel array 142; the spectral response of the second pixel array 142 does not intersect that of the first pixel array 141, the first pixel array 141 includes at least a portion of the visible light spectral response, and the pixels 143 of the second pixel array 142 include quantum dots 1421.
In the image sensor 10 of the embodiments of the present application, the first pixel array 141 and the second pixel array 142 with non-intersecting spectral responses are disposed in the image sensor 10, so that a wider spectral response can be obtained than from the at least partial visible light spectral response of the first pixel array 141 alone; more types of optical signals are thus acquired, and the image sensor 10 can realize richer functions.
Referring to fig. 1, a terminal 1000 according to an embodiment of the present disclosure includes a housing 200 and a camera module 100. The camera module 100 is mounted on the housing 200.
More specifically, the terminal 1000 can be a cell phone, a tablet, a display, a laptop, a teller machine, a gate, a smart watch, a head-up display device, a game console, and the like. In the embodiments of the present application, the terminal 1000 is described as a mobile phone by way of example; it is understood that the specific form of the terminal 1000 is not limited to a mobile phone.
The housing 200 may be used to mount the camera module 100; that is, the housing 200 serves as a mounting carrier for the camera module 100. The terminal 1000 includes a front face and a back face: the camera module 100 may be disposed on the front face as a front camera, or on the back face as a rear camera. In the embodiments of the present application, the camera module 100 is disposed on the front face as a front camera. The camera module 100 may be an always-on camera (AON camera), which can detect moving objects in real time and output images, a moving object being an object that moves relative to the always-on camera. The housing 200 can also be used to mount functional modules of the terminal 1000, such as a power supply device and a communication device, so that the housing 200 protects them against dust, drops, water, and the like.
Referring to fig. 2, the camera module 100 includes an image sensor 10 and a lens 20. The image sensor 10 is disposed on the image side of the lens 20. The lens 20 includes a lens group 21 and a lens barrel 22, the lens group 21 is disposed in the lens barrel 22, light enters from a light inlet 23 formed in the lens barrel 22, passes through the lens group 21 and then reaches the image sensor 10, and the image sensor 10 converts an optical signal into an electrical signal to output image data.
Referring to fig. 3, the image sensor 10 includes a substrate 11, a readout circuit 12, a protective layer 13, a pixel array 14, a filter 15, and a microlens array 16, which are stacked in this order; incident light reaches the microlens array 16 first and the pixel array 14 last.
The substrate 11 is used for carrying the readout circuit 12; the substrate 11 includes a carrying surface 111, and the readout circuit 12 is disposed on the carrying surface 111, for example by etching or a similar process. The substrate 11 may be a silicon substrate, which has the advantages of high thermal conductivity, low cost, and a mature manufacturing process.
A protective layer 13 is provided on the readout circuit 12 and covers it, preventing the readout circuit 12 from being exposed to air and corroded, which would shorten its lifetime. The protective layer 13 may be made of an already-oxidized material such as silicon dioxide (SiO2), which resists further corrosion and thereby extends the lifetime of the readout circuit 12.
The pixel array 14 is disposed on the opposite side of the protective layer 13 from the substrate 11 and is connected to the readout circuit 12, and the readout circuit 12 can read the electrical signals accumulated by the pixels 143 of the pixel array 14 to determine the pixel value corresponding to each pixel 143.
The pixel array 14 includes a first pixel array 141 and a second pixel array 142, which together may constitute a rectangular pixel array 14. The spectral responses of the first pixel array 141 and the second pixel array 142 do not intersect; that is, the two spectral responses have no overlapping portion, so the wavelength bands of the light acquired by the two arrays do not overlap.
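By way of illustration only (not part of the original disclosure), the non-intersection condition can be modeled as disjoint wavelength intervals. The following minimal Python sketch assumes the band limits quoted later in this description; the helper name and values are illustrative:

```python
# Minimal sketch: treat each pixel array's spectral response as a closed
# wavelength interval in nanometers and check that the intervals are disjoint.
# Band values are assumptions drawn from the examples in this description.

def bands_intersect(band_a, band_b):
    """Return True if two (min_nm, max_nm) bands overlap."""
    return band_a[0] <= band_b[1] and band_b[0] <= band_a[1]

first_band = (400, 700)     # visible light for the first pixel array (assumed)
second_band = (1400, 3000)  # PbS quantum-dot band quoted later in the text

assert not bands_intersect(first_band, second_band)  # responses do not intersect
```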
Referring to fig. 4 to 7, in one embodiment, the second pixel array 142 is disposed around the first pixel array 141. The first pixel array 141 is arranged in a matrix, and the second pixel array 142 surrounds the rectangular first pixel array 141 to form a rectangular ring.
The first pixel array 141 includes at least a portion of the visible light spectral response. For example, the first pixel array 141 may include the spectral response of red, green, blue, or white light (white being the entire visible light spectral response); that is, it includes red pixels R, green pixels G, blue pixels B, or white pixels W, where the red, green, and blue pixels are color pixels and the white pixels are panchromatic pixels. The first pixel array 141 may equally include any two, any three, or all four of these spectral responses, i.e., any two, any three, or all four of the red pixels R, green pixels G, blue pixels B, and white pixels W.
Referring to fig. 4, in the present embodiment, the first pixel array 141 includes a spectral response of red light, a spectral response of green light, and a spectral response of blue light, i.e., the first pixel array 141 includes a red pixel R, a green pixel G, and a blue pixel B. The color pixels of the first pixel array 141 are arranged in a bayer array. It is to be understood that fig. 4 to 7 are only schematic diagrams, and only a part of the pixels 143 of the first pixel array 141 and the second pixel array 142 is shown.
Referring to fig. 5, in another embodiment, the first pixel array 141 includes red pixels R, green pixels G, blue pixels B, and white pixels W. The first pixel array 141 includes a plurality of minimal repeating units 1411, each including red, green, blue, and white pixels. Each minimal repeating unit 1411 includes a plurality of sub-units 1412, and each sub-unit 1412 includes color pixels and panchromatic pixels; the ratio of color pixels to panchromatic pixels can be 1:1, 1:3, 3:1, and the like, set according to the requirements for imaging in dim light. In the present embodiment the ratio is 1:1, i.e., each sub-unit 1412 includes two color pixels and two panchromatic pixels, located on the two diagonals of the sub-unit 1412.
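A rough sketch of one such layout follows (illustration only; the sub-unit geometry, with W pixels on one diagonal, is an assumption consistent with the 1:1 ratio described above, and fig. 5 may differ in detail):

```python
import numpy as np

# Sketch of a possible minimal repeating unit: four 2x2 sub-units, each with
# one color type on a diagonal and panchromatic W pixels on the other (1:1).
# The sub-units are placed in a Bayer-like pattern; this is an assumption.

def subunit(color):
    return np.array([[color, "W"],
                     ["W", color]])

minimal_unit = np.block([[subunit("R"), subunit("G")],
                         [subunit("G"), subunit("B")]])

pixel_array = np.tile(minimal_unit, (4, 4))  # tile the unit over the sensor
print(minimal_unit)
```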
Referring to fig. 6, in another embodiment, the first pixel array 141 and the second pixel array 142 can be respectively located at odd rows and even rows of the rectangular pixel array 14; alternatively, the first and second pixel arrays 141 and 142 may be located at even and odd rows of the rectangular pixel array 14, respectively. That is, the first pixel array 141 and the second pixel array 142 are arranged to be interlaced.
In other embodiments, the first pixel array 141 and the second pixel array 142 may be located at odd columns and even columns of the rectangular pixel array 14, respectively; alternatively, the first and second pixel arrays 141 and 142 may be located at even and odd columns of the rectangular pixel array 14, respectively. That is, the first pixel array 141 and the second pixel array 142 are arranged to cross at intervals.
Of course, the arrangement of the first pixel array 141 and the second pixel array 142 is not limited to the above two arrangements and can be designed according to actual requirements. Referring to fig. 7, for example, the first pixel array 141 includes a red pixel R, a green pixel G, a blue pixel B, and a white pixel W, and no two pixels 143 of the first pixel array 141, or of the second pixel array 142, are adjacently arranged; that is, no two pixels 143 of the same array are adjacent in the same row or the same column. In this layout, each pixel 143 of the second pixel array 142 is surrounded by pixels 143 of the first pixel array 141, and each pixel 143 of the first pixel array 141 is surrounded by pixels 143 of the second pixel array 142, so the two arrays are more closely interleaved.
In this way, whether the first pixel array 141 and the second pixel array 142 are interleaved by rows, by columns, or pixel by pixel, their fields of view are substantially the same, so the two arrays can simultaneously acquire different light information of the same scene and thus implement different detection functions for the same scene, as the sketch below illustrates.
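The following Python sketch (geometry assumed, not taken from the figures) marks which positions of a small rectangular array belong to the second pixel array under the row-interleaved and checkerboard arrangements described above:

```python
import numpy as np

rows, cols = 6, 8
r, c = np.indices((rows, cols))

# Row-interleaved arrangement: first array on even-indexed rows,
# second array on odd-indexed rows (or the reverse).
second_by_row = (r % 2 == 1)

# "No two pixels of the same array adjacent" arrangement: a checkerboard,
# so every pixel of one array is surrounded by pixels of the other.
second_checker = ((r + c) % 2 == 1)

# Either mask splits the positions evenly, so both arrays observe
# essentially the same field of view at the same time.
assert second_by_row.sum() == rows * cols // 2
assert second_checker.sum() == rows * cols // 2
```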
The pixels 143 of the second pixel array 142 include a plurality of quantum dots 1421 (shown in fig. 3). The quantum dots 1421 may be lead sulfide (PbS) quantum dots, which can acquire light in the [1400 nm, 3000 nm] band to generate corresponding electrical signals. By acquiring the optical signal in this band reflected by a target object, functions such as flaw detection and material detection of the target object can be realized.
The quantum dots 1421 can also be graphene quantum dots, which can acquire light in the [185 nm, 400 nm] band to generate corresponding electrical signals. By acquiring the optical signal in this band reflected by a target person, a skin detection function can be realized.
Of course, the quantum dots 1421 may also be made of other materials, chosen according to the functions to be implemented, further extending the functions that the second pixel array 142 can realize.
The plurality of quantum dots 1421 in each pixel 143 of the second pixel array 142 acquire the optical signal together, and the electrical signals they generate are accumulated to calculate the pixel value of that pixel 143. For example, each pixel 143 may include 5 × 5, 10 × 10, 30 × 30, or 100 × 100 quantum dots 1421, so that each pixel 143 is covered by a large number of quantum dots 1421; this prevents too much light from falling into the gaps between quantum dots 1421 and going uncaptured, improving light utilization.
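A toy model of this accumulation (invented signal values; the patent does not specify whether the accumulation is a plain sum) might look as follows:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def pixel_value_from_dots(dots_per_side=10):
    """Model a second-array pixel covered by an n x n grid of quantum dots.

    Each dot contributes an electrical signal (random stand-ins here); the
    pixel value is their accumulated sum, one plausible reading of the text.
    """
    dot_signals = rng.random((dots_per_side, dots_per_side))
    return dot_signals.sum()

value = pixel_value_from_dots(10)  # a 10 x 10 dot pixel, as in the examples
```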
Since the quantum dots 1421 are of small height, the protective layer 13 under the second pixel array 142 may be made higher than the protective layer 13 under the first pixel array 141, so that the first and second pixel arrays 141 and 142 are flush, which facilitates the subsequent stacking of the filter 15 and the microlens array 16.
Referring to fig. 1 to fig. 3, the terminal 1000 further includes a first processor 301 and a second processor 302. The first processor 301 is connected to the readout circuit 12, and the second processor 302 is connected to the first processor 301.
The first processor 301 is configured to acquire first image data output by the image sensor 10 in a detection mode, and recognize feature information in the first image data to output a recognition result.
Specifically, the pixels 143 of the pixel array 14 can detect their own brightness changes in real time. When a moving object exists in the field of view, the amount of light incident on the pixels 143 corresponding to the moving object changes with the object's color or surface relief, so the brightness of those pixels 143 changes. When the brightness of any pixel 143 of the pixel array 14 changes, that pixel 143 outputs a trigger signal, and the first processor 301 can determine from the trigger signal that a moving object exists in the current scene and control the image sensor 10 to enter the detection mode. A brightness change here means that the brightness difference of a pixel 143, whether an increase or a decrease, reaches a preset brightness threshold. The threshold can be set according to the required timeliness and power consumption of detection: when moving objects must be detected more promptly, the preset brightness threshold can be set lower so that the image sensor 10 detects even slight movement; when the power consumption of the terminal 1000 must be reduced, the preset brightness threshold can be set higher, or a moving object can be deemed present only when the brightness of multiple pixels 143 changes (i.e., multiple trigger signals are obtained), preventing the image sensor 10 from being triggered into the detection mode too frequently.
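A minimal sketch of this trigger logic follows; the threshold value, frame format, and function names are invented for illustration:

```python
import numpy as np

PRESET_BRIGHTNESS_THRESHOLD = 12  # invented value; lower = more sensitive

def should_enter_detection_mode(prev_frame, cur_frame,
                                threshold=PRESET_BRIGHTNESS_THRESHOLD,
                                min_triggers=1):
    """Return True if enough pixels changed brightness beyond the threshold.

    min_triggers > 1 models the power-saving option of requiring several
    trigger signals before the detection mode is entered.
    """
    diff = np.abs(cur_frame.astype(np.int32) - prev_frame.astype(np.int32))
    return int((diff >= threshold).sum()) >= min_triggers
```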
Referring to fig. 4, when the image sensor 10 operates in the detection mode, detection of a target object can be realized. When only face detection, gesture recognition, and the like are required, the readout circuit 12 can read out only the pixel values of the pixels 143 of the first pixel array 141 to output the first image data. Because face detection and gesture recognition only need a grayscale image, the pixel values of the color pixels and/or the panchromatic pixels in the first pixel array 141 are all converted into luminance values to generate a grayscale image as the first image data. Specifically, the color pixels may be interpolated into panchromatic pixels by a preset interpolation algorithm, and the luminance of each panchromatic pixel may then be obtained to generate the grayscale image. The first processor 301 then identifies the feature information in the grayscale image and compares it with preset feature information (such as a preset face template or gesture template) to recognize a face, a gesture, and the like in the grayscale image and output a recognition result.
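As a hedged sketch of the grayscale conversion (the patent leaves the interpolation algorithm and luminance weights open; the common BT.601 weights are assumed here):

```python
import numpy as np

def first_image_data(rgb):
    """Convert demosaiced first-array data (H x W x 3) to a grayscale image.

    The "preset interpolation algorithm" is not specified in the patent; the
    color pixels are assumed here to be already interpolated to full RGB, and
    the BT.601 luminance weights stand in for the panchromatic value.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return np.clip(rgb @ weights, 0, 255).astype(np.uint8)
```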
When only skin detection, or flaw detection (such as fruit flaw detection) and material detection of the target object, needs to be achieved, the readout circuit 12 may read out only the pixel values of the pixels 143 of the second pixel array 142 to output the first image data. Similarly, the first image data may be a grayscale image, and the first processor 301 identifies the skin condition, fruit damage, or material of the target object by comparing the feature information in the grayscale image with preset feature information (such as a preset skin template, flaw template, or material template) and outputs the recognition result.
When face detection and skin detection of a target object (i.e., a face) must be implemented simultaneously, or gesture detection and skin detection of a target object (i.e., a hand) must be implemented simultaneously, the readout circuit 12 may read the pixel values of the pixels 143 of the entire pixel array 14 to output the first image data. Face detection and skin detection of the target object are then achieved from first image data containing the pixel values of the pixels 143 of both the first pixel array 141 and the second pixel array 142.
Referring to fig. 4 to fig. 7, in order to further reduce the power consumption required for detection and improve detection efficiency, the readout circuit 12 may output the first image data in an interlaced readout manner. For example, the readout circuit 12 reads out the pixel values of the pixels 143 of the odd or even rows of the first pixel array 141; or of the odd or even rows of the second pixel array 142; or of the odd or even rows of the whole pixel array 14. Only the pixel values of some of the pixels 143 of the first pixel array 141 and/or the second pixel array 142 are thus read out, reducing the data to be processed in subsequent feature identification, improving detection efficiency, and lowering power consumption.
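Interlaced readout can be modeled as a simple row slice (frame layout assumed):

```python
def read_interlaced(frame, odd_rows=True):
    """Read out only the odd- or even-indexed rows of a 2-D frame,
    halving the data handled by subsequent feature identification."""
    return frame[1::2] if odd_rows else frame[0::2]
```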
Of course, the readout circuit 12 may also output the first image data in a merged readout manner. For example, the readout circuit 12 collectively reads out a plurality of pixels 143 having the same spectral response in the first pixel array 141 as one large pixel, whose pixel value may be the sum, average, or weighted average of the pixel values of those pixels 143; in fig. 6, for instance, neighboring red pixels R may be combined into one red large pixel, neighboring green pixels G into one green large pixel, neighboring white pixels W into one white large pixel, and so on.
For another example, the readout circuit 12 reads out a plurality of adjacent pixels 143 in the second pixel array 142 (such as the pixels 143 in the first and second columns, or the third and fourth columns, of the second row in fig. 6) as one large pixel, whose pixel value may be the sum, average, or weighted average of the pixel values of those pixels 143. Outputting the first image data in a merged readout manner likewise reduces the data to be processed in subsequent feature recognition, improves detection efficiency, and lowers power consumption.
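A sketch of 2 × 2 merged readout using the average variant (sum and weighted average are equally allowed by the text):

```python
import numpy as np

def bin_2x2(frame):
    """Merge each 2 x 2 neighborhood of a 2-D frame into one large pixel
    by averaging.

    Grouping only same-spectral-response pixels is assumed to be handled
    by the caller, as the text requires for the first pixel array.
    """
    h, w = frame.shape
    trimmed = frame[:h - h % 2, :w - w % 2]
    return trimmed.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```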
Referring to fig. 5, when the first pixel array 141 includes color pixels and panchromatic pixels, the readout circuit 12 can read out only the pixel values of the panchromatic pixels (e.g., the white pixels W in fig. 5) in order to further reduce the data volume and avoid the computation of interpolating color pixels into panchromatic pixels. The grayscale image serving as the first image data can then be calculated quickly from the read-out panchromatic pixels without interpolation, and since only the panchromatic pixels of the first pixel array 141 are read out, the data to be processed in subsequent feature recognition is reduced, improving detection efficiency and lowering power consumption.
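A corresponding sketch (the boolean mask of W positions is assumed known from the sensor's color-filter layout):

```python
def read_panchromatic(frame, w_mask):
    """Read out only the panchromatic (W) pixel values.

    w_mask marks the W positions; it is fixed by the layout of fig. 5 and
    assumed given here. No color interpolation is needed afterwards.
    """
    return frame[w_mask]  # 1-D vector of W pixel values
```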
The second processor 302 can obtain the recognition result of the first processor 301 to control the terminal 1000 to perform corresponding operations according to the recognition result.
For example, when the recognition result output by the first processor 301 is that a human face is recognized, the second processor 302 may determine that the user wants to view information on the display screen; the terminal 1000 can then light up the display screen and show preset information (such as clock information and weather information). In an always-on display scenario, for instance, detecting a human face lights up the screen to show an off-screen display component, which may include a clock component, a weather component, and the like.
For another example, when the recognition result output by the first processor 301 is that the recognized gesture is a predetermined gesture, the second processor 302 may perform the corresponding operation: when the user makes a fist gesture, the second processor 302 performs a screen-capture operation on the current display interface; when the user makes a five-finger spread gesture, the second processor 302 controls the flashlight to turn on.
For another example, when the first processor 301 outputs the identification result that the fruit is damaged, the second processor 302 displays the identification result on the display screen and highlights the damaged part of the fruit; when the identification result output by the first processor 301 is that the material of the target object is metal, the second processor 302 displays the identification result on the display screen.
In this way, the second processor 302 only needs to control the terminal 1000 to execute corresponding operations according to the recognition result, and does not need to process images, so that the calculation amount is small and the power consumption is relatively saved.
When the image sensor 10 receives a shooting instruction, the image sensor 10 enters a shooting mode, the first processor 301 is further configured to output second image data in the shooting mode, and the second processor 302 displays a corresponding preview image according to the second image data.
When it is necessary to acquire a visible light image using the image sensor 10, the image sensor 10 enters a shooting mode. For example, a user clicks a camera icon to issue a photographing instruction, the second processor 302 sends the photographing instruction to the first processor 301, the first processor 301 controls the image sensor 10 to enter a photographing mode according to the photographing instruction, and the second processor 302 displays a corresponding preview image according to the second image data output by the first processor 301. For another example, the predetermined recognition result may trigger the second processor 302 to issue a shooting instruction, and if the recognition result is that the fruit is damaged and the damaged portion of the fruit needs to be highlighted, the second processor 302 issues the shooting instruction to enable the image sensor 10 to be in the shooting mode, so as to display the corresponding preview image according to the second image data output by the first processor 301, thereby highlighting the damaged portion of the fruit in combination with the recognition result. For another example, when the user uses a gesture of opening the camera, the second processor 302 issues a shooting instruction to enable the image sensor 10 to be in a shooting mode, so that the corresponding preview image is displayed according to the second image data output by the first processor 301.
When acquiring the second image data, the readout circuit 12 reads out the pixel values of the pixels 143 of the first pixel array 141 having the visible light spectral response to output the second image data, and at this time, the first processor 301 transmits the second image data to the second processor 302 without processing the second image data, so that the second processor 302 controls the display screen to display the corresponding preview image according to the second image data.
In other embodiments, the image sensor 10 may be in the detection mode and the photographing mode at the same time, i.e., the image sensor 10 outputs the first image data and the second image data at the same time. So that the second processor 302 synchronously displays the corresponding preview image according to the second image data while the first processor 301 recognizes the feature information according to the first image data to output the recognition result.
Referring to fig. 3 again, the filter 15 includes a plurality of sub-filters 151, and the sub-filters 151 correspond one-to-one to the pixels 143 of the pixel array 14; that is, each pixel 143 receives the light passing through its corresponding sub-filter 151.
Taking the pixel array 14 shown in fig. 5 as an example, the first pixel array 141 includes a red pixel R, a green pixel G, a blue pixel B, and a white pixel W, and the pixels of the second pixel array 142 include graphene quantum dots. The sub-filter 151 corresponding to the red pixel R transmits red light, the one corresponding to the green pixel G transmits green light, the one corresponding to the blue pixel B transmits blue light, and the one corresponding to the white pixel W transmits white light; the sub-filters 151 corresponding to the second pixel array 142 transmit light of [185 nm, 400 nm].
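The pass-bands can be summarized as follows (visible-band limits are typical values assumed for illustration; the quantum-dot band is the one quoted above):

```python
# Pass-bands per pixel type, in nanometers. The R/G/B/W limits are common
# textbook values assumed for illustration; the patent does not fix them.
SUB_FILTER_PASSBAND_NM = {
    "R": (620, 750),   # red (assumed typical range)
    "G": (495, 570),   # green (assumed typical range)
    "B": (450, 495),   # blue (assumed typical range)
    "W": (380, 750),   # white: the full visible band (assumed)
    "QD": (185, 400),  # graphene quantum-dot pixels, per this description
}
```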
The microlens array 16 is located on the side of the filter 15 opposite the pixel array 14. The microlens array 16 includes a plurality of microlenses 161. Each microlens 161 may be a convex lens that condenses the light directed from the lens 20 onto it, so that more light reaches the corresponding sub-filter 151. Each microlens 161 corresponds to one pixel 143 of the pixel array 14; that is, the microlenses 161, the sub-filters 151, and the pixels 143 correspond one to one.
When each microlens 161 corresponds to one pixel 143, a pixel 143 near the center of the pixel array 14 (for example, one of the 4 pixels 143 closest to the center) is aligned with its corresponding microlens 161, while a pixel 143 away from the center is offset from its corresponding microlens 161. Specifically, the farther a pixel 143 is from the center, the larger the offset between the pixel 143 and its microlens 161, where the offset is the distance between the center of the orthographic projection of the microlens 161 on the pixel array 14 and the center of the corresponding pixel 143.
If the microlenses 161 and the pixels 143 were all perfectly aligned without offset, part of the light converged by the microlenses 161 at the edge of the pixel array 14 could not be received by the corresponding pixels 143, wasting light. The image sensor 10 of the embodiments of the application therefore sets a reasonable offset between each non-central microlens 161 and its corresponding pixel 143, so that the light condensed by a microlens 161 is received by its pixel 143, improving the utilization of light, as sketched below.
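The shift rule can be sketched as a linear model (the coefficient k is an invented stand-in for the "reasonable offset" design):

```python
import numpy as np

def microlens_offset(pixel_xy, center_xy, k=0.02):
    """Return the (dx, dy) shift of a microlens relative to its pixel.

    The offset grows linearly with the pixel's distance from the array
    center and points toward it, so edge pixels still receive the light
    their microlens condenses. k is an assumed design constant.
    """
    v = np.asarray(center_xy, dtype=float) - np.asarray(pixel_xy, dtype=float)
    return k * v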
Referring to fig. 1 to fig. 4, in the image sensor 10, the camera module 100, and the terminal 1000 of the embodiments of the application, the first pixel array 141 and the second pixel array 142 with non-intersecting spectral responses are disposed in the image sensor 10, so that a wider spectral response is obtained than from the at least partial visible light spectral response of the first pixel array 141 alone; more types of optical signals are acquired, and the image sensor 10 realizes richer functions. Because the first pixel array 141 includes at least a partial visible light spectral response, preview images can be acquired through the first pixel array 141, realizing the function of an ordinary visible light camera, while grayscale images acquired through the first pixel array 141 and/or the second pixel array 142 realize the detection functions of an always-on camera (such as face detection, gesture detection, skin detection, fruit flaw detection, and material detection). The terminal 1000 thus realizes both an always-on camera and a visible light camera with a single image sensor 10. In addition, the terminal 1000 acquires and identifies the image data of the image sensor 10 through the separately provided first processor 301; compared with having the second processor 302 (generally the application processor of the terminal 1000) run a complicated image processing flow, this is more efficient and consumes less power, which facilitates keeping the image sensor 10 always on.
In the description herein, reference to the terms "certain embodiments," "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiments or examples is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one feature. In the description of the present application, "a plurality" means at least two, e.g., two, three, unless specifically limited otherwise.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations of the above embodiments may be made by those of ordinary skill in the art within the scope of the present application, which is defined by the claims and their equivalents.

Claims (16)

1. An image sensor comprising a first pixel array and a second pixel array, the spectral response of the second pixel array being non-intersecting with the spectral response of the first pixel array, the first pixel array comprising at least a portion of the visible spectral response, the pixels of the second pixel array comprising quantum dots.
2. The image sensor of claim 1, wherein the quantum dots comprise lead sulfide quantum dots, and the spectral response of the second pixel array is [1400 nanometers, 3000 nanometers].
3. The image sensor of claim 1, wherein the quantum dots comprise graphene quantum dots, and the spectral response of the second pixel array is [185 nanometers, 400 nanometers].
4. The image sensor of claim 1, wherein the second pixel array surrounds the first pixel array.
5. The image sensor of claim 1, wherein the first pixel array and the second pixel array form a pixel array, and wherein the first pixel array and the second pixel array are respectively located in odd rows and even rows of the pixel array.
6. The image sensor of claim 1, further comprising a substrate, readout circuitry disposed on the substrate, and a protective layer disposed on the readout circuitry, wherein the first and second pixel arrays are disposed on the protective layer, wherein a height of the protective layer corresponding to the second pixel array is higher than a height of the protective layer corresponding to the first pixel array such that the first and second pixel arrays are flush.
7. The image sensor of claim 6, wherein the first pixel array and the second pixel array form a pixel array, and the readout circuit reads out pixel values of pixels of odd or even rows of the first pixel array when the image sensor is operating in a detection mode; or
The readout circuit reads out pixel values of pixels of odd or even rows of the second pixel array; or
The readout circuit reads out pixel values of pixels of odd or even rows of the pixel array.
8. The image sensor of claim 6, wherein the first pixel array and the second pixel array form a pixel array, and wherein the readout circuit collectively reads out a plurality of pixels of the pixel array as one large pixel when the image sensor is operating in a detection mode.
9. The image sensor of claim 6 wherein the first pixel array comprises panchromatic pixels and color pixels having a narrower spectral response than the panchromatic pixels, the readout circuit reading out pixel values of the panchromatic pixels when the image sensor is operating in a detection mode.
10. The image sensor of any of claims 7-9, wherein the image sensor enters the detection mode when the image sensor detects a moving object.
11. The image sensor of claim 10, wherein the image sensor is determined to detect a moving object when a brightness difference value of any one pixel of the pixel array reaches a predetermined threshold.
12. The image sensor of claim 6, wherein the readout circuit reads out pixel values of pixels of the first pixel array when the image sensor is operating in a capture mode.
13. A camera module, comprising a lens and the image sensor of any one of claims 1-12, wherein the image sensor is disposed on an image side of the lens.
14. A terminal, characterized in that the terminal comprises a housing and the camera module of claim 13, the camera module being arranged on the housing.
15. The terminal of claim 14, wherein the terminal further comprises:
a first processor for acquiring first image data output by the image sensor in a detection mode and recognizing feature information in the first image data to output a recognition result; and
and the second processor is used for controlling the terminal to execute corresponding operation according to the identification result.
16. The terminal of claim 14, wherein the first processor is further configured to output second image data in a shooting mode, and wherein the second processor displays a corresponding preview image according to the second image data.
CN202011566288.0A 2020-12-25 2020-12-25 Image sensor, imaging system and terminal Active CN112788313B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011566288.0A CN112788313B (en) 2020-12-25 2020-12-25 Image sensor, imaging system and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011566288.0A CN112788313B (en) 2020-12-25 2020-12-25 Image sensor, imaging system and terminal

Publications (2)

Publication Number Publication Date
CN112788313A 2021-05-11
CN112788313B 2023-04-07

Family

ID=75752555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011566288.0A Active CN112788313B (en) 2020-12-25 2020-12-25 Image sensor, imaging system and terminal

Country Status (1)

Country Link
CN (1) CN112788313B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113992862A (en) * 2021-11-30 2022-01-28 维沃移动通信有限公司 Image sensor, camera module, electronic equipment and pixel information acquisition method
CN114125243A (en) * 2021-11-30 2022-03-01 维沃移动通信有限公司 Image sensor, camera module, electronic equipment and pixel information acquisition method
CN114143430A (en) * 2021-11-30 2022-03-04 维沃移动通信有限公司 Image sensor, camera module, electronic equipment and image acquisition method
WO2022134744A1 (en) * 2020-12-25 2022-06-30 Oppo广东移动通信有限公司 Electronic device control method, electronic device, and computer-readable storage medium

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060055800A1 (en) * 2002-12-18 2006-03-16 Noble Device Technologies Corp. Adaptive solid state image sensor
US20090195681A1 (en) * 2008-02-01 2009-08-06 Compton John T Sampling and Readout of an Image Sensor Having a Sparse Color Filter Array Pattern
JP2011243862A (en) * 2010-05-20 2011-12-01 Sony Corp Imaging device and imaging apparatus
US20160027838A1 (en) * 2014-07-25 2016-01-28 Invisage Technologies, Inc. Color image sensor without the color filters
CN105704463A (en) * 2014-12-09 2016-06-22 意法半导体(R&D)有限公司 Image sensor using pixels with combined RGB and IR sensing
US20160182846A1 (en) * 2014-12-22 2016-06-23 Google Inc. Monolithically integrated rgb pixel array and z pixel array
CN106252368A (en) * 2015-06-10 2016-12-21 光子科学研究所 Imageing sensor and manufacture method thereof and include the electro-optical system of imageing sensor
CN106454165A (en) * 2015-08-03 2017-02-22 光子科学研究所 Image sensor with non-local readout circuit and optoelectronic device comprising said image sensor
CN107018294A (en) * 2015-09-30 2017-08-04 三星电子株式会社 Electronic installation
WO2017101559A1 (en) * 2015-12-18 2017-06-22 广东欧珀移动通信有限公司 Image sensor, terminal comprising same, and imaging method
WO2017193738A1 (en) * 2016-05-09 2017-11-16 比亚迪股份有限公司 Image sensor, imaging method, and imaging device
US20200328239A1 (en) * 2019-04-11 2020-10-15 Luxvisions Innovation Limited Image sensing device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
甘健飞 (Gan Jianfei): "Infrared-visible image registration based on moving object detection", China Master's Theses Full-text Database (Information Science and Technology) *


Also Published As

Publication number Publication date
CN112788313B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN112788313B (en) Image sensor, imaging system and terminal
US10425624B2 (en) Solid-state image capturing device and electronic device
US10910423B2 (en) Solid-state imaging device, manufacturing method thereof, and electronic device
CN107924923B (en) Solid-state image pickup device and electronic apparatus
CN212785522U (en) Image sensor and electronic device
CN106686285B (en) Catoptric imaging equipment and mobile device with catoptric imaging equipment
US8780259B2 (en) Image capturing apparatus and in-focus position detection method thereof
US8823844B2 (en) Imaging device and imaging apparatus
US7663685B2 (en) Hybrid solid-state image pickup element and image pickup apparatus using the same
US20130250164A1 (en) Image capturing apparatus and in-focus position detection method thereof
CN104519327A (en) Image sensor and image acquisition system
US20190041559A1 (en) Solid-state imaging element, manufacturing method of the same, and electronic device
US8654224B2 (en) Composite imaging element and imaging device equipped with same
CN107408566A (en) Solid-state imaging apparatus and electronic installation
CN112557308B (en) electronic device
WO2021070867A1 (en) Electronic device
CN214381044U (en) Electronic device
EP4096212A1 (en) Electronic device
US10580816B2 (en) Solid-state imaging device, camera module, and electronic apparatus
CN113676617A (en) Motion detection method, motion detection device, electronic equipment and computer-readable storage medium
CN109387895B (en) Color filter array apparatus
US20240014233A1 (en) Image sensor, image data obtaining method, and imaging device
JP2021064853A (en) Defective pixel detection method, defective pixel detection device, program, and image sensor
CN114205497A (en) Image sensor, camera module and electronic equipment
CN112146726A (en) Dispersion type image sensor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant