WO2021075116A1 - Solid-state imaging device and electronic apparatus - Google Patents

Solid-state imaging device and electronic apparatus

Info

Publication number
WO2021075116A1
WO2021075116A1 (PCT/JP2020/028084)
Authority
WO
WIPO (PCT)
Prior art keywords
region
solid
light
semiconductor substrate
imaging region
Prior art date
Application number
PCT/JP2020/028084
Other languages
English (en)
Japanese (ja)
Inventor
Kohei Oi
Hironobu Fukagawa
Original Assignee
Sony Semiconductor Solutions Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corporation
Publication of WO2021075116A1

Classifications

    • H: ELECTRICITY
    • H01: ELECTRIC ELEMENTS
    • H01L: SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L 27/00: Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L 27/14: Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L 27/144: Devices controlled by radiation
    • H01L 27/146: Imager structures
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/70: SSIS architectures; Circuits associated therewith

Definitions

  • This technology relates to solid-state image sensors and electronic devices.
  • The technique of Patent Document 1 may be unable to further improve the image quality of a solid-state image sensor.
  • The present technology was made in view of this situation, and its main purpose is to provide a solid-state image sensor capable of further improving image quality, and an electronic device equipped with such a solid-state image sensor.
  • As a result, the present inventors succeeded in realizing higher image quality in a solid-state image sensor and completed the present technology.
  • A plurality of filters, each transmitting specific light, and a semiconductor substrate on which a photoelectric conversion unit is formed are provided.
  • An effective imaging region is formed in which a first light-shielding film is arranged between the filters that transmit the specific light.
  • A non-imaging region is formed in which a second light-shielding film is arranged between the filters and the semiconductor substrate so as to cover the light incident surface of the semiconductor substrate.
  • Examples of the filters that transmit specific light include a blue (B) filter that transmits blue (B) light, a green (G) filter that transmits green (G) light, a red (R) filter that transmits red (R) light, a transparent filter that transmits white (W) light, and an infrared (IR) filter that transmits infrared (IR) light.
  • the dug portion may be provided in the non-imaging region.
  • the dug portion may be provided in the effective imaging region.
  • The difference between the position of the light-incident-side upper end of the on-chip lens corresponding to the central region of the effective imaging region and that corresponding to the boundary region may be 100 nm or less.
  • The first light-shielding film and/or the second light-shielding film may be composed of at least one selected from the group consisting of a compound containing tungsten (W), titanium (Ti), and aluminum (Al).
  • The non-imaging region may be arranged around the outer periphery of the effective imaging region; the boundary region and its vicinity may form a rectangle in a plan view from the light incident side, and the dug portion may be formed on at least one of the four sides of that rectangle.
  • the shape of the dug portion on the boundary region side in the depth direction in cross-sectional view may have a gradient.
  • An insulating film may be arranged between the first light-shielding film and/or the second light-shielding film and the semiconductor substrate.
  • The first light-shielding film and/or the second light-shielding film may be in contact with the P-type semiconductor region or the N-type semiconductor region of the semiconductor substrate.
  • A plurality of filters, each transmitting specific light, and a semiconductor substrate on which a photoelectric conversion unit is formed are provided.
  • An effective imaging region is formed in which a first light-shielding film is arranged between the filters that transmit the specific light.
  • A non-imaging region is formed in which a second light-shielding film is arranged between the filters and the semiconductor substrate so as to cover the light incident surface of the semiconductor substrate.
  • A solid-state imaging device is thus provided in which a dug portion, formed by digging the light incident surface of the semiconductor substrate, is provided in the non-imaging region.
  • Examples of the filters that transmit specific light include a blue (B) filter that transmits blue (B) light, a green (G) filter that transmits green (G) light, a red (R) filter that transmits red (R) light, a transparent filter that transmits white (W) light, and an infrared (IR) filter that transmits infrared (IR) light.
  • The difference between the position of the light-incident-side upper end of the on-chip lens corresponding to the central region of the effective imaging region and that corresponding to the boundary region may be 100 nm or less.
  • The first light-shielding film and/or the second light-shielding film may be composed of at least one selected from the group consisting of a compound containing tungsten (W), titanium (Ti), and aluminum (Al).
  • the shape of the dug portion on the boundary region side between the effective imaging region and the non-imaging region in the depth direction in cross-sectional view may have a gradient.
  • An insulating film may be arranged between the second light-shielding film and the semiconductor substrate in the region of the dug portion.
  • the second light-shielding film may be in contact with the P-type semiconductor region or the N-type semiconductor region of the semiconductor substrate.
  • A plurality of filters, each transmitting specific light, and a semiconductor substrate on which a photoelectric conversion unit is formed are provided.
  • An effective imaging region is formed in which a first light-shielding film is arranged between the filters that transmit the specific light.
  • A non-imaging region is formed in which a second light-shielding film is arranged between the filters and the semiconductor substrate so as to cover the light incident surface of the semiconductor substrate.
  • A solid-state imaging device is thus provided in which a dug portion, formed by digging the light incident surface of the semiconductor substrate, is provided in the effective imaging region.
  • Examples of the filters that transmit specific light include a blue (B) filter that transmits blue (B) light, a green (G) filter that transmits green (G) light, a red (R) filter that transmits red (R) light, a transparent filter that transmits white (W) light, and an infrared (IR) filter that transmits infrared (IR) light.
  • The difference between the position of the light-incident-side upper end of the on-chip lens corresponding to the central region of the effective imaging region and that corresponding to the boundary region may be 100 nm or less.
  • The first light-shielding film and/or the second light-shielding film may be composed of at least one selected from the group consisting of a compound containing tungsten (W), titanium (Ti), and aluminum (Al).
  • the shape of the dug portion on the boundary region side between the effective imaging region and the non-imaging region in the depth direction in cross-sectional view may have a gradient.
  • an insulating film may be arranged between the first light-shielding film and the semiconductor substrate.
  • the first light-shielding film may be in contact with the P-type semiconductor region or the N-type semiconductor region of the semiconductor substrate.
  • The present technology also provides an electronic device equipped with the solid-state imaging device according to the first, second, or third aspect of the present technology.
  • FIG. 1 is described below. The other drawings include: a diagram showing a usage example of the solid-state image sensors of the first to ninth embodiments to which the present technology is applied; a functional block diagram of an example of the electronic device according to the tenth embodiment; a diagram showing an example of the schematic configuration of an endoscopic surgery system; a block diagram showing an example of the functional configuration of a camera head and a CCU; a block diagram showing an example of the schematic configuration of a vehicle control system; and an explanatory diagram showing an example of the installation positions of the vehicle exterior information detection unit and the imaging unit.
  • A solid-state image sensor such as a CMOS image sensor has an effective imaging region, in which imaging pixels are arranged in a matrix, at the center of the chip, and a non-imaging region is provided around the effective imaging region.
  • In the non-imaging region, the height is increased by a wiring film or a metal film, and a step is generated between the effective imaging region and the non-imaging region. As a result, in the peripheral portion of the effective imaging region, the light collected by the on-chip lens may fail to enter the center of the photoelectric conversion unit (for example, a photodiode (PD)). That is, the incident light does not properly enter each opening of the metal film, and part of it is blocked by the edge of the opening, so the pixels in the peripheral portion of the effective imaging region have lower sensitivity than those in the central portion. Consequently, the peripheral portion of the captured image becomes dark and, depending on the degree, the non-uniform sensitivity can render the product defective.
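The sensitivity falloff described above can be illustrated with a simple geometric sketch. All dimensions, function names, and the uniform-spot model here are hypothetical illustrations, not values or methods from the publication: the extra upper-layer height in the peripheral region shifts the focused spot sideways, and the part of the spot blocked by the edge of the metal-film opening reduces the pixel output.

```python
import math

def spot_shift_nm(step_height_nm: float, chief_ray_angle_deg: float) -> float:
    """Lateral shift of the focused spot caused by an upper-layer height step,
    for light arriving at the given chief-ray angle (simple ray model)."""
    return step_height_nm * math.tan(math.radians(chief_ray_angle_deg))

def relative_sensitivity(shift_nm: float, aperture_half_width_nm: float,
                         spot_half_width_nm: float) -> float:
    """Fraction of a uniform square spot that still lands inside the
    metal-film opening after being shifted sideways; 1.0 = no loss."""
    overlap = (min(aperture_half_width_nm, shift_nm + spot_half_width_nm)
               - max(-aperture_half_width_nm, shift_nm - spot_half_width_nm))
    full = 2 * spot_half_width_nm
    return max(0.0, overlap) / full

# Hypothetical numbers: a 300 nm step at a 25 degree chief-ray angle,
# a 1000 nm-wide opening, and a 900 nm-wide focused spot.
shift = spot_shift_nm(300, 25)          # about 140 nm of lateral shift
s = relative_sensitivity(shift, aperture_half_width_nm=500,
                         spot_half_width_nm=450)
```

In this toy model a central pixel (zero shift) keeps full sensitivity while the peripheral pixel loses part of its output, which is the mechanism behind the darkened image periphery described above.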
  • In one such solid-state imaging device, an insulating film is arranged from the effective imaging region to the non-imaging region, and a metal film of large thickness is arranged on the insulating film in the non-imaging region. The device is configured with an effective imaging region in which pixels, each formed of a photodiode serving as a photoelectric conversion unit and a plurality of pixel transistors, are arranged in a two-dimensional array, and with a non-imaging region having peripheral circuits.
  • Another solid-state image sensor such as a CMOS image sensor is configured as a single device by electrically connecting a semiconductor chip on which a pixel region with a plurality of pixels is formed and a semiconductor chip on which a logic circuit for signal processing is formed.
  • One example is a semiconductor module in which a back-illuminated image sensor chip having a micropad for each pixel cell and a signal processing chip having a signal processing circuit and micropads are connected by microbumps.
  • In such a solid-state imaging device, the non-imaging region has no peripheral circuit, but a metal film (light-shielding film) covering the light incident surface of the semiconductor substrate is formed on the back surface side for checking the black level and the like.
  • The effective imaging region has a grid-like light-shielding wall (light-shielding film) pattern in a plan view from the light incident side.
  • In the present technology, the surface of the semiconductor substrate (silicon substrate) under the metal film (light-shielding film) is dug in the boundary region between the effective imaging region and the non-imaging region and in the vicinity of this boundary region, in accordance with the step generated between the effective imaging region and the non-imaging region. This reduces the step (for example, in on-chip lens height) that occurs in the boundary region and the region near it (for example, between the boundary region and the effective imaging region).
  • FIG. 11 is a diagram showing a configuration of an example of a solid-state image sensor.
  • FIG. 11A is a plan layout view of the solid-state image sensor 1000, which is an example of the solid-state image sensor, from the light incident side.
  • the solid-state imaging device 1000 has an effective imaging region 1001 and a non-imaging region 1002, and the non-imaging region 1002 is arranged around the outer periphery of the effective imaging region 1001.
  • FIG. 11B is a cross-sectional view of the solid-state image sensor 1000 in the F region in the effective imaging region 1001 shown in FIG. 11A.
  • In order from the light incident side, an on-chip lens 611b, a red color filter 3 and a green color filter 4, an insulating film (for example, an oxide film) (not shown), and a semiconductor substrate 111b on which a photoelectric conversion unit (not shown) is formed are provided.
  • An insulating film 5 (for example, an oxide film) having a trench structure is formed on the semiconductor substrate 111b so as to divide the pixels.
  • a first light-shielding film 1 is arranged between the red color filter 3 and the green color filter 4.
  • the first light-shielding film 1 is arranged in a grid pattern between pixels in a plan view from the light incident side.
  • FIG. 11C is a cross-sectional view of the solid-state image sensor 1000 in the region E2 near the boundary region between the effective imaging region 1001 and the non-imaging region 1002 in the effective imaging region 1001 shown in FIG. 11A.
  • In order from the light incident side, an on-chip lens 611c, a red color filter 3 and a green color filter 4, an insulating film (for example, an oxide film) (not shown), and a semiconductor substrate 111c on which a photoelectric conversion unit (not shown) is formed are provided.
  • An insulating film 5 (for example, an oxide film) having a trench structure is formed on the semiconductor substrate 111c so as to divide the pixels.
  • a first light-shielding film 1 is arranged between the red color filter 3 and the green color filter 4.
  • the first light-shielding film 1 is arranged in a grid pattern between pixels in a plan view from the light incident side.
  • FIG. 11 (d) is a cross-sectional view of the solid-state image sensor 1000 in the G region in the non-imaging region 1002 shown in FIG. 11 (a).
  • In order from the light incident side, an on-chip lens 611d, a red color filter 3 and a green color filter 4, an insulating film (for example, an oxide film) (not shown), and a semiconductor substrate 111d on which a photoelectric conversion unit (not shown) is formed are provided.
  • An insulating film 5 (for example, an oxide film) having a trench structure is formed on the semiconductor substrate 111d so as to divide the pixels.
  • The second light-shielding film 2 is arranged in a solid pattern (over the entire light incident surface of the semiconductor substrate 111d) between the red color filter 3 and green color filter 4 with the insulating film (not shown) and the semiconductor substrate 111d, so as to cover the light incident surface (upper surface in FIG. 11) of the semiconductor substrate 111d.
  • the pattern of the first light-shielding film 1 (lattice-like in a plan view) and the pattern of the second light-shielding film 2 (solid in a plan view) are different.
  • X1 shown in FIG. 11(b) is flat (horizontal in FIG. 11(b)), and X3 shown in FIG. 11(d) is also flat (horizontal in FIG. 11(d)). In contrast, X2 shown in FIG. 11(c) rises toward the non-imaging region 1002 (to the right in FIG. 11(c)). That is, the E2 region shown in FIG. 11(c) is a region where a film-thickness difference arises in the upper layers (for example, the film thickness of the on-chip lens 611c increases toward the non-imaging region 1002), causing a local change in the film thickness of the color filters and the height of the on-chip lens due to the influence of the layout step. As shown in FIG. 10 described later, this difference in the laminated structure may cause the shading shape to collapse at image heights with a high incident angle.
  • FIG. 10A is a graph showing the result of the shading characteristic of the G (green) pixel, the vertical axis is the output, and the horizontal axis is the X address.
  • the X address is an axis (region) connecting the E1 region and the E2 region shown in FIG. 11 in the left-right direction in FIG.
  • The solid line A shows the shading characteristic of the G (green) pixels of the solid-state image sensor 1000, and the dotted line B shows that of the G (green) pixels of the solid-state image sensor according to the present technology. As shown in FIG. 10A, compared with the G (green) pixels of the solid-state image sensor 1000, the G (green) pixels of the solid-state image sensor according to the present technology show no color unevenness, and it can be seen that the shading characteristics are improved.
  • FIG. 10B is a graph showing the result of shading characteristics of R (red) pixels, the vertical axis is the output, and the horizontal axis is the X address.
  • the X address is an axis (region) connecting the E1 region and the E2 region shown in FIG. 11 in the left-right direction in FIG.
  • The solid line A shows the shading characteristic of the R (red) pixels of the solid-state image sensor 1000, and the dotted line B shows that of the R (red) pixels of the solid-state image sensor according to the present technology. As shown in FIG. 10B, compared with the R (red) pixels of the solid-state image sensor 1000, the R (red) pixels of the solid-state image sensor according to the present technology show no color unevenness, and it can be seen that the shading characteristics are improved.
  • FIG. 10C is a graph showing the result of the shading characteristic of the B (blue) pixel, the vertical axis is the output, and the horizontal axis is the X address.
  • the X address is an axis (region) connecting the E1 region and the E2 region shown in FIG. 11 in the left-right direction in FIG.
  • The solid line A shows the shading characteristic of the B (blue) pixels of the solid-state image sensor 1000, and the dotted line B shows that of the B (blue) pixels of the solid-state image sensor according to the present technology. As shown in FIG. 10C, compared with the B (blue) pixels of the solid-state image sensor 1000, the B (blue) pixels of the solid-state image sensor according to the present technology show no color unevenness, and it can be seen that the shading characteristics are improved.
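The improvement visible in FIG. 10 can be summarized by a single uniformity number over the output-vs-X-address profile. The sketch below is a hypothetical illustration: the function name and both profiles are synthetic placeholders, not measured data from the publication.

```python
def shading_uniformity(outputs: list) -> float:
    """Minimum output divided by maximum output across the X addresses;
    1.0 means perfectly flat shading, smaller values mean more falloff."""
    return min(outputs) / max(outputs)

# Synthetic profiles from the E1 region toward the E2 (boundary-side) region.
without_dug = [1.00, 0.99, 0.97, 0.93, 0.85]   # output collapses near E2
with_dug    = [1.00, 0.99, 0.98, 0.97, 0.96]   # step reduced by the dug portion

u_without = shading_uniformity(without_dug)    # 0.85
u_with = shading_uniformity(with_dug)          # 0.96
```

A higher uniformity value for the dug-portion profile corresponds to the flatter dotted curves B in FIG. 10 relative to the solid curves A.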
  • FIG. 1 is a cross-sectional view showing a configuration example (solid-state image sensor 100) of the solid-state image sensor of the first embodiment according to the present technology.
  • The solid-state image sensor of the first embodiment according to the present technology is not limited to the solid-state image sensor 100.
  • the solid-state imaging device 100 has an effective imaging region P1 and a non-imaging region Q1.
  • Reference numeral K1 in FIG. 1 indicates a boundary region between the effective imaging region P1 and the non-imaging region Q1.
  • In the effective imaging region P1, in order from the light incident side, an on-chip lens 61-1, a red color filter 3 and a green color filter 4, an insulating film (for example, an oxide film) 5-1, and a semiconductor substrate 11-1 on which a photoelectric conversion unit (not shown) is formed are provided.
  • An insulating film 5 (for example, an oxide film) having a trench structure is formed on the semiconductor substrate 11-1 so as to divide the pixels.
  • a first light-shielding film 1 is arranged between the red color filter 3 and the green color filter 4.
  • the first light-shielding film 1 is arranged in a grid pattern between pixels in a plan view from the light incident side.
  • The first light-shielding film 1 may be composed of at least one selected from the group consisting of a compound containing tungsten (W), titanium (Ti), and aluminum (Al).
  • In the non-imaging region Q1, in order from the light incident side, an on-chip lens 61-2, a red color filter 3 and a green color filter 4, an insulating film (for example, an oxide film) 5-1, and a semiconductor substrate 11-2 on which a photoelectric conversion unit (not shown) is formed are provided.
  • An insulating film 5 (for example, an oxide film) having a trench structure is formed on the semiconductor substrate 11-2 so as to divide the pixels.
  • The second light-shielding film 2 is arranged between the red color filter 3, the green color filter 4, and the insulating film 5-1 and the semiconductor substrate 11-2, so as to cover the light incident surface J1 (upper surface in FIG. 1) of the semiconductor substrate 11-2.
  • The second light-shielding film 2 may be composed of at least one selected from the group consisting of a compound containing tungsten (W), titanium (Ti), and aluminum (Al).
  • The dug portion H1 is formed by digging the light incident surface J1 of the semiconductor substrate 11-2, starting from the boundary region K1 between the effective imaging region P1 and the non-imaging region Q1 and extending toward the outside of the non-imaging region Q1 (in FIG. 1), at a substantially uniform depth in the depth direction of the semiconductor substrate 11-2 (downward in FIG. 1). Due to the formation of the dug portion H1, the thickness of the semiconductor substrate 11-2 (length in the vertical direction in FIG. 1) becomes smaller than the thickness of the semiconductor substrate 11-1.
  • Due to the formation of the dug portion H1, the step in surface position of the color filters (for example, the red color filter 3 and the green color filter 4) and the on-chip lens (for example, the on-chip lens 61-1) is reduced, so the difference in the position of the light-incident-side upper end of the on-chip lens 61-1 between the central region of the effective imaging region P1 and the boundary region K1 between the effective imaging region P1 and the non-imaging region Q1 (and the region near the boundary region K1) is reduced.
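The role of the dug portion can be viewed as a height-budget calculation: the dig depth offsets the extra upper-layer height in the non-imaging region so that the on-chip-lens apex difference between the central region and the boundary region stays within the 100 nm figure quoted earlier in this publication. The sketch below assumes the dig directly offsets the step; the function names and the step/depth values are hypothetical.

```python
def lens_apex_difference_nm(step_height_nm: float, dig_depth_nm: float) -> float:
    """Residual on-chip-lens apex height difference between the central
    region and the boundary region, assuming the dug portion directly
    offsets the upper-layer step (simplified model)."""
    return abs(step_height_nm - dig_depth_nm)

def meets_spec(step_height_nm: float, dig_depth_nm: float,
               limit_nm: float = 100.0) -> bool:
    """True when the residual apex difference is within the limit."""
    return lens_apex_difference_nm(step_height_nm, dig_depth_nm) <= limit_nm

# Hypothetical: a 300 nm layout step compensated by a 250 nm dig
# leaves a 50 nm residual, inside the 100 nm criterion.
ok = meets_spec(300.0, 250.0)
```

With no dig at all (depth 0), the same 300 nm step would leave the full 300 nm residual and fail the criterion in this simplified model.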
  • FIG. 2A is a cross-sectional view showing a configuration example (solid-state image sensor 200a) of the solid-state image sensor of the second embodiment according to the present technology.
  • The solid-state image sensor of the second embodiment according to the present technology is not limited to the solid-state image sensor 200a.
  • the solid-state imaging device 200a has an effective imaging region P2 and non-imaging regions Q2 and Q3.
  • Reference numeral K2 in FIG. 2 indicates a boundary region between the effective imaging region P2 and the non-imaging regions Q2 and Q3.
  • In the effective imaging region P2, in order from the light incident side, an on-chip lens 62a-1, a red color filter 3 and a green color filter 4, an insulating film (for example, an oxide film) 5-1, and a semiconductor substrate 12a-1 on which a photoelectric conversion unit (not shown) is formed are provided.
  • An insulating film 5 (for example, an oxide film) having a trench structure is formed on the semiconductor substrate 12a-1 so as to divide the pixels.
  • a first light-shielding film 1 is arranged between the red color filter 3 and the green color filter 4.
  • the first light-shielding film 1 is arranged in a grid pattern between pixels in a plan view from the light incident side.
  • The first light-shielding film 1 may be composed of at least one selected from the group consisting of a compound containing tungsten (W), titanium (Ti), and aluminum (Al).
  • In the non-imaging region Q2, an insulating film (for example, an oxide film) 5-1 and a semiconductor substrate 12a-2-1 on which a photoelectric conversion unit (not shown) is formed are provided in order from the light incident side.
  • An insulating film 5 (for example, an oxide film) having a trench structure is formed on the semiconductor substrate 12a-2-1 so as to divide the pixels.
  • The second light-shielding film 2 is arranged between the red color filter 3, the green color filter 4, and the insulating film 5-1 and the semiconductor substrate 12a-2-1, so as to cover the light incident surface J2a (upper surface in FIG. 2A) of the semiconductor substrate 12a-2-1.
  • The second light-shielding film 2 in the non-imaging region Q2 may be composed of at least one selected from the group consisting of a compound containing tungsten (W), titanium (Ti), and aluminum (Al).
  • In the non-imaging region Q3, an insulating film (for example, an oxide film) 5-1 and a semiconductor substrate 12a-2-2 on which a photoelectric conversion unit (not shown) is formed are provided in order from the light incident side.
  • An insulating film 5 (for example, an oxide film) having a trench structure is formed on the semiconductor substrate 12a-2-2 so as to divide the pixels.
  • The second light-shielding film 2 is arranged between the red color filter 3, the green color filter 4, and the insulating film 5-1 and the semiconductor substrate 12a-2-2, so as to cover the light incident surface J2a (upper surface in FIG. 2A) of the semiconductor substrate 12a-2-2.
  • The second light-shielding film 2 in the non-imaging region Q3 may be composed of at least one selected from the group consisting of a compound containing tungsten (W), titanium (Ti), and aluminum (Al).
  • The dug portion H2a is formed by digging the light incident surface J2a of the semiconductor substrate 12a-2-2, from the digging start position L1 in the non-imaging region Q3 toward the outside of the non-imaging region Q3 (to the right in FIG. 2A), at a substantially uniform depth in the depth direction of the semiconductor substrate 12a-2-2 (downward in FIG. 2A). Due to the formation of the dug portion H2a, the thickness of the semiconductor substrate 12a-2-2 (length in the vertical direction in FIG. 2A) is smaller than the thickness of the semiconductor substrates 12a-1 and 12a-2-1. A dug portion may also be formed by digging the light incident surface (upper surface in FIG. 2A) of the semiconductor substrate 12a-1, from a digging start position in the effective imaging region P2 toward the central region of the effective imaging region P2 (to the left in FIG. 2A), at a substantially uniform depth in the depth direction of the semiconductor substrate 12a-1 (downward in FIG. 2A).
  • Due to the formation of the dug portion, the step in surface position of the color filters (for example, the red color filter 3 and the green color filter 4) and the on-chip lens (for example, the on-chip lens 62a-1) is reduced.
  • T4 (corresponding to the boundary region K2 and the vicinity region of the boundary region K2)
  • FIG. 2B is a cross-sectional view showing a configuration example (solid-state image sensor 200b) of the solid-state image sensor according to the third embodiment according to the present technology.
  • The solid-state image sensor of the third embodiment according to the present technology is not limited to the solid-state image sensor 200b.
  • The solid-state image sensor 200b has effective imaging regions P3 and P4 and a non-imaging region Q4.
  • Reference numeral K2 in FIG. 2 indicates a boundary region between the effective imaging regions P3 and P4 and the non-imaging region Q4.
  • An insulating film (for example, an oxide film) and a semiconductor substrate 12b-1-1 on which a photoelectric conversion unit (not shown) is formed are provided in order from the light incident side.
  • An insulating film 5 (for example, an oxide film) having a trench structure is formed on the semiconductor substrate 12b-1-1 so as to divide the pixels.
  • a first light-shielding film 1 is arranged between the red color filter 3 and the green color filter 4.
  • The first light-shielding film 1 is arranged in a grid pattern between pixels in a plan view from the light incident side.
  • The first light-shielding film 1 may be composed of at least one selected from the group consisting of a compound containing tungsten (W), titanium (Ti), and aluminum (Al).
  • An insulating film (for example, an oxide film) 5-1 and a semiconductor substrate 12b-1-2 on which a photoelectric conversion unit (not shown) is formed are arranged in this order from the light incident side.
  • An insulating film 5 (for example, an oxide film) having a trench structure is formed on the semiconductor substrate 12b-1-2 so as to divide the pixels.
  • A first light-shielding film 1 is arranged between the red color filter 3 and the green color filter 4. The first light-shielding film 1 is arranged in a grid pattern between pixels in a plan view from the light incident side.
  • The dug portion H2b-1 is formed by digging the light incident surface J2b-1 of the semiconductor substrate 12b-1-2, from the digging start position L1 in the effective imaging region P3 to the boundary region K2 between the effective imaging region P3 and the non-imaging region Q4 and the region near the boundary region K2, at a substantially uniform depth in the depth direction of the semiconductor substrate 12b-1-2 (downward in FIG. 2B). Due to the formation of the dug portion H2b-1, the thickness of the semiconductor substrate 12b-2 (length in the vertical direction in FIG. 2B) is smaller than the thickness of the semiconductor substrate 12b-1-1, and is substantially equal to the thickness of the semiconductor substrate 12b-1-2.
  • In order from the light incident side, an on-chip lens 62b-2, a red color filter 3 and a green color filter 4, an insulating film (for example, an oxide film) 5-1, and a semiconductor substrate 12b-2 on which a photoelectric conversion unit (not shown) is formed are provided.
  • An insulating film 5 (for example, an oxide film) having a trench structure is formed on the semiconductor substrate 12b-2 so as to divide the pixels.
  • The second light-shielding film 2, together with the red color filter 3, the green color filter 4, and the insulating film 5-1, is arranged so as to cover the light incident surface J2b (the upper surface in FIG. 2B) of the semiconductor substrate 12b-2.
  • The second light-shielding film 2 may be composed of at least one selected from the group consisting of a compound containing tungsten (W), titanium (Ti), and aluminum (Al).
  • The digging portion H2b-2 is formed by digging the light incident surface J2b of the semiconductor substrate 12b-2 at a substantially uniform depth in the depth direction of the semiconductor substrate 12b-2 (downward in FIG. 2B). The digging portion H2b-2 is formed continuously from the digging portion H2b-1 described above. Due to the formation of the dug portion H2b-2, the thickness of the semiconductor substrate 12b-2 (the length in the vertical direction in FIG. 2B) is smaller than the thickness of the semiconductor substrate 12b-1-1, and is substantially equal to the thickness of the semiconductor substrate 12b-1-2.
  • Due to the formation of the dug portions, the step in the surface position between the film thickness of the color filters (for example, the red color filter 3 and the green color filter 4) and the height of the on-chip lens (for example, the on-chip lens 62b-1-1) is reduced between the central region of the effective imaging region P4 and both the boundary region K2 between the effective imaging region P3 and the non-imaging region Q4 and the region near the boundary region K2 (the step being the difference between T6, corresponding to the boundary region K2 and the region near the boundary region K2, and t5, corresponding to the central region of the effective imaging region P4).
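The relation between the step T6 − t5 and the digging depth can be illustrated with a simple numeric sketch. All dimensions, the helper `lens_top_height`, and its parameters are hypothetical illustrations and are not taken from the embodiment; the sketch only shows that digging the substrate by the step height cancels the difference in lens-top position.

```python
# Hypothetical sketch of the surface-position step between the central region
# (lens top t5) and the boundary region (lens top T6); all values illustrative.

def lens_top_height(substrate_um, filter_um, lens_um):
    """Position of the on-chip lens upper end: substrate + color filter + lens."""
    return substrate_um + filter_um + lens_um

t5 = lens_top_height(substrate_um=3.0, filter_um=0.6, lens_um=0.5)  # central region
T6 = lens_top_height(substrate_um=3.0, filter_um=0.8, lens_um=0.6)  # boundary region
step_before = T6 - t5  # the step T6 - t5 described in the text

# Digging the boundary-region substrate by the step height cancels the difference.
dig_depth = step_before
T6_after = lens_top_height(substrate_um=3.0 - dig_depth, filter_um=0.8, lens_um=0.6)
step_after = T6_after - t5

print(step_before, step_after)
```

The point of the sketch is only the cancellation: whatever thickness values are assumed, removing substrate material equal to the step brings the two lens-top positions into line.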
  • The contents described for the solid-state image sensor of the third embodiment (Example 3 of the solid-state image sensor) according to the present technology can be applied, unless there is a particular technical contradiction, to the solid-state image sensors of the first and second embodiments according to the present technology described above and to the solid-state image sensors of the fourth to ninth embodiments according to the present technology described later.
  • FIG. 3 is a cross-sectional view showing a configuration example (solid-state image sensor 300) of the solid-state image sensor of the fourth embodiment according to the present technology.
  • The solid-state image sensor of the fourth embodiment according to the present technology is not limited to the solid-state image sensor 300.
  • the solid-state imaging device 300 has an effective imaging region P5 and a non-imaging region Q5.
  • Reference numeral K3 in FIG. 3 indicates a boundary region between the effective imaging region P5 and the non-imaging region Q5.
  • The on-chip lens 63a-1, the red color filter 3 and the green color filter 4, the insulating film (for example, an oxide film) 5-1, and the photoelectric conversion unit (not shown) are formed in this order from the light incident side on the semiconductor substrate 13a-1.
  • An insulating film 5 (for example, an oxide film) having a trench structure is formed on the semiconductor substrate 13a-1 so as to divide the pixels.
  • a first light-shielding film 1 is arranged between the red color filter 3 and the green color filter 4.
  • the first light-shielding film 1 is arranged in a grid pattern between pixels in a plan view from the light incident side.
  • The first light-shielding film 1 may be composed of at least one selected from the group consisting of a compound containing tungsten (W), titanium (Ti), and aluminum (Al).
  • The above-described components are likewise formed on the semiconductor substrate 13a-2.
  • An insulating film 5 (for example, an oxide film) having a trench structure is formed on the semiconductor substrate 13a-2 so as to divide the pixels.
  • The second light-shielding film 2, together with the red color filter 3, the green color filter 4, and the insulating film 5-1, is arranged so as to cover the light incident surface J3a (the upper surface in FIG. 3A) of the semiconductor substrate 13a-2.
  • The second light-shielding film 2 may be composed of at least one selected from the group consisting of a compound containing tungsten (W), titanium (Ti), and aluminum (Al).
  • The digging portion H3a is formed by digging the light incident surface J3a of the semiconductor substrate 13a-2, with the boundary region K3 between the effective imaging region P5 and the non-imaging region Q5 as the starting position, toward the outside of the non-imaging region Q5 (rightward in FIG. 3), at a substantially uniform depth in the depth direction of the semiconductor substrate 13a-2 (downward in FIG. 3). Due to the formation of the dug portion H3a, the thickness of the semiconductor substrate 13a-2 (the length in the vertical direction in FIG. 3A) is smaller than the thickness of the semiconductor substrate 13a-1.
  • At the starting position, the digging need not be performed straight down in the depth direction of the semiconductor substrate; the semiconductor substrate may instead be dug with a gradient in a cross-sectional view (for example, in an inverted taper shape) so that the film thickness of the semiconductor substrate does not fluctuate sharply in the boundary region K3 and the region near the boundary region K3. Details will be described with reference to FIG. 3(b).
  • Due to the formation of the dug portion H3a, the step in the surface position between the film thickness of the color filters (for example, the red color filter 3 and the green color filter 4) and the height of the on-chip lens (for example, the on-chip lens 63a-1) is reduced.
  • FIG. 3B is an enlarged cross-sectional view of the I portion shown in FIG. 3A.
  • Reference numeral 13b-1 in FIG. 3B indicates a semiconductor substrate 13b-1 formed in the effective imaging region P5, and reference numeral 13b-2 indicates a semiconductor substrate 13b-2 formed in the non-imaging region Q5.
  • The digging portion H3b is formed by digging the light incident surface J3b of the semiconductor substrate 13b-2, with the boundary region K3 between the effective imaging region P5 and the non-imaging region Q5 as the starting position, toward the outside of the non-imaging region Q5, at a substantially uniform depth in the depth direction of the semiconductor substrate 13b-2 (downward in FIG. 3).
  • At the starting position, the digging is performed so that the film thickness of the semiconductor substrate does not fluctuate sharply in the boundary region K3 and the region near the boundary region K3: the semiconductor substrate is dug with a gradient in a cross-sectional view so as to form an inverted taper shape S with respect to the reference line Z.
  • the dug portion H3b may be formed only in the non-imaging region Q5.
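The effect of the inverted-taper (gradient) transition can be sketched as a thickness profile that ramps gradually across the boundary region instead of dropping abruptly. The function, its parameters, and all dimensions below are hypothetical illustrations, not values from the embodiment.

```python
# Hypothetical thickness profile with an inverted-taper (gradient) transition at
# the boundary region K3; all dimensions are illustrative, not from the text.

def substrate_thickness_um(x_um, boundary_um=10.0, ramp_um=2.0,
                           full_um=3.0, dug_um=2.5):
    """Substrate thickness vs. lateral position x: full thickness inside the
    effective imaging region, a linear ramp across the boundary region (the
    taper), and the dug thickness in the non-imaging region."""
    if x_um <= boundary_um:
        return full_um
    if x_um >= boundary_um + ramp_um:
        return dug_um
    frac = (x_um - boundary_um) / ramp_um
    return full_um - frac * (full_um - dug_um)

# Thickness decreases gradually over the taper instead of dropping in one step.
profile = [substrate_thickness_um(x) for x in (9.0, 10.5, 11.0, 11.5, 13.0)]
print(profile)
```

The linear ramp is only one possible shape; the point is that the thickness varies monotonically and without a sharp step across the boundary region.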
  • The contents described for the solid-state image sensor of the fourth embodiment (Example 4 of the solid-state image sensor) according to the present technology can be applied, unless there is a particular technical contradiction, to the solid-state image sensors of the first to third embodiments according to the present technology described above and to the solid-state image sensors of the fifth to ninth embodiments according to the present technology described later.
  • FIG. 4 is a plan layout view showing a configuration example (solid-state image sensor 400) of the solid-state image sensor of the fifth embodiment according to the present technology.
  • FIG. 5A in FIG. 5 is a cross-sectional view showing a configuration example (solid-state imaging device 500a) of the solid-state imaging device of the fifth embodiment according to the present technology.
  • FIG. 5B is a cross-sectional view of the semiconductor substrates 15a-1b, 15a-2-1b, and 15a-2-2b taken along the line n1-n2 shown in FIG. 4; FIG. 5C is a cross-sectional view of the semiconductor substrates 15a-1c, 15a-2-1c, and 15a-2-2c taken along the line n3-n4 shown in FIG. 4; FIG. 5D is a cross-sectional view of the semiconductor substrates 15a-1d, 15a-2-1d, and 15a-2-2d taken along the line n5-n6 shown in FIG. 4; and FIG. 5E is a cross-sectional view of the semiconductor substrates 15a-1e, 15a-2-1e, and 15a-2-2e taken along the line n7-n8 shown in FIG. 4.
  • Reference numeral P6 indicates an effective imaging region
  • reference numeral Q6 indicates a non-imaging region in which a dug portion is formed
  • reference numeral Q7 indicates a non-imaging region.
  • FIG. 4 is a plan layout view of the solid-state image sensor 400 from the light incident side.
  • the solid-state image sensor 400 has an effective imaging region 401 and a non-imaging region 402, and the non-imaging region 402 is arranged around the outer periphery of the effective imaging region 401.
  • The four sides 403a, 403b, 403c, and 403d shown in FIG. 4 form a rectangle within the boundary region between the effective imaging region 401 and the non-imaging region 402 and the portion of the non-imaging region 402 in the vicinity of that boundary region.
  • the solid-state image sensor 500a shown in FIG. 5A has an effective imaging region P6 and non-imaging regions Q6 and Q7.
  • Reference numeral K5 in FIG. 5A indicates a boundary region between the effective imaging region P6 and the non-imaging regions Q6 and Q7.
  • The on-chip lens 65a-1, the red color filter 3 and the green color filter 4, the insulating film (for example, an oxide film) 5-1, and the photoelectric conversion unit (not shown) are arranged in this order from the light incident side on the semiconductor substrate 15a-1.
  • An insulating film 5 (for example, an oxide film) having a trench structure is formed on the semiconductor substrate 15a-1 so as to divide the pixels.
  • a first light-shielding film 1 is arranged between the red color filter 3 and the green color filter 4.
  • the first light-shielding film 1 is arranged in a grid pattern between pixels in a plan view from the light incident side.
  • The first light-shielding film 1 may be composed of at least one selected from the group consisting of a compound containing tungsten (W), titanium (Ti), and aluminum (Al).
  • An insulating film 5 (for example, an oxide film) having a trench structure (not shown in FIG. 5A) is formed on the semiconductor substrate 15a-2-1 so as to divide the pixels.
  • The second light-shielding film 2, together with the red color filter 3, the green color filter 4, and the insulating film 5-1, is arranged so as to cover the light incident surface J5a (the upper surface in FIG. 5A) of the semiconductor substrate 15a-2-1.
  • The second light-shielding film 2 may be composed of at least one selected from the group consisting of a compound containing tungsten (W), titanium (Ti), and aluminum (Al).
  • The digging portion H5a is formed by digging the light incident surface J5a of the semiconductor substrate 15a-2-1, with the boundary region K5 between the effective imaging region P6 and the non-imaging region Q6 as the starting position, toward the outside of the non-imaging region Q6 (rightward in FIG. 5A) up to the digging end position M1 (corresponding to the boundary region between the non-imaging region Q6 and the non-imaging region Q7), at a substantially uniform depth in the depth direction of the semiconductor substrate 15a-2-1 (downward in FIG. 5A). Due to the formation of the dug portion H5a, the thickness of the semiconductor substrate 15a-2-1 (the length in the vertical direction in FIG. 5A) is smaller than the thickness of the semiconductor substrate 15a-1, and is also smaller than the thickness of the semiconductor substrate 15a-2-2, in which no digging portion is formed.
  • Due to the formation of the dug portion H5a, the step in the surface position between the film thickness of the color filters (for example, the red color filter 3 and the green color filter 4) and the height of the on-chip lens (for example, the on-chip lens 65a-1) is reduced.
  • FIG. 5B is a cross-sectional view of the semiconductor substrates 15a-1b, 15a-2-1b, and 15a-2-2b taken along the line n1-n2 shown in FIG. 4.
  • the semiconductor substrate 15a-1b is formed in the effective imaging region P6, the semiconductor substrate 15a-2-1b is formed in the non-imaging region Q6, and the semiconductor substrate 15a-2-2b is formed in the non-imaging region Q7.
  • the digging portion H5b is formed by digging the light incident surface J5b of the semiconductor substrate 15a-2-1b until the thickness of the semiconductor substrate 15a-2-1b becomes d5b.
  • The width of the dug portion H5b (the length in the left-right direction in FIG. 5B) corresponds to the region width of the non-imaging region Q6, that is, to the width of the side 403a (the length in the vertical direction in FIG. 4). That is, the dug portion H5b is formed over the entire region of the side 403a.
  • FIG. 5C is a cross-sectional view of the semiconductor substrates 15a-1c, 15a-2-1c, and 15a-2-2c taken along the line n3-n4 shown in FIG. 4.
  • the semiconductor substrate 15a-1c is formed in the effective imaging region P6, the semiconductor substrate 15a-2-1c is formed in the non-imaging region Q6, and the semiconductor substrate 15a-2-2c is formed in the non-imaging region Q7.
  • the digging portion H5c is formed by digging the light incident surface J5c of the semiconductor substrate 15a-2-1c until the thickness of the semiconductor substrate 15a-2-1c becomes d5c.
  • The width of the dug portion H5c (the length in the left-right direction in FIG. 5C) corresponds to the region width of the non-imaging region Q6, that is, to the width of the side 403b (the length in the left-right direction in FIG. 4). That is, the dug portion H5c is formed over the entire region of the side 403b.
  • FIG. 5D is a cross-sectional view of the semiconductor substrates 15a-1d, 15a-2-1d, and 15a-2-2d taken along the line n5-n6 shown in FIG. 4.
  • the semiconductor substrate 15a-1d is formed in the effective imaging region P6, the semiconductor substrate 15a-2-1d is formed in the non-imaging region Q6, and the semiconductor substrate 15a-2-2d is formed in the non-imaging region Q7.
  • The light incident surface J5d of the semiconductor substrate 15a-2-1d is not dug, and no dug portion is formed. That is, no digging portion is formed in the entire region of the side 403c, and the side 403c is used as pixels; for example, those pixels are used for checking the black level.
  • FIG. 5E is a cross-sectional view of the semiconductor substrates 15a-1e, 15a-2-1e, and 15a-2-2e taken along the line n7-n8 shown in FIG. 4.
  • the semiconductor substrate 15a-1e is formed in the effective imaging region P6, the semiconductor substrate 15a-2-1e is formed in the non-imaging region Q6, and the semiconductor substrate 15a-2-2e is formed in the non-imaging region Q7.
  • the digging portion H5e is formed by digging the light incident surface J5e of the semiconductor substrate 15a-2-1e until the thickness of the semiconductor substrate 15a-2-1e becomes d5e.
  • The width of the dug portion H5e corresponds to the region width of the non-imaging region Q6, that is, to the width of the side 403d (the length in the left-right direction in FIG. 4). That is, the digging portion H5e is formed over the entire region of the side 403d.
  • The remaining thickness d5b at the digging portion H5b, the thickness d5c at the digging portion H5c, and the thickness d5e at the digging portion H5e are not limited, as long as the difference between the position of the upper end portion on the light incident side of the on-chip lens corresponding to the central region of the effective imaging region and the position of the upper end portion on the light incident side of the on-chip lens corresponding to the boundary region is 100 nm or less. Likewise, the width of the digging portion H5b, the width of the digging portion H5c, and the width of the digging portion H5e may be arbitrary as long as the difference between those two lens upper-end positions is 100 nm or less. In this example, the thicknesses satisfy d5b < d5e < d5c, but the order of the magnitudes of the thicknesses is not limited to this.
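The "100 nm or less" acceptance criterion above can be expressed as a simple check. The helper `step_ok` and all numeric positions below are hypothetical illustrations (only the 100 nm limit comes from the text); the sketch compares the lens upper-end position over the central region with candidate positions over the boundary region for different dug thicknesses.

```python
# Hypothetical check of the "100 nm or less" criterion for the on-chip-lens
# upper-end positions (all numeric values are illustrative, not from the text).

MAX_STEP_NM = 100  # allowed difference stated in the embodiment

def step_ok(center_top_nm, boundary_top_nm):
    """True if the lens upper-end positions differ by 100 nm or less."""
    return abs(center_top_nm - boundary_top_nm) <= MAX_STEP_NM

center_top = 4050  # nm, lens top over the central region (hypothetical)
# Lens tops over the boundary region for dug thicknesses d5b, d5e, d5c (hypothetical):
boundary_tops = {"d5b": 4060, "d5e": 4075, "d5c": 4110}

for name, top in boundary_tops.items():
    print(name, step_ok(center_top, top))
```

Any combination of dug thickness and width is acceptable under this reading, provided the resulting lens-top difference passes the check.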
  • In the solid-state image sensor 500a, the side 403c is used as pixels, but any one of the side 403a, the side 403b, and the side 403d may instead be used as pixels, or any two or more of the sides 403a, 403b, 403c, and 403d may be used as pixels.
  • The contents described for the solid-state image sensor of the fifth embodiment (Example 5 of the solid-state image sensor) according to the present technology can be applied, unless there is a particular technical contradiction, to the solid-state image sensors of the first to fourth embodiments according to the present technology described above and to the solid-state image sensors of the sixth to ninth embodiments according to the present technology described later.
  • FIG. 6 is a cross-sectional view showing a configuration example (solid-state image sensor 600) of the solid-state image sensor of the sixth embodiment according to the present technology.
  • FIG. 7A in FIG. 7 is a plan layout view showing a configuration example (solid-state imaging device 700a) of the solid-state imaging device of the sixth embodiment according to the present technology.
  • FIG. 7B is a cross-sectional view of the semiconductor substrates 17-1b, 17-2-1b, and 17-2-2b taken along the line n9-n10 shown in FIG. 7A, and FIG. 7C is a cross-sectional view of the semiconductor substrates 17-1c, 17-2-1c, and 17-2-2c taken along the line n11-n12 shown in FIG. 7A.
  • the solid-state imaging device 600 has an effective imaging region P7 and non-imaging regions Q8 and Q9.
  • Reference numeral K6 in FIG. 6 indicates a boundary region between the effective imaging region P7 and the non-imaging regions Q8 and Q9.
  • The on-chip lens 66-1, the red color filter 3 and the green color filter 4, the insulating film (for example, the oxide film) 5-1, and the photoelectric conversion unit (not shown) are formed in this order from the light incident side on the semiconductor substrate 16-1.
  • An insulating film 5 (for example, an oxide film) having a trench structure is formed on the semiconductor substrate 16-1 so as to divide the pixels.
  • a first light-shielding film 1 is arranged between the red color filter 3 and the green color filter 4.
  • the first light-shielding film 1 is arranged in a grid pattern between pixels in a plan view from the light incident side.
  • The first light-shielding film 1 may be composed of at least one selected from the group consisting of a compound containing tungsten (W), titanium (Ti), and aluminum (Al).
  • The insulating film (for example, the oxide film) 5-1 and the photoelectric conversion unit (not shown) are formed on the semiconductor substrate 16-2-1.
  • An insulating film 5 (for example, an oxide film) having a trench structure is formed on the semiconductor substrate 16-2-1 so as to divide the pixels.
  • The second light-shielding film 2, together with the red color filter 3, the green color filter 4, and the insulating film 5-1, is arranged so as to cover the light incident surface J6-1 (the upper surface in FIG. 6) of the semiconductor substrate 16-2-1.
  • The second light-shielding film 2 in the non-imaging region Q8 may be composed of at least one selected from the group consisting of a compound containing tungsten (W), titanium (Ti), and aluminum (Al).
  • The digging portion H6-1 is formed by digging the light incident surface J6-1 of the semiconductor substrate 16-2-1, with the boundary region K6 between the effective imaging region P7 and the non-imaging region Q8 as the starting position, toward the outside of the non-imaging region Q8 up to the digging end position N1, at a substantially uniform depth in the depth direction of the semiconductor substrate 16-2-1 (downward in FIG. 6). Due to the formation of the dug portion H6-1, the thickness of the semiconductor substrate 16-2-1 (the length in the vertical direction in FIG. 6) is smaller than the thickness of the semiconductor substrate 16-1. However, the thickness of the semiconductor substrate 16-2-1 is larger than the thickness of the semiconductor substrate 16-2-2, in which the digging portion H6-2 described later is formed in a stepped manner deeper than the digging portion H6-1 (downward in FIG. 6).
  • In order from the light incident side, an on-chip lens 66-2-2, an antireflection film 7 (for example, a silicon nitride film), a red color filter 3, a green color filter 4, an insulating film (for example, an oxide film) 5-1, and a photoelectric conversion unit (not shown) are formed on the semiconductor substrate 16-2-2.
  • An insulating film 5 (for example, an oxide film) having a trench structure is formed on the semiconductor substrate 16-2-2 so as to divide the pixels.
  • The second light-shielding film 2, together with the red color filter 3, the green color filter 4, and the insulating film 5-1, is arranged so as to cover the light incident surface J6-2 (the upper surface in FIG. 6) of the semiconductor substrate 16-2-2.
  • The second light-shielding film 2 in the non-imaging region Q9 may be composed of at least one selected from the group consisting of a compound containing tungsten (W), titanium (Ti), and aluminum (Al).
  • The digging portion H6-2 is formed by digging the light incident surface J6-2 of the semiconductor substrate 16-2-2, with the boundary region N1 between the non-imaging region Q8 and the non-imaging region Q9 as the starting position, toward the outside of the non-imaging region Q9 (rightward in FIG. 6), at a substantially uniform depth in the depth direction of the semiconductor substrate 16-2-2 (downward in FIG. 6), deeper than the depth of the digging portion H6-1. Due to the formation of the dug portion H6-2, the thickness of the semiconductor substrate 16-2-2 (the length in the vertical direction in FIG. 6) is smaller than the thickness of the semiconductor substrate 16-1. Furthermore, since the digging portion H6-2 is dug continuously in a stepped shape deeper than the digging portion H6-1 (downward in FIG. 6), the thickness of the semiconductor substrate 16-2-2 is smaller than the thickness of the semiconductor substrate 16-2-1.
  • Owing to the two-stage configuration of the digging portion H6-1 and the digging portion H6-2, the step between the film thickness of the color filters (for example, the red color filter 3 and the green color filter 4) and the height of the on-chip lens (for example, the on-chip lens 66-1) is reduced.
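The two-stage stepped dig can be sketched as a piecewise-constant thickness profile across the effective imaging region P7 and the non-imaging regions Q8 and Q9. The helper `thickness_um` and the micrometer values below are hypothetical illustrations, not dimensions from the embodiment; only the ordering (P7 thickest, then Q8, then Q9) reflects the description.

```python
# Hypothetical piecewise-constant thickness profile for the two-stage dig
# (values in micrometers are illustrative, not from the embodiment).

def thickness_um(region):
    """Remaining substrate thickness per region: P7 (no dig), Q8 (first-stage
    dig H6-1), Q9 (second-stage, deeper dig H6-2)."""
    profile = {"P7": 3.0, "Q8": 2.7, "Q9": 2.4}
    return profile[region]

# The stepped cross-section: thickness decreases from P7 to Q8 to Q9.
print([thickness_um(r) for r in ("P7", "Q8", "Q9")])
```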
  • FIG. 7A is a plan layout view of the solid-state image sensor 700a from the light incident side.
  • the solid-state imaging device 700a has an effective imaging region 701a and a non-imaging region 702a, and the non-imaging region 702a is arranged around the outer periphery of the effective imaging region 701a.
  • The four sides 703a, 703b, 703c, and 703d shown in FIG. 7A form a rectangle within the boundary region between the effective imaging region 701a and the non-imaging region 702a and the portion of the non-imaging region 702a in the vicinity of that boundary region.
  • FIG. 7B is a cross-sectional view of the semiconductor substrates 17-1b, 17-2-1b, and 17-2-2b taken along the line n9-n10 shown in FIG. 7A.
  • the semiconductor substrate 17-1b is formed in the effective imaging region P7
  • the semiconductor substrate 17-2-1b is formed in the non-imaging region Q8,
  • the semiconductor substrate 17-2-2b is formed in the non-imaging region Q9.
  • The digging portion H7b is formed by digging the light incident surface J7b of the semiconductor substrate 17-2-1b in a stepped shape in a cross-sectional view, so that the digging becomes deeper (downward in FIG. 7B) from the P7 region side of the Q8 region toward the Q9 region side of the Q8 region.
  • The width of the dug portion H7b corresponds to the region width of the non-imaging region Q8, that is, to the width of the side 703b (the length in the left-right direction in FIG. 7A). That is, the digging portion H7b is formed over the entire region of the side 703b.
  • The digging portion H7b may instead be formed on any one of the side 703a, the side 703c, and the side 703d, or may be formed on any two or more of the sides 703a, 703b, 703c, and 703d.
  • FIG. 7C is a cross-sectional view of the semiconductor substrates 17-1c, 17-2-1c, and 17-2-2c taken along the line n11-n12 shown in FIG. 7A.
  • the semiconductor substrate 17-1c is formed in the effective imaging region P7
  • the semiconductor substrate 17-2-1c is formed in the non-imaging region Q8,
  • the semiconductor substrate 17-2-2c is formed in the non-imaging region Q9.
  • The digging portion H7c is formed by digging the light incident surface J7c of the semiconductor substrate 17-2-1c with an inclination (gradient) in a cross-sectional view, so that the digging becomes deeper (downward in FIG. 7C) from the P7 region side of the Q8 region toward the Q9 region side of the Q8 region.
  • The width of the dug portion H7c corresponds to the region width of the non-imaging region Q8, that is, to the width of the side 703b (the length in the left-right direction in FIG. 7A). That is, the digging portion H7c is formed over the entire region of the side 703b.
  • The digging portion H7c may instead be formed on any one of the side 703a, the side 703c, and the side 703d, or may be formed on any two or more of the sides 703a, 703b, 703c, and 703d.
  • The contents described for the solid-state image sensor of the sixth embodiment (Example 6 of the solid-state image sensor) according to the present technology can be applied, unless there is a particular technical contradiction, to the solid-state image sensors of the first to fifth embodiments according to the present technology described above and to the solid-state image sensors of the seventh to ninth embodiments according to the present technology described later.
  • FIG. 8A is a cross-sectional view showing a configuration example (solid-state image sensor 800a) of the solid-state image sensor of the seventh embodiment according to the present technology.
  • The solid-state image sensor of the seventh embodiment according to the present technology is not limited to the solid-state image sensor 800a.
  • the solid-state imaging device 800a has an effective imaging region P8 and a non-imaging region Q10.
  • Reference numeral K8 in FIG. 8A indicates a boundary region between the effective imaging region P8 and the non-imaging region Q10.
  • The on-chip lens 68a-1, the red color filter 3, the green color filter 4, the insulating film (for example, an oxide film) 5-1, and the photoelectric conversion unit (not shown) are arranged in this order from the light incident side on the semiconductor substrate 18a-1.
  • An insulating film 5 (for example, an oxide film) having a trench structure is formed on the semiconductor substrate 18a-1 so as to divide the pixels.
  • a first light-shielding film 1 is arranged between the red color filter 3 and the green color filter 4.
  • the first light-shielding film 1 is arranged in a grid pattern between pixels in a plan view from the light incident side.
  • The first light-shielding film 1 may be composed of at least one selected from the group consisting of a compound containing tungsten (W), titanium (Ti), and aluminum (Al).
  • In order from the light incident side, an on-chip lens 68a-2, a red color filter 3 and a green color filter 4, and a photoelectric conversion unit (not shown) are formed on the semiconductor substrate 18a-2.
  • The second light-shielding film 2 is formed over the red color filter 3 and the green color filter 4 so as to cover the light incident surface J8a (the upper surface in FIG. 8A) of the P-type semiconductor region or N-type semiconductor region 180a-2 formed on the semiconductor substrate 18a-2.
  • The second light-shielding film 2 may be composed of at least one selected from the group consisting of a compound containing tungsten (W), titanium (Ti), and aluminum (Al).
  • The digging portion H8a is formed by digging the light incident surface J8a of the semiconductor substrate 18a-2 (the P-type semiconductor region or N-type semiconductor region 180a-2), with the boundary region K8 between the effective imaging region P8 and the non-imaging region Q10 as the starting position, toward the outside of the non-imaging region Q10 (rightward in FIG. 8A), at a substantially uniform depth in the depth direction of the semiconductor substrate 18a-2 (downward in FIG. 8A). Due to the formation of the dug portion H8a, the thickness of the semiconductor substrate 18a-2 (the length in the vertical direction in FIG. 8A) is smaller than the thickness of the semiconductor substrate 18a-1. The dug portion H8a may be formed only in the non-imaging region Q10.
  • Due to the formation of the dug portion H8a, the step in the surface position between the film thickness of the color filters (for example, the red color filter 3 and the green color filter 4) and the height of the on-chip lens (for example, the on-chip lens 68a-1) is reduced.
  • In the solid-state imaging device 800a, the recess is formed by digging the semiconductor substrate 18a-2 from the light incident surface J8a while removing the insulating film 5 (5-1). Therefore, in order to prevent the light-shielding film 2 and the semiconductor substrate 18a-2 from coming into contact with each other and conducting, the P-type semiconductor region or N-type semiconductor region 180a-2 is formed directly under the light-shielding film 2 so that the light-shielding film 2 does not come into contact with anything other than the P-type semiconductor region or N-type semiconductor region 180a-2.
• Unless there is a particular technical contradiction, the contents described for the solid-state image sensor of the seventh embodiment can be applied to the solid-state image sensors of the first to sixth embodiments according to the present technology described above and to those of the eighth and ninth embodiments according to the present technology described later.
• FIG. 8B is a cross-sectional view showing a configuration example (solid-state image sensor 800b) of the solid-state image sensor of the eighth embodiment according to the present technology.
  • solid-state image sensor of the eighth embodiment according to the present technology is not limited to the solid-state image sensor 800b.
  • the solid-state imaging device 800b has an effective imaging region P9 and a non-imaging region Q11.
  • Reference numeral K8 in FIG. 8B indicates a boundary region between the effective imaging region P9 and the non-imaging region Q11.
• The on-chip lens 68b-1, the red color filter 3 and the green color filter 4, the insulating film (for example, an oxide film) 5-1, and a photoelectric conversion unit (not shown) are formed in this order from the light incident side on the semiconductor substrate 18b-1.
  • An insulating film 5 (for example, an oxide film) having a trench structure is formed on the semiconductor substrate 18b-1 so as to divide the pixels.
  • a first light-shielding film 1 is arranged between the red color filter 3 and the green color filter 4.
  • the first light-shielding film 1 is arranged in a grid pattern between pixels in a plan view from the light incident side.
  • the first light-shielding film 1 may be composed of at least one selected from the group consisting of a compound containing tungsten (W) and titanium (Ti) and aluminum (Al).
• Similarly, the corresponding layers are formed in order from the light incident side on the semiconductor substrate 18b-2.
  • An insulating film 5 (for example, an oxide film) having a trench structure is formed on the semiconductor substrate 18b-2 so as to divide the pixels.
• The second light-shielding film 2 is arranged between the red color filter 3 and the green color filter 4 on one side and the insulating film 5-1 on the other, so as to cover the light incident surface J8b (the upper surface in FIG. 8B) of the semiconductor substrate 18b-2.
  • the second light-shielding film 2 may be composed of at least one selected from the group consisting of a compound containing tungsten (W) and titanium (Ti) and aluminum (Al).
• The dug portion H8b is formed by digging from the light incident surface J8b of the semiconductor substrate 18b-2, starting at the boundary region K8 between the effective imaging region P9 and the non-imaging region Q11, at a substantially uniform depth toward the outside of the non-imaging region Q11 (rightward in FIG. 8B) and in the depth direction of the semiconductor substrate 18b-2 (downward in FIG. 8B). Because of the dug portion H8b, the thickness of the semiconductor substrate 18b-2 (its vertical length in FIG. 8B) is smaller than the thickness of the semiconductor substrate 18b-1 (its vertical length in FIG. 8B).
  • the dug portion H8b may be formed only in the non-imaging region Q11.
• The formation of the dug portion H8b reduces the step in surface position caused by the film thickness of the color filters (for example, the red color filter 3 and the green color filter 4) and the height of the on-chip lens (for example, the on-chip lens 68b-1).
• In manufacturing, the semiconductor substrate 18b-2 is dug from the light incident surface J8b to form the dug portion, and then the insulating film 5 (5-1) is formed. Since an insulating film (interlayer insulating film) is thus arranged between the light-shielding film 2 and the semiconductor substrate 18b-2, the light-shielding film 2 and the semiconductor substrate 18b-2 do not conduct with each other.
• Unless there is a particular technical contradiction, the contents described for the solid-state image sensor of the eighth embodiment (example 8 of the solid-state image sensor) according to the present technology can be applied to the solid-state image sensors of the first to seventh embodiments according to the present technology described above and to that of the ninth embodiment according to the present technology described later.
• FIG. 9 is a cross-sectional view showing a configuration example (solid-state image sensor 900) of the solid-state image sensor of the ninth embodiment according to the present technology.
  • the solid-state image sensor of the ninth embodiment according to the present technology is not limited to the solid-state image sensor 900.
  • the solid-state imaging device 900 has an effective imaging region P10 and a non-imaging region Q12.
  • Reference numeral K9 in FIG. 9 indicates a boundary region between the effective imaging region P10 and the non-imaging region Q12.
• The on-chip lens 69-1, the transparent filter 34 (which may be formed from a part of the on-chip lens 69-1), and the insulating film (for example, an oxide film) 5-1 are formed in this order from the light incident side on a semiconductor substrate 19-1 in which a photoelectric conversion unit (not shown) is formed.
  • An insulating film 5 (for example, an oxide film) having a trench structure is formed on the semiconductor substrate 19-1 so as to divide the pixels.
  • a first light-shielding film 1 is arranged between the red color filter 3 and the green color filter 4.
  • the first light-shielding film 1 is arranged in a grid pattern between pixels in a plan view from the light incident side.
  • the first light-shielding film 1 may be composed of at least one selected from the group consisting of a compound containing tungsten (W) and titanium (Ti) and aluminum (Al).
• Similarly, the on-chip lens 69-2, the transparent filter 34 (which may be formed from a part of the on-chip lens 69-2), and the insulating film (for example, an oxide film) 5-1 are formed in this order from the light incident side on a semiconductor substrate 19-2 in which a photoelectric conversion unit (not shown) is formed.
  • An insulating film 5 (for example, an oxide film) having a trench structure is formed on the semiconductor substrate 19-2 so as to divide the pixels.
• The second light-shielding film 2 is arranged between the transparent filter 34 (on-chip lens 69-2) and the insulating film 5-1 so as to cover the light incident surface J9 (the upper surface in FIG. 9) of the semiconductor substrate 19-2.
  • the second light-shielding film 2 may be composed of at least one selected from the group consisting of a compound containing tungsten (W) and titanium (Ti) and aluminum (Al).
• The dug portion H9 is formed by digging from the light incident surface J9 of the semiconductor substrate 19-2, starting at the boundary region K9 between the effective imaging region P10 and the non-imaging region Q12, at a substantially uniform depth toward the outside of the non-imaging region Q12 (rightward in FIG. 9) and in the depth direction of the semiconductor substrate 19-2 (downward in FIG. 9). Because of the dug portion H9, the thickness of the semiconductor substrate 19-2 (its vertical length in FIG. 9) is smaller than the thickness of the semiconductor substrate 19-1 (its vertical length in FIG. 9).
  • the dug portion H9 may be formed only in the non-imaging region Q12.
• The formation of the dug portion H9 reduces the step in surface position caused by the film thickness of the transparent filter 34 and the height of the on-chip lens (for example, the on-chip lens 69-1).
• The solid-state image sensor 900 has the transparent filter 34 and is used for TOF (Time of Flight); that is, the reduction of steps by digging the semiconductor substrate is not limited to solid-state image sensors having color filters (for example, a blue (B) filter, a green (G) filter, a red (R) filter, and the like).
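The patent states only that the sensor 900 is used for TOF (Time of Flight). As a hedged illustration of the TOF principle itself (a minimal sketch, not part of the patent's method), direct TOF derives distance from the round-trip time of emitted light:

```python
# Minimal direct time-of-flight (TOF) ranging sketch.
# Assumption (not from the patent): distance is derived from the
# round-trip time of a light pulse as d = c * t / 2.

C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to the target given the measured round-trip time."""
    return C * round_trip_seconds / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m.
print(round(tof_distance(10e-9), 3))  # 1.499
```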
• Unless there is a particular technical contradiction, the contents described for the solid-state image sensor of the ninth embodiment can be applied to the solid-state image sensors of the first to eighth embodiments according to the present technology described above.
• The electronic device of the tenth embodiment according to the present technology is an electronic device equipped with the solid-state image sensor of any one of the first to ninth embodiments according to the present technology.
  • FIG. 12 is a diagram showing an example of using the solid-state image sensor of the first to ninth embodiments according to the present technology as an image sensor (solid-state image sensor).
• As described below, the solid-state image pickup devices of the first to ninth embodiments described above can be used in various cases of sensing light such as visible light, infrared light, ultraviolet light, and X-rays. That is, as shown in FIG. 12, they can be used in, for example, the field of appreciation, in which images are taken for viewing, and in the fields of transportation, home appliances, medical care and healthcare, security, beauty, and sports.
• The electronic device of the tenth embodiment described above can be equipped with the solid-state image sensor of any one of the first to ninth embodiments.
• In the field of appreciation, for example, the solid-state imaging device of any one of the first to ninth embodiments can be used in devices for taking images for viewing, such as digital cameras, smartphones, and mobile phones with a camera function.
• In the field of transportation, for example, the solid-state image sensor of any one of the first to ninth embodiments can be used in devices for traffic, such as in-vehicle sensors that photograph the front, rear, surroundings, and interior of a vehicle for safe driving (for example, automatic stop) and recognition of the driver's condition, surveillance cameras that monitor traveling vehicles and roads, and distance measuring sensors that measure the distance between vehicles.
• In the field of home appliances, for example, the solid-state imaging device of any one of the first to ninth embodiments can be used in devices such as television receivers, refrigerators, and air conditioners in order to photograph a user's gestures and operate the device according to those gestures.
• In the field of medical care and healthcare, for example, the solid-state imaging device of any one of the first to ninth embodiments can be used in devices such as endoscopes and devices that perform angiography by receiving infrared light.
• In the field of security, for example, the solid-state imaging device of any one of the first to ninth embodiments can be used in devices such as surveillance cameras for crime prevention and cameras for personal authentication.
• In the field of beauty, for example, the solid-state imaging device of any one of the first to ninth embodiments can be used in devices such as skin measuring instruments for photographing the skin and microscopes for photographing the scalp.
• In the field of sports, for example, the solid-state image sensor of any one of the first to ninth embodiments can be used in devices such as action cameras and wearable cameras for sports applications.
• In the field of agriculture, for example, the solid-state imaging device of any one of the first to ninth embodiments can be used in devices such as cameras for monitoring the state of fields and crops.
• The solid-state image sensor of any one of the first to ninth embodiments described above, for example the solid-state imaging device 101, can be applied to all types of electronic devices having an imaging function, such as camera systems (for example, digital still cameras and video cameras) and mobile phones having an imaging function.
  • FIG. 13 shows a schematic configuration of the electronic device 102 (camera) as an example.
• The electronic device 102 is, for example, a video camera capable of capturing still images or moving images, and includes a solid-state image sensor 101, an optical system (optical lens) 310, a shutter device 311, a driving unit 313 that drives the solid-state image sensor 101 and the shutter device 311, and a signal processing unit 312.
  • the optical system 310 guides the image light (incident light) from the subject to the pixel portion 101a of the solid-state image sensor 101.
  • the optical system 310 may be composed of a plurality of optical lenses.
  • the shutter device 311 controls the light irradiation period and the light blocking period of the solid-state image sensor 101.
  • the drive unit 313 controls the transfer operation of the solid-state image sensor 101 and the shutter operation of the shutter device 311.
  • the signal processing unit 312 performs various signal processing on the signal output from the solid-state image sensor 101.
  • the video signal Dout after signal processing is stored in a storage medium such as a memory, or is output to a monitor or the like.
  • FIG. 14 is a diagram showing an example of a schematic configuration of an endoscopic surgery system to which the technique according to the present disclosure (the present technique) can be applied.
  • FIG. 14 shows a surgeon (doctor) 11131 performing surgery on patient 11132 on patient bed 11133 using the endoscopic surgery system 11000.
• The endoscopic surgery system 11000 includes an endoscope 11100, other surgical tools 11110 such as a pneumoperitoneum tube 11111 and an energy treatment tool 11112, a support arm device 11120 that supports the endoscope 11100, and a cart 11200 equipped with various devices for endoscopic surgery.
  • the endoscope 11100 is composed of a lens barrel 11101 in which a region having a predetermined length from the tip is inserted into the body cavity of the patient 11132, and a camera head 11102 connected to the base end of the lens barrel 11101.
• In the illustrated example, the endoscope 11100 is configured as a so-called rigid scope having a rigid lens barrel 11101, but the endoscope 11100 may be configured as a so-called flexible scope having a flexible lens barrel.
  • An opening in which an objective lens is fitted is provided at the tip of the lens barrel 11101.
• A light source device 11203 is connected to the endoscope 11100; the light generated by the light source device 11203 is guided to the tip of the lens barrel by a light guide extending inside the lens barrel 11101 and irradiated toward the observation target in the body cavity of the patient 11132 through the objective lens.
• The endoscope 11100 may be a forward-viewing, oblique-viewing, or side-viewing endoscope.
  • An optical system and an image sensor are provided inside the camera head 11102, and the reflected light (observation light) from the observation target is focused on the image sensor by the optical system.
  • the observation light is photoelectrically converted by the image pickup device, and an electric signal corresponding to the observation light, that is, an image signal corresponding to the observation image is generated.
  • the image signal is transmitted as RAW data to the camera control unit (CCU: Camera Control Unit) 11201.
• The CCU 11201 is composed of a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and the like, and comprehensively controls the operations of the endoscope 11100 and the display device 11202. Further, the CCU 11201 receives an image signal from the camera head 11102 and performs various image processing on the image signal, such as development processing (demosaic processing), for displaying an image based on the image signal.
  • the display device 11202 displays an image based on the image signal processed by the CCU 11201 under the control of the CCU 11201.
  • the light source device 11203 is composed of, for example, a light source such as an LED (Light Emitting Diode), and supplies irradiation light to the endoscope 11100 when photographing an operating part or the like.
  • the input device 11204 is an input interface for the endoscopic surgery system 11000.
  • the user can input various information and input instructions to the endoscopic surgery system 11000 via the input device 11204.
• For example, the user inputs an instruction to change the imaging conditions (type of irradiation light, magnification, focal length, and so on) of the endoscope 11100.
  • the treatment tool control device 11205 controls the drive of the energy treatment tool 11112 for ablation of tissue, incision, sealing of blood vessels, and the like.
• The pneumoperitoneum device 11206 sends gas into the body cavity of the patient 11132 through the pneumoperitoneum tube 11111 to inflate it, for the purpose of securing the field of view of the endoscope 11100 and securing the working space of the operator.
  • Recorder 11207 is a device capable of recording various information related to surgery.
  • the printer 11208 is a device capable of printing various information related to surgery in various formats such as text, images, and graphs.
  • the light source device 11203 that supplies the irradiation light to the endoscope 11100 when photographing the surgical site can be composed of, for example, an LED, a laser light source, or a white light source composed of a combination thereof.
• When a white light source is configured by combining RGB laser light sources, the output intensity and output timing of each color (each wavelength) can be controlled with high accuracy, so the light source device 11203 can adjust the white balance of the captured image.
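The white-balance adjustment described above can be illustrated with a minimal per-channel gain sketch; the function name and gain values are illustrative assumptions, not from the patent:

```python
# Sketch: white-balance correction as per-channel gains, analogous to
# adjusting each laser color's output intensity. Gains are assumptions.

def apply_white_balance(pixel, gains=(1.2, 1.0, 1.5)):
    """Scale an (r, g, b) pixel by per-channel gains, clamped to 255."""
    return tuple(min(255, round(c * g)) for c, g in zip(pixel, gains))

print(apply_white_balance((100, 100, 100)))  # (120, 100, 150)
```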
• Further, the laser light from each of the RGB laser light sources may be irradiated onto the observation target in a time-division manner, and the drive of the image sensor of the camera head 11102 may be controlled in synchronization with the irradiation timing so that images corresponding to each of R, G, and B are captured in a time-division manner. According to this method, a color image can be obtained without providing a color filter on the image sensor.
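The time-division capture described above (sequential R, G, B illumination with a monochrome sensor) can be sketched as follows; the frame format and helper name are assumptions for illustration, not part of the patent:

```python
# Sketch: combine three sequentially captured monochrome frames
# (one per R, G, B illumination phase) into a color image.
# Frames are plain nested lists [row][col] of intensities (assumed format).

def merge_time_division(frame_r, frame_g, frame_b):
    """Return an image of (r, g, b) tuples from three monochrome frames."""
    if not (len(frame_r) == len(frame_g) == len(frame_b)):
        raise ValueError("frames must have the same size")
    return [
        [(r, g, b) for r, g, b in zip(row_r, row_g, row_b)]
        for row_r, row_g, row_b in zip(frame_r, frame_g, frame_b)
    ]

# 2x2 example: each frame holds the intensity seen under one laser color.
red   = [[10, 20], [30, 40]]
green = [[11, 21], [31, 41]]
blue  = [[12, 22], [32, 42]]
color = merge_time_division(red, green, blue)
print(color[0][0])  # (10, 11, 12)
```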
  • the drive of the light source device 11203 may be controlled so as to change the intensity of the output light at predetermined time intervals.
• By controlling the drive of the image sensor of the camera head 11102 in synchronization with the timing of changing the light intensity, acquiring images in a time-division manner, and synthesizing them, so-called high-dynamic-range images without blocked-up shadows or blown-out highlights can be generated.
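The high-dynamic-range synthesis described above can be sketched as a merge of a short- and a long-exposure frame; the saturation threshold, the weighting rule, and the exposure ratio are illustrative assumptions, not values from the patent:

```python
# Sketch: merge a short- and a long-exposure frame into one
# high-dynamic-range value per pixel. Pixels are 0..255.

SATURATED = 250  # long-exposure pixels at/above this are treated as clipped

def hdr_merge_pixel(short_px: int, long_px: int, exposure_ratio: float) -> float:
    """Return a radiance estimate; prefer the long exposure unless clipped."""
    if long_px >= SATURATED:
        # Long exposure is blown out: scale the short exposure up instead.
        return short_px * exposure_ratio
    return float(long_px)

def hdr_merge(short_frame, long_frame, exposure_ratio=4.0):
    return [
        [hdr_merge_pixel(s, l, exposure_ratio) for s, l in zip(rs, rl)]
        for rs, rl in zip(short_frame, long_frame)
    ]

short = [[10, 64]]
long_ = [[40, 255]]   # second pixel is clipped in the long exposure
print(hdr_merge(short, long_))  # [[40.0, 256.0]]
```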
  • the light source device 11203 may be configured to be able to supply light in a predetermined wavelength band corresponding to special light observation.
• In special light observation, for example, so-called narrow band imaging is performed: by utilizing the wavelength dependence of light absorption in body tissue, light in a narrower band than the irradiation light in normal observation (that is, white light) is irradiated, and predetermined tissue such as blood vessels in the surface layer of the mucous membrane is photographed with high contrast.
  • fluorescence observation may be performed in which an image is obtained by fluorescence generated by irradiating with excitation light.
• In fluorescence observation, the body tissue may be irradiated with excitation light and the fluorescence from the body tissue itself observed (autofluorescence observation), or a reagent such as indocyanine green (ICG) may be locally injected into the body tissue, which is then irradiated with excitation light corresponding to the fluorescence wavelength of the reagent to obtain a fluorescence image.
  • the light source device 11203 may be configured to be capable of supplying narrow band light and / or excitation light corresponding to such special light observation.
  • FIG. 15 is a block diagram showing an example of the functional configuration of the camera head 11102 and CCU11201 shown in FIG.
  • the camera head 11102 includes a lens unit 11401, an imaging unit 11402, a driving unit 11403, a communication unit 11404, and a camera head control unit 11405.
  • CCU11201 has a communication unit 11411, an image processing unit 11412, and a control unit 11413.
  • the camera head 11102 and CCU11201 are communicatively connected to each other by a transmission cable 11400.
  • the lens unit 11401 is an optical system provided at a connection portion with the lens barrel 11101.
  • the observation light taken in from the tip of the lens barrel 11101 is guided to the camera head 11102 and incident on the lens unit 11401.
  • the lens unit 11401 is configured by combining a plurality of lenses including a zoom lens and a focus lens.
  • the image pickup unit 11402 is composed of an image pickup element.
  • the image sensor constituting the image pickup unit 11402 may be one (so-called single plate type) or a plurality (so-called multi-plate type).
  • each image pickup element may generate an image signal corresponding to each of RGB, and a color image may be obtained by synthesizing them.
• The image pickup unit 11402 may be configured to have a pair of image pickup elements for acquiring image signals for the right eye and the left eye corresponding to 3D (three-dimensional) display.
  • the 3D display enables the operator 11131 to more accurately grasp the depth of the biological tissue in the surgical site.
  • a plurality of lens units 11401 may be provided corresponding to each image pickup element.
  • the imaging unit 11402 does not necessarily have to be provided on the camera head 11102.
  • the imaging unit 11402 may be provided inside the lens barrel 11101 immediately after the objective lens.
• The drive unit 11403 is composed of an actuator, and moves the zoom lens and the focus lens of the lens unit 11401 by a predetermined distance along the optical axis under the control of the camera head control unit 11405. As a result, the magnification and focus of the image captured by the imaging unit 11402 can be adjusted as appropriate.
  • the communication unit 11404 is composed of a communication device for transmitting and receiving various information to and from the CCU11201.
  • the communication unit 11404 transmits the image signal obtained from the image pickup unit 11402 as RAW data to the CCU 11201 via the transmission cable 11400.
  • the communication unit 11404 receives a control signal for controlling the drive of the camera head 11102 from the CCU 11201 and supplies the control signal to the camera head control unit 11405.
• The control signal includes, for example, information about imaging conditions, such as information specifying the frame rate of the captured image, information specifying the exposure value at the time of imaging, and/or information specifying the magnification and focus of the captured image.
• The imaging conditions such as the frame rate, exposure value, magnification, and focus may be appropriately specified by the user, or may be automatically set by the control unit 11413 of the CCU 11201 based on the acquired image signal.
• In the latter case, the endoscope 11100 is equipped with so-called AE (Auto Exposure), AF (Auto Focus), and AWB (Auto White Balance) functions.
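The AE function mentioned above can be illustrated with a minimal feedback loop that nudges exposure toward a target mean brightness; the target level, gain, and function name are assumptions for illustration, not from the patent:

```python
# Minimal auto-exposure (AE) feedback sketch. The target level and
# proportional gain are illustrative assumptions, not from the patent.

TARGET_MEAN = 118.0   # desired mean brightness (8-bit scale)
GAIN = 0.01           # proportional step per unit of brightness error

def next_exposure(current_exposure: float, frame) -> float:
    """Nudge the exposure time toward the target mean brightness."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    error = TARGET_MEAN - mean
    # Multiplicative update keeps the exposure positive.
    return max(1e-6, current_exposure * (1.0 + GAIN * error))

dark_frame = [[18, 20], [22, 24]]  # underexposed: mean 21 -> exposure rises
print(next_exposure(0.01, dark_frame) > 0.01)  # True
```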
  • the camera head control unit 11405 controls the drive of the camera head 11102 based on the control signal from the CCU 11201 received via the communication unit 11404.
  • the communication unit 11411 is composed of a communication device for transmitting and receiving various information to and from the camera head 11102.
  • the communication unit 11411 receives an image signal transmitted from the camera head 11102 via the transmission cable 11400.
  • the communication unit 11411 transmits a control signal for controlling the drive of the camera head 11102 to the camera head 11102.
  • Image signals and control signals can be transmitted by telecommunications, optical communication, or the like.
  • the image processing unit 11412 performs various image processing on the image signal which is the RAW data transmitted from the camera head 11102.
  • the control unit 11413 performs various controls related to the imaging of the surgical site and the like by the endoscope 11100 and the display of the captured image obtained by the imaging of the surgical site and the like. For example, the control unit 11413 generates a control signal for controlling the drive of the camera head 11102.
• Further, the control unit 11413 causes the display device 11202 to display a captured image showing the surgical site or the like, based on the image signal processed by the image processing unit 11412.
• Further, the control unit 11413 may recognize various objects in the captured image by using various image recognition techniques. For example, by detecting the shape, color, and the like of the edges of objects included in the captured image, the control unit 11413 can recognize surgical tools such as forceps, specific biological parts, bleeding, mist during use of the energy treatment tool 11112, and the like.
• Further, the control unit 11413 may superimpose various kinds of surgical support information on the image of the surgical site by using the recognition result. Superimposing the surgical support information and presenting it to the surgeon 11131 can reduce the burden on the surgeon 11131 and allow the surgery to proceed reliably.
  • the transmission cable 11400 that connects the camera head 11102 and CCU11201 is an electric signal cable that supports electric signal communication, an optical fiber that supports optical communication, or a composite cable thereof.
• In the illustrated example, communication is performed by wire using the transmission cable 11400, but communication between the camera head 11102 and the CCU 11201 may be performed wirelessly.
  • the above is an example of an endoscopic surgery system to which the technology according to the present disclosure can be applied.
  • the technique according to the present disclosure can be applied to the endoscope 11100, the camera head 11102 (imaging unit 11402), and the like among the configurations described above.
• Specifically, the solid-state image sensor according to the present technology can be applied to the image pickup unit 11402.
• Here, the endoscopic surgery system has been described as an example, but the technique according to the present disclosure may also be applied to other systems, for example, a microscopic surgery system.
  • the technology according to the present disclosure can be applied to various products.
• For example, the technology according to the present disclosure may be realized as a device mounted on any kind of moving body, such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
  • FIG. 16 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile control system to which the technique according to the present disclosure can be applied.
  • the vehicle control system 12000 includes a plurality of electronic control units connected via the communication network 12001.
  • the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, an outside information detection unit 12030, an in-vehicle information detection unit 12040, and an integrated control unit 12050.
  • a microcomputer 12051, an audio image output unit 12052, and an in-vehicle network I / F (interface) 12053 are shown as a functional configuration of the integrated control unit 12050.
  • the drive system control unit 12010 controls the operation of the device related to the drive system of the vehicle according to various programs.
• For example, the drive system control unit 12010 functions as a control device for a driving force generator for generating the driving force of the vehicle, such as an internal combustion engine or a driving motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like.
  • the body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs.
  • the body system control unit 12020 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as headlamps, back lamps, brake lamps, blinkers or fog lamps.
• Radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, may be input to the body system control unit 12020.
  • the body system control unit 12020 receives inputs of these radio waves or signals and controls a vehicle door lock device, a power window device, a lamp, and the like.
  • the vehicle outside information detection unit 12030 detects information outside the vehicle equipped with the vehicle control system 12000.
  • the image pickup unit 12031 is connected to the vehicle exterior information detection unit 12030.
  • the vehicle outside information detection unit 12030 causes the image pickup unit 12031 to capture an image of the outside of the vehicle and receives the captured image.
• The vehicle exterior information detection unit 12030 may perform object detection processing or distance detection processing for persons, vehicles, obstacles, signs, characters on the road surface, and the like, based on the received image.
  • the imaging unit 12031 is an optical sensor that receives light and outputs an electric signal according to the amount of the light received.
  • the image pickup unit 12031 can output an electric signal as an image or can output it as distance measurement information. Further, the light received by the imaging unit 12031 may be visible light or invisible light such as infrared light.
  • the in-vehicle information detection unit 12040 detects the in-vehicle information.
  • a driver state detection unit 12041 that detects the driver's state is connected to the in-vehicle information detection unit 12040.
• The driver state detection unit 12041 includes, for example, a camera that images the driver. Based on the detection information input from the driver state detection unit 12041, the in-vehicle information detection unit 12040 may calculate the degree of fatigue or concentration of the driver, or may determine whether the driver is dozing.
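The dozing determination mentioned above can be illustrated with a simple eye-closure-ratio rule over recent frames; the window representation and threshold are illustrative assumptions, not from the patent:

```python
# Sketch: decide whether the driver may be dozing from a sequence of
# per-frame eye-closure flags (True = eyes closed). The threshold is
# an illustrative assumption, not a value from the patent.

CLOSED_RATIO_THRESHOLD = 0.7  # fraction of recent frames with eyes closed

def is_dozing(eye_closed_flags) -> bool:
    """True if the eyes were closed in most of the recent frames."""
    flags = list(eye_closed_flags)
    if not flags:
        return False
    return sum(flags) / len(flags) >= CLOSED_RATIO_THRESHOLD

print(is_dozing([True, True, True, True, False]))   # True  (ratio 0.8)
print(is_dozing([False, True, False, False, False]))  # False (ratio 0.2)
```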
• The microcomputer 12051 can calculate the control target value of the driving force generator, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040, and can output a control command to the drive system control unit 12010.
• For example, the microcomputer 12051 can perform cooperative control for the purpose of realizing ADAS (Advanced Driver Assistance System) functions, including collision avoidance or impact mitigation, follow-up driving based on inter-vehicle distance, vehicle speed maintenance, collision warning, lane departure warning, and the like.
  • ADAS Advanced Driver Assistance System
  • the microcomputer 12051 can perform cooperative control aimed at automated driving, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generation device, the steering mechanism, the braking device, and the like based on the information around the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040.
  • the microcomputer 12051 can output a control command to the body system control unit 12020 based on the information outside the vehicle acquired by the vehicle exterior information detection unit 12030.
  • for example, the microcomputer 12051 can control the headlamps according to the position of the preceding vehicle or the oncoming vehicle detected by the vehicle exterior information detection unit 12030, and can perform cooperative control aimed at anti-glare, such as switching from high beam to low beam.
  • the audio image output unit 12052 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying the passengers of the vehicle or the outside of the vehicle of information.
  • an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are exemplified as output devices.
  • the display unit 12062 may include, for example, at least one of an onboard display and a heads-up display.
  • FIG. 17 is a diagram showing an example of the installation position of the imaging unit 12031.
  • the vehicle 12100 has image pickup units 12101, 12102, 12103, 12104, 12105 as the image pickup unit 12031.
  • the imaging units 12101, 12102, 12103, 12104, and 12105 are provided at positions such as the front nose, the side mirrors, the rear bumper, the back door, and the upper part of the windshield in the vehicle interior of the vehicle 12100, for example.
  • the imaging unit 12101 provided on the front nose and the imaging unit 12105 provided on the upper part of the windshield in the vehicle interior mainly acquire images in front of the vehicle 12100.
  • the imaging units 12102 and 12103 provided in the side mirrors mainly acquire images of the side of the vehicle 12100.
  • the imaging unit 12104 provided on the rear bumper or the back door mainly acquires an image of the rear of the vehicle 12100.
  • the images in front acquired by the imaging units 12101 and 12105 are mainly used for detecting a preceding vehicle or a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
  • FIG. 17 shows an example of the photographing range of the imaging units 12101 to 12104.
  • the imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 as viewed from above can be obtained.
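A minimal sketch of that superimposition step, under assumed calibration: each camera image is warped onto the ground plane with a known homography, and the warped views are averaged where they overlap. The homographies and image sizes below are placeholders for illustration, not calibration data from the source.

```python
import numpy as np

def warp_point(H: np.ndarray, xy: tuple) -> tuple:
    """Map a pixel through a 3x3 homography H (projective division included)."""
    x, y = xy
    p = H @ np.array([x, y, 1.0])
    return (p[0] / p[2], p[1] / p[2])

def birds_eye(images, homographies, out_shape):
    """Naive inverse warp: for each bird's-eye cell, average the views that see it."""
    acc = np.zeros(out_shape, dtype=float)
    cnt = np.zeros(out_shape, dtype=float)
    for img, H in zip(images, homographies):
        Hinv = np.linalg.inv(H)  # bird's-eye cell -> source pixel
        for v in range(out_shape[0]):
            for u in range(out_shape[1]):
                x, y = warp_point(Hinv, (u, v))
                xi, yi = int(round(x)), int(round(y))
                if 0 <= yi < img.shape[0] and 0 <= xi < img.shape[1]:
                    acc[v, u] += img[yi, xi]
                    cnt[v, u] += 1
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)

imgs = [np.ones((2, 2)), 3 * np.ones((2, 2))]
Hs = [np.eye(3), np.eye(3)]  # identity homographies, demo only
print(birds_eye(imgs, Hs, (2, 2))[0, 0])  # both views contribute -> 2.0
```

Real systems use per-camera extrinsics and fast GPU warps; the per-pixel loop here is only to make the geometry explicit.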
  • At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
  • for example, at least one of the imaging units 12101 to 12104 may be a stereo camera composed of a plurality of imaging elements, or may be an imaging element having pixels for phase difference detection.
  • the microcomputer 12051 obtains the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (the relative velocity with respect to the vehicle 12100) based on the distance information obtained from the imaging units 12101 to 12104, and can thereby extract, as a preceding vehicle, the nearest three-dimensional object that is on the traveling path of the vehicle 12100 and that travels in substantially the same direction as the vehicle 12100 at a predetermined speed (for example, 0 km/h or more).
  • further, the microcomputer 12051 can set in advance an inter-vehicle distance to be secured from the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, it is possible to perform cooperative control aimed at automated driving or the like in which the vehicle travels autonomously without depending on the driver's operation.
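That preceding-vehicle selection can be caricatured in a few lines. The `Track` record and the lane half-width are assumptions made for this sketch; the patent only states the selection criteria (nearest object on the ego path, moving in substantially the same direction at 0 km/h or more).

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Track:
    distance_m: float  # longitudinal distance from the ego vehicle
    lateral_m: float   # offset from the ego lane centre
    speed_kmh: float   # object speed along the ego direction (from the
                       # temporal change of distance plus ego speed)

def preceding_vehicle(tracks: List[Track],
                      lane_half_width_m: float = 1.8,
                      min_speed_kmh: float = 0.0) -> Optional[Track]:
    """Nearest object on the ego path moving forward at >= min_speed_kmh."""
    candidates = [t for t in tracks
                  if abs(t.lateral_m) <= lane_half_width_m
                  and t.speed_kmh >= min_speed_kmh]
    return min(candidates, key=lambda t: t.distance_m, default=None)

tracks = [Track(40.0, 0.2, 55.0),   # car ahead in lane
          Track(25.0, 3.5, 60.0),   # closer car, but in adjacent lane
          Track(60.0, -0.1, 50.0)]  # farther car in lane
lead = preceding_vehicle(tracks)
print(lead.distance_m)  # -> 40.0
```

The follow-up control loop would then regulate brake and acceleration to hold the preset gap to this track.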
  • for example, the microcomputer 12051 can classify three-dimensional object data related to three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, and other three-dimensional objects such as utility poles based on the distance information obtained from the imaging units 12101 to 12104, extract them, and use them for automatic avoidance of obstacles.
  • for example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult for the driver to see. Then, the microcomputer 12051 determines the collision risk indicating the degree of risk of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of collision, it can provide driving support for collision avoidance by outputting an alarm to the driver via the audio speaker 12061 or the display unit 12062 and by performing forced deceleration or avoidance steering via the drive system control unit 12010.
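The patent does not define the collision-risk metric; a common proxy, assumed here purely for illustration, is time-to-collision (TTC): distance divided by closing speed, with an alarm when TTC falls below a set value.

```python
def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact; infinity when the gap is not closing."""
    if closing_speed_mps <= 0.0:
        return float("inf")
    return distance_m / closing_speed_mps

def collision_alarm(distance_m: float, closing_speed_mps: float,
                    ttc_threshold_s: float = 2.0) -> bool:
    """True when the time-to-collision drops below the set value."""
    return time_to_collision(distance_m, closing_speed_mps) < ttc_threshold_s

print(collision_alarm(30.0, 20.0))  # 1.5 s to impact -> True
print(collision_alarm(30.0, 5.0))   # 6.0 s to impact -> False
```

The 2-second threshold is an invented placeholder; an actual system would also weigh visibility, object class, and avoidance options before forcing deceleration.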
  • At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays.
  • for example, the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian is present in the images captured by the imaging units 12101 to 12104.
  • such pedestrian recognition is performed by, for example, a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on a series of feature points indicating the outline of an object to determine whether or not the object is a pedestrian.
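The pattern-matching step can be caricatured as a binary-silhouette agreement score against a pedestrian template; the 3x3 template below is invented for illustration, and real systems match far richer feature-point descriptors than raw pixels.

```python
import numpy as np

def match_score(patch: np.ndarray, template: np.ndarray) -> float:
    """Fraction of pixels where the binary silhouette agrees with the template."""
    return float((patch == template).mean())

def is_pedestrian(patch: np.ndarray, template: np.ndarray,
                  threshold: float = 0.85) -> bool:
    """Accept the candidate outline when the agreement exceeds the threshold."""
    return match_score(patch, template) >= threshold

template = np.array([[0, 1, 0],
                     [1, 1, 1],
                     [0, 1, 0]])  # toy "upright figure" outline
candidate = template.copy()
candidate[0, 0] = 1              # one noisy pixel from the infrared image
print(is_pedestrian(candidate, template))  # 8/9 agreement -> True
```

A recognized match would then trigger the square contour overlay described next.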
  • when the microcomputer 12051 determines that a pedestrian is present in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the audio image output unit 12052 controls the display unit 12062 so as to superimpose and display a square contour line for emphasizing the recognized pedestrian. Further, the audio image output unit 12052 may control the display unit 12062 so as to display an icon or the like indicating the pedestrian at a desired position.
  • the above is an example of a vehicle control system to which the technology according to the present disclosure (the present technology) can be applied.
  • the technique according to the present disclosure can be applied to, for example, the imaging unit 12031 among the configurations described above.
  • specifically, the solid-state imaging device according to the present technology can be applied to the imaging unit 12031.
  • the present technology can also have the following configurations.
  • In order from the light incident side, at least an on-chip lens, a plurality of filters that transmit specific light, and a semiconductor substrate on which a photoelectric conversion unit is formed are provided.
  • an effective imaging region in which a first light-shielding film is arranged between the filters that transmit the specific light is formed.
  • a non-imaging region in which a second light-shielding film is arranged between the plurality of filters that transmit the specific light and the semiconductor substrate so as to cover the light incident surface of the semiconductor substrate is formed.
  • a solid-state imaging device in which a dug portion formed by digging the light incident surface of the semiconductor substrate is provided in a boundary region between the effective imaging region and the non-imaging region and a region in the vicinity of the boundary region.
  • the solid-state imaging device according to any one of [1] to [3], wherein the difference between the position of the upper end portion on the light incident side of the on-chip lens corresponding to the central region of the effective imaging region and the position of the upper end portion on the light incident side of the on-chip lens corresponding to the boundary region is 100 nm or less.
  • the solid-state imaging device according to any one of [1] to [4], wherein the first light-shielding film and/or the second light-shielding film is composed of at least one selected from the group consisting of a compound containing tungsten (W) and titanium (Ti), and aluminum (Al).
  • the solid-state imaging device according to any one of [1] to [5], wherein the non-imaging region is arranged around the outer periphery of the effective imaging region, the boundary region and the region in the vicinity of the boundary region form a rectangle in a plan view from the light incident side, and the dug portion is formed on at least one of the four sides of the rectangle.
  • the solid-state imaging device according to any one of [1] to [6], wherein the shape of the dug portion on the boundary region side in the depth direction in a cross-sectional view has a gradient.
  • the solid-state imaging device according to any one of [1] to [7], wherein the first light-shielding film and/or the second light-shielding film is in contact with a P-type semiconductor region or an N-type semiconductor region of the semiconductor substrate.
  • In order from the light incident side, at least an on-chip lens, a plurality of filters that transmit specific light, and a semiconductor substrate on which a photoelectric conversion unit is formed are provided. An effective imaging region in which a first light-shielding film is arranged between the filters that transmit the specific light is formed. A non-imaging region in which a second light-shielding film is arranged between the plurality of filters that transmit the specific light and the semiconductor substrate so as to cover the light incident surface of the semiconductor substrate is formed. A solid-state imaging device in which a dug portion formed by digging the light incident surface of the semiconductor substrate is provided in the non-imaging region.
  • the solid-state imaging device according to [10] or [11], wherein the first light-shielding film and/or the second light-shielding film is composed of at least one selected from the group consisting of a compound containing tungsten (W) and titanium (Ti), and aluminum (Al).
  • In order from the light incident side, at least an on-chip lens, a plurality of filters that transmit specific light, and a semiconductor substrate on which a photoelectric conversion unit is formed are provided.
  • an effective imaging region in which a first light-shielding film is arranged between the filters that transmit the specific light is formed.
  • a non-imaging region in which a second light-shielding film is arranged between the plurality of filters that transmit the specific light and the semiconductor substrate so as to cover the light incident surface of the semiconductor substrate is formed.
  • a solid-state imaging device in which a dug portion formed by digging the light incident surface of the semiconductor substrate is provided in the effective imaging region.
  • the solid-state imaging device according to [16] or [17], wherein the first light-shielding film and/or the second light-shielding film is composed of at least one selected from the group consisting of a compound containing tungsten (W) and titanium (Ti), and aluminum (Al).
  • … Effective imaging region; Q1, Q2, Q3, Q4, Q5, Q6, Q7, Q8, Q9, Q10, Q11, Q12, 402, 702a, 1002 … Non-imaging region; K1, K2, K3, K5, K6, K8, K9 … Boundary region; L1, L2 … Digging start position.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Power Engineering (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Electromagnetism (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Solid State Image Pick-Up Elements (AREA)

Abstract

The invention provides a solid-state imaging device with which the image quality of the solid-state imaging device can be further improved. The solid-state imaging device comprises, in order from a light incident side, at least: an on-chip lens; a plurality of filters that transmit specific light; and a semiconductor substrate on which a photoelectric conversion unit is formed. An effective imaging region is formed, in which a first light-shielding film is arranged between the filters that transmit specific light. A non-imaging region is formed, in which a second light-shielding film is arranged between the plurality of filters that transmit specific light and the semiconductor substrate so as to cover a light incident surface of the semiconductor substrate. A dug portion formed by digging the light incident surface of the semiconductor substrate is provided in a boundary region between the effective imaging region and the non-imaging region and in a region in the vicinity of the boundary region.
PCT/JP2020/028084 2019-10-16 2020-07-20 Dispositif d'imagerie à semi-conducteur et appareil électronique WO2021075116A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019189262A JP2021064721A (ja) 2019-10-16 2019-10-16 固体撮像装置及び電子機器
JP2019-189262 2019-10-16

Publications (1)

Publication Number Publication Date
WO2021075116A1 true WO2021075116A1 (fr) 2021-04-22

Family

ID=75488146

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/028084 WO2021075116A1 (fr) 2019-10-16 2020-07-20 Dispositif d'imagerie à semi-conducteur et appareil électronique

Country Status (2)

Country Link
JP (1) JP2021064721A (fr)
WO (1) WO2021075116A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022215694A1 (fr) 2021-04-06 2022-10-13 日本製鉄株式会社 Plaque d'acier à damier plaquée de zn-al-mg

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013182906A (ja) * 2012-02-29 2013-09-12 Sony Corp 固体撮像装置及びその製造方法、電子機器
WO2014156933A1 (fr) * 2013-03-29 2014-10-02 ソニー株式会社 Élément d'imagerie et dispositif d'imagerie
JP2019160847A (ja) * 2018-03-07 2019-09-19 ソニーセミコンダクタソリューションズ株式会社 固体撮像装置および固体撮像素子

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013182906A (ja) * 2012-02-29 2013-09-12 Sony Corp 固体撮像装置及びその製造方法、電子機器
WO2014156933A1 (fr) * 2013-03-29 2014-10-02 ソニー株式会社 Élément d'imagerie et dispositif d'imagerie
JP2019160847A (ja) * 2018-03-07 2019-09-19 ソニーセミコンダクタソリューションズ株式会社 固体撮像装置および固体撮像素子

Also Published As

Publication number Publication date
JP2021064721A (ja) 2021-04-22

Similar Documents

Publication Publication Date Title
JP6971722B2 (ja) 固体撮像装置および電子機器
JP2022044653A (ja) 撮像装置
WO2018221191A1 (fr) Appareil d'imagerie et dispositif électronique
JP6951866B2 (ja) 撮像素子
US11750932B2 (en) Image processing apparatus, image processing method, and electronic apparatus
WO2019049662A1 (fr) Puce de capteur et machine électronique
WO2019207978A1 (fr) Élément de capture d'image et procédé de fabrication d'élément de capture d'image
JPWO2020137203A1 (ja) 撮像素子および撮像装置
WO2019038999A1 (fr) Dispositif d'imagerie à semi-conducteur et son procédé de production
WO2022064853A1 (fr) Dispositif d'imagerie à semi-conducteurs et appareil électronique
WO2021075116A1 (fr) Dispositif d'imagerie à semi-conducteur et appareil électronique
KR20240037943A (ko) 촬상 장치
WO2021075117A1 (fr) Dispositif d'imagerie à semi-conducteurs et appareil électronique
WO2020195180A1 (fr) Élément d'imagerie et dispositif d'imagerie
JP2019050338A (ja) 撮像素子および撮像素子の製造方法、撮像装置、並びに電子機器
JPWO2020100697A1 (ja) 固体撮像素子、固体撮像装置及び電子機器
WO2023080011A1 (fr) Dispositif d'imagerie et appareil électronique
WO2023021787A1 (fr) Dispositif de détection optique et son procédé de fabrication
US20240153978A1 (en) Semiconductor chip, manufacturing method for semiconductor chip, and electronic device
US20240038807A1 (en) Solid-state imaging device
WO2023013393A1 (fr) Dispositif d'imagerie
WO2024029408A1 (fr) Dispositif d'imagerie
JP2023152522A (ja) 光検出装置
TW202316649A (zh) 攝像裝置
JP2023152523A (ja) 光検出装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20877906

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20877906

Country of ref document: EP

Kind code of ref document: A1