CN202713478U - Imaging apparatus - Google Patents

Imaging apparatus

Info

Publication number
CN202713478U
CN202713478U CN201220117378.6U
Authority
CN
China
Prior art keywords
pixel
division
image
exposure
imaging device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CN201220117378.6U
Other languages
Chinese (zh)
Inventor
徐辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
JIANGSU SMARTSENS TECHNOLOGY Ltd
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201220117378.6U priority Critical patent/CN202713478U/en
Application granted granted Critical
Publication of CN202713478U publication Critical patent/CN202713478U/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Landscapes

  • Solid State Image Pick-Up Elements (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)

Abstract

The utility model relates to an imaging apparatus comprising: a pixel array comprising a plurality of pixels arranged in rows and columns, wherein the pixel array comprises at least one group of split pixels, the split pixels having the same color and being adjacent to one another; and a control circuit for controlling the pixel array, wherein the control circuit controls the exposure time of each split pixel of the at least one group of split pixels. The imaging apparatus can obtain an image with a wider optical dynamic range than that of the imaging apparatus itself.

Description

Imaging device
Technical field
The utility model relates to the field of imaging, and in particular to an imaging device.
Background art
Requirements on picture quality keep rising, and high-quality images cannot be obtained without a good imaging device. Generally speaking, the quality of an imaging device can be examined from two aspects: one is the degree of pixel integration, i.e. the resolution of the obtained image; the other is the expressiveness of the obtained image. At present, image expressiveness has received the greater attention. In particular, obtaining high-quality, high-resolution images without resorting to structurally complex hardware has become a major direction of research and development in the imaging field, for example obtaining high-quality, high-resolution photographs with a portable imaging device such as a card-type camera.
An imaging device generally has a pixel array. Each pixel in the pixel array comprises a photosensitive element, such as a photodiode, a photogate or the like. The ability of each photosensitive element to receive light differs, and this difference is reflected in imaging devices having different optical dynamic ranges, i.e. the range of light intensity that the imaging device can receive. When the optical dynamic range of the imaging device is smaller than the variation of the ambient light intensity, the external scene cannot be fully reflected in the obtained image. A simple way of solving this problem has long been desired in the art.
Summary of the utility model
In view of the problems in the prior art, according to one aspect of the utility model an imaging device is proposed, comprising: a pixel array comprising a plurality of pixels arranged in rows and columns, the pixel array comprising at least one group of split pixels, the split pixels having the same color and being adjacent to one another; and a control circuit that controls the pixel array, the control circuit controlling the exposure time of each split pixel in the at least one group of split pixels.
In the imaging device described above, the split pixels are rectangular.
In the imaging device described above, the pixel array comprises an adjacent first group of split pixels and second group of split pixels, and the distance between two adjacent split pixels is smaller than the distance between the first group of split pixels and the second group of split pixels.
In the imaging device described above, each split pixel comprises a microlens and a photodiode, the microlens and the photodiode being arranged offset towards one side of the split pixel.
In the imaging device described above, each split pixel comprises a microlens layer, a color filter layer, an interconnect layer and a semiconductor layer, the color filter layer being between the microlens layer and the interconnect layer, and the semiconductor layer being under the interconnect layer.
In the imaging device described above, each split pixel comprises a photodiode in the semiconductor layer, and a P-well and a shallow trench isolation are present between two adjacent split pixels.
In the imaging device described above, the P-well is formed by three P-well implants, the energies of the three implants being approximately 150-260 keV, approximately 300-400 keV and approximately 500 keV, respectively.
In the imaging device described above, the width of the shallow trench isolation is approximately 0.1-0.3 µm, the width of the P-well is approximately 0.25-0.55 µm, and the depth of the P-well is 2-5 µm.
According to another aspect of the utility model, an imaging device is proposed, comprising: a pixel array comprising a plurality of pixels arranged in rows and columns, the pixel array comprising at least one group of split pixels; and a control circuit that controls the pixel array; wherein the control circuit exposes a first split pixel of the at least one group of split pixels within a first exposure time to produce a first image, and exposes a second split pixel of the at least one group of split pixels within a second exposure time to produce a second image; and wherein the control circuit further reads out the first image and the second image.
According to another aspect of the utility model, an imaging device is proposed, comprising: a pixel array comprising a plurality of pixels arranged in rows and columns, the pixel array comprising at least one group of split pixels; a control circuit that controls the pixel array, wherein the control circuit exposes a first split pixel of the at least one group of split pixels within a first exposure time to produce a first image, and exposes a second split pixel of the at least one group of split pixels within a second exposure time to produce a second image, the first exposure time being different from the second exposure time, and wherein the control circuit further reads out the first image and the second image; and an image processor that combines the first image and the second image.
The image obtained by the imaging device of the utility model can have a wider optical dynamic range than the imaging device itself.
Brief description of the drawings
Fig. 1 is a schematic diagram showing a structure of an imaging device;
Fig. 2 is a schematic diagram showing a representative pixel structure;
Fig. 3 is a schematic diagram showing the semiconductor structure of a representative pixel;
Fig. 4 is a schematic diagram of the pixel array of an imaging device according to an embodiment of the utility model;
Fig. 5 is a flow chart of an imaging method according to an embodiment of the utility model;
Fig. 6 is a schematic diagram of the pixel region structure according to an embodiment of the utility model;
Fig. 7 is a structural schematic of the split pixels according to an embodiment of the utility model;
Fig. 8 is a pixel structure schematic according to an embodiment of the utility model;
Fig. 9 is a circuit diagram of a pixel array according to an embodiment of the utility model;
Fig. 10 is a timing diagram in the high-resolution mode according to an embodiment of the utility model;
Fig. 11 is a timing diagram in the high-sensitivity mode according to an embodiment of the utility model;
Fig. 12 is a timing diagram in the high optical dynamic range mode according to an embodiment of the utility model;
Fig. 13 is a flow chart of a method of combining the images of two exposures of the split pixels according to an embodiment of the utility model;
Figs. 14a-14c are schematic diagrams of the combination algorithm of the embodiment shown in Fig. 13;
Fig. 15 is a circuit diagram of a pixel array according to an embodiment of the utility model;
Fig. 16 is a flow chart of a method of combining the images of four exposures of the split pixels according to an embodiment of the utility model;
Figs. 17a-17c are schematic diagrams of the combination algorithm of the embodiment shown in Fig. 16; and
Fig. 18 is a schematic diagram of a system according to an embodiment of the utility model.
Detailed description of the embodiments
In the following detailed description, reference is made to the accompanying drawings, which form a part of the application and illustrate specific embodiments of the application. In the drawings, similar reference numerals generally denote similar components in the different figures. The specific embodiments of the application are described below in sufficient detail to enable a person having ordinary knowledge and skill in the relevant art to implement the technical solutions of the application. It should be understood that other embodiments may be utilized, and that structural, logical or electrical changes may be made to the embodiments of the application.
The term "pixel" refers to an electronic component that contains a photosensitive element or another device for converting an electromagnetic signal into an electrical signal. For illustrative purposes, Fig. 1 depicts a representative imaging device comprising a pixel array. A representative pixel is depicted in Fig. 2, and all pixels in the pixel array are usually fabricated in a similar manner.
Fig. 1 is a schematic diagram showing a structure of an imaging device. The imaging device 100 shown in Fig. 1, for example a CMOS imaging device, comprises a pixel array 110. The pixel array 110 comprises a plurality of pixels arranged in rows and columns. All pixels in a row of the pixel array 110 are connected simultaneously by a row select line, and the pixels in each column are selectively output by a column select line. Each pixel has a row address and a column address. The row address of a pixel corresponds to the row select line driven by the row decoding and drive circuit 120, and the column address of a pixel corresponds to the column select line driven by the column decoding and drive circuit 130. The control circuit 140 controls the row decoding and drive circuit 120 and the column decoding and drive circuit 130 so as to selectively read out the pixel output signals corresponding to the appropriate rows and columns of the pixel array.
The pixel output signal comprises a pixel reset signal Vrst and a pixel image signal Vsig. The pixel reset signal Vrst represents the signal obtained from the floating diffusion region when the floating diffusion region of the photosensitive element (such as a photodiode) is reset. The pixel image signal Vsig represents the signal obtained after the charge representing the image acquired by the photosensitive element has been transferred to the floating diffusion region. The pixel reset signal Vrst and the pixel image signal Vsig are read out by a column sample-and-hold circuit 150 and subtracted by a differential amplifier 160. The Vrst-Vsig signal output by the differential amplifier 160 represents the image signal acquired by the photosensitive element. This image signal is converted into a digital signal by the analog-to-digital converter ADC 170 and then further processed by the image processor 180 to output a digitized image.
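As a minimal illustration of the readout arithmetic described above, the sketch below models only the subtraction performed by the differential amplifier 160; the array shapes and voltage values are hypothetical.

```python
import numpy as np

def correlated_double_sample(vrst, vsig):
    """Per-pixel image signal Vrst - Vsig, as output by the differential
    amplifier 160 before analog-to-digital conversion (illustrative model)."""
    return np.asarray(vrst, dtype=float) - np.asarray(vsig, dtype=float)

# Hypothetical 2x2 patch of sampled reset and signal levels (volts)
vrst = [[1.80, 1.79], [1.81, 1.80]]
vsig = [[1.20, 0.95], [1.40, 1.10]]
print(correlated_double_sample(vrst, vsig))  # differences: 0.60, 0.84, 0.41, 0.70
```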
Fig. 2 is a schematic diagram showing a representative pixel structure. The pixel 200 of Fig. 2 comprises a photodiode 202, a transfer transistor 204, a reset transistor 206, a source follower transistor 208 and a row select transistor 210. The photodiode 202 is connected to the source of the transfer transistor 204. The transfer transistor 204 is controlled by the signal TX. When TX switches the transfer transistor to the "on" state, the charge accumulated in the photodiode is transferred to the storage region 21, and at the same time the photodiode 202 is reset. The gate of the source follower transistor 208 is connected to the storage region 21. The source follower transistor 208 amplifies the signal received from the storage region 21. The source of the reset transistor 206 is also connected to the storage region 21. The reset transistor 206 is controlled by the signal RST and is used to reset the storage region 21. The pixel 200 further comprises a row select transistor 210. The row select transistor 210, controlled by the signal RowSel, outputs the signal amplified by the source follower transistor 208 to the output line Vout.
Fig. 3 is also a schematic diagram showing a representative pixel structure. Fig. 3 is not an abstract circuit schematic but a schematic of the concrete semiconductor structure. The pixel 300 of Fig. 3 comprises a photodiode 302 as the photosensitive element. The pixel 300 comprises a transfer gate 303, which together with the photodiode 302 and the storage region, i.e. the floating diffusion region 304, forms a transfer transistor. The pixel 300 also comprises a reset gate 305, which is connected between the floating diffusion region 304 and the active region 306 to reset the floating diffusion region 304. The active region 306 is connected to the voltage source Vaa. The pixel 300 further comprises a source follower gate 307, which is connected between the active regions 306 and 308 to form a source follower transistor; the source follower gate 307 is electrically coupled to the floating diffusion region 304 through the electrical connection 347. The pixel 300 further comprises a row select transistor gate 309, which is connected between the active region 308 and the active region 310 serving as the pixel output, forming a row select transistor.
The source/drain regions of the above transistors, the floating diffusion region, the channel regions under the gates between the source/drain regions, and the photodiode are, because of their doping, defined as active regions, which together with the gate structures define the active electronic devices.
Exposing the same image twice with different exposure times can increase the optical dynamic range of the imaging device. If the exposure time is long enough, the darker parts of the image can be fully reflected in the finally obtained image; but if the variation of the light intensity exceeds the dynamic range of the imaging device, the brighter parts appear as pure white in the finally obtained image, i.e. the light intensity variation information that exceeds the photosensitive range of the imaging device is lost. If the exposure time is short enough that the strongest light intensity in the image does not exceed the photosensitive range of the imaging device, the light intensity variation information of the brighter parts is preserved; however, because the exposure time is too short, there is not enough sampling and the information of the darker parts of the image is lost. The method of increasing the optical dynamic range of the imaging device with different exposure times according to the utility model takes both of these situations into account. The same image is exposed twice with different exposure times; then, in the subsequent processing, the results of both exposures are taken into account, so that the image information obtained by the two exposures is reflected in the finally obtained image. Because the final image preserves both the information of the brighter parts and the information of the darker parts of the image, it reflects a wider range of light intensity variation. The optical dynamic range of the imaging device can thus be improved without increasing any hardware cost.
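By way of illustration, and consistent with the calculations given later for Figs. 14 and 17, combining a long exposure of duration t_L with a short exposure of duration t_S extends the dynamic range by the ratio of the exposure times: ΔDR = 20·log10(t_L/t_S) dB, so a 2:1 ratio yields about 6 dB and an 8:1 ratio about 18 dB of additional dynamic range.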
The utility model innovatively proposes a concept of split pixels, whereby the optical dynamic range of the imaging device is substantially increased by multiple exposures without lowering the image resolution of the imaging device. Moreover, the imaging device of the utility model can switch freely between a high optical dynamic range mode and a non-high optical dynamic range mode.
Fig. 4 is a schematic diagram of the pixel array of an imaging device according to an embodiment of the utility model. As shown in the figure, the pixel array 400 is a color pixel array: R, G and B denote the colors red, green and blue respectively, S denotes a pixel with a short exposure time, and L denotes a pixel with a long exposure time. Thus pixel GS denotes a green pixel with a short exposure time, RL denotes a red pixel with a long exposure time, and so on. It can be clearly seen from the figure that the pixels in the pixel array 400 are not the usual squares (including near-squares); instead, each square pixel is split into two rectangular (including near-rectangular) split pixels. Of course, a pixel can also be split into two or more parts (sub-pixels or split pixels). According to an embodiment of the utility model, the split pixels can also be square. Taking the pixel group 410 as an example, it comprises pixels 411-418. The G pixel in the upper left corner is replaced by the two rectangular pixels GS 411 and GL 412, the R pixel in the upper right corner is replaced by the two rectangular pixels RS 413 and RL 414, the B pixel in the lower left corner is replaced by the two rectangular pixels BS 415 and BL 416, and the G pixel in the lower right corner is replaced by the two rectangular pixels GS 417 and GL 418. In this way, the original 4 pixels are split into 8 pixels. Because the split pixels can be regarded as coming from the same original pixel, the split pixels of a group have the same color and are adjacent to one another; and the distance between the split pixels of a group is smaller than the distance between the original pixels, i.e. between the groups of split pixels. For example, the distance between two directly adjacent split pixels is approximately 0.25 µm, while the distance between the groups of split pixels (i.e. the distance between the original, unsplit pixels) is approximately 0.5 µm.
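A minimal sketch of the split-pixel color pattern described above, assuming a standard RGGB Bayer arrangement in which each square pixel is split into a short-exposure ("S") and a long-exposure ("L") rectangle; the orientation of the split and all names are illustrative assumptions, and the patent figure remains authoritative.

```python
def split_bayer_pattern(rows, cols):
    """Return a 2-D list of labels such as 'GS', 'GL', 'RS', 'RL', ...
    Each original Bayer cell (RGGB) becomes two entries: short then long."""
    bayer = [['G', 'R'], ['B', 'G']]              # one RGGB cell
    pattern = []
    for r in range(rows):
        row = []
        for c in range(cols):
            color = bayer[r % 2][c % 2]
            row += [color + 'S', color + 'L']     # two split pixels per original pixel
        pattern.append(row)
    return pattern

for row in split_bayer_pattern(2, 2):
    print(' '.join(row))
# GS GL RS RL
# BS BL GS GL
```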
The exposure times of the two split pixels into which a pixel is split can be different. Each split pixel is read out to realize a different exposure. In the subsequent processing, a complete high dynamic range image is formed after all the split pixels are combined. If a high dynamic range image is not needed, all split pixels can instead be read out at the same time with the same exposure value; because the output signals of the split pixels can be summed, the low-light sensitivity is markedly increased in this way. At the same time, a seamless switch from the high optical dynamic range (High Dynamic Range, HDR) mode to the high-sensitivity non-HDR mode is realized. Of course, since the exposure time of each split pixel is independent, each split pixel can also be output as an independent pixel, thereby increasing the resolution of the imaging device.
Fig. 5 is a flow chart of an imaging method according to an embodiment of the utility model. As shown in Fig. 5, the imaging method 500 uses an imaging device comprising a pixel array to capture an image. The imaging device has a predetermined optical dynamic range. The pixel array of the imaging device comprises at least one group of pixels that can be regarded as split from one pixel and comprises at least a first split pixel and a second split pixel. In step 510, it is judged whether the light intensity variation of the image to be captured exceeds the optical dynamic range of the imaging device; if it does, the high optical dynamic range mode is started, otherwise the image is captured in the normal mode. Many existing imaging devices, for example digital cameras, carry a display screen to show the user in real time what the lens of the imaging device is pointed at. From the real-time image it can be judged whether the image is too bright or too dark and whether the details of interest are reflected, and it can thereby be decided directly whether the high optical dynamic range mode should be enabled. It should be noted that the display screen of the imaging device is mentioned only for illustration; the imaging device and imaging method of the utility model are not required to comprise a display screen.
Various methods can also be used to judge whether the light intensity of the image to be captured exceeds the optical dynamic range of the imaging device. For example, the judgment can be made from the mean brightness or the contrast of the image, or from the relation between the brightness or contrast of a region of interest and that of other regions. Generally speaking, an image has a region of interest (ROI, Region Of Interest), and the captured image should reflect the details of the region of interest as far as possible. Given that the details of the region of interest are well rendered, it is judged whether the other regions of the image are too bright or too dark, and it can thereby be determined whether the light intensity variation exceeds the optical dynamic range of the imaging device.
If the light intensity of the image to be captured exceeds the optical dynamic range of the imaging device, the method switches to the high optical dynamic range mode in step 520; otherwise the image is still captured in the normal mode. In step 530, the first split pixel of the at least one group of pixels across the whole pixel array is exposed with a first exposure time. In step 540, the second split pixel of the at least one group of pixels across the whole pixel array is exposed with a second exposure time. The first exposure time is different from the second exposure time. According to an embodiment of the utility model, the first exposure time is longer than the second exposure time; for example, the first exposure time is 40 milliseconds and the second exposure time is 10 milliseconds. In step 550, the first split pixels and the second split pixels of the at least one group of pixels across the whole pixel array are read out, preferably simultaneously. In step 560, the image produced by the first split pixels of the at least one group of pixels across the whole pixel array and the image produced by the second split pixels of the at least one group of pixels across the whole pixel array are combined to give the final image. The final image thus contains both the information of the darker parts of the image to be captured obtained by the first split pixels and the information of the brighter parts of the image to be captured obtained by the second split pixels. An optical dynamic range larger than that of the imaging device itself is therefore obtained in the final image.
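By way of illustration only, the flow of steps 510-560 can be sketched as follows; the expose() callable, the 1.0 V saturation level, the dynamic-range inputs and all names are assumptions made for the sketch and are not part of the utility model.

```python
import numpy as np

def capture_hdr(expose, scene_dr_db, sensor_dr_db, t_long=0.040, t_short=0.010):
    """Sketch of the imaging method of Fig. 5.

    expose(t) is assumed to return the pixel-array image (in volts) for an
    exposure time t, saturating at 1.0 V in this toy model."""
    # Steps 510/520: switch to the HDR mode only when the scene needs it
    if scene_dr_db <= sensor_dr_db:
        return expose(t_long)                 # normal-mode capture

    img_long = expose(t_long)                 # step 530: first split pixels
    img_short = expose(t_short)               # step 540: second split pixels

    # Steps 550/560: read out both images and combine them; where the long
    # exposure saturates, substitute the gained-up short exposure
    gain = t_long / t_short
    return np.where(img_long < 1.0, img_long, img_short * gain)

# Toy usage: a scene with one dim and one bright region
toy_expose = lambda t: np.clip(np.array([5.0, 60.0]) * t, 0.0, 1.0)
print(capture_hdr(toy_expose, scene_dr_db=80, sensor_dr_db=60))  # [0.2 2.4]
```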
Fig. 6 is a schematic diagram of the pixel region structure according to an embodiment of the utility model. As shown in the figure, the pixel region of the pixel array 600 comprises a microlens layer 610, a color filter layer 620, an interconnect layer 630 and a semiconductor layer 640. The microlens layer 610 is the outermost layer of the pixel region and exemplarily comprises the microlenses 611-613. As shown in the figure, each microlens focuses the external light onto the corresponding photodiode. The color filter layer 620, below the microlens layer 610, filters out the light of colors other than a particular color, so that the pixel is sensitive only to light of a certain selected color; exemplarily, it comprises the three kinds of filters R, G and B. The semiconductor layer 640 comprises a P-type substrate and the photodiodes formed on the P-type substrate. The interconnect layer 630 carries the interconnect metal routing that realizes the signal transmission and exemplarily comprises the metal routing 633. According to an embodiment of the utility model, the transistor gates 636 are also arranged in the interconnect layer 630.
Further, when implementing the split-pixel technical solution of the utility model, the following two issues deserve attention. First, since non-square split pixels are used, the corner effect at the periphery of a non-square pixel may affect the dark current. Moreover, if the microlens is to focus the light onto the photodiode, the microlens must also be rectangular. A rectangular microlens increases the processing difficulty: (1) during the reflow process (re-flow process), a merge effect tends to occur along the long side; (2) the curvature of the long side and of the short side differs, the short side being somewhat more curved than the long side. Second, because the two or more split pixels obtained by splitting are very close to each other, the blooming phenomenon between split pixels can produce severe interference and affect the image quality. The blooming is especially pronounced when the exposure times differ.
Fig. 7 is a structural schematic of the split pixels according to an embodiment of the utility model. As shown in the figure, an exemplary pixel 701 in the pixel array 700 is split into two rectangular split pixels 702 and 703, one above the other. Taking pixel 702 as an example, it comprises a microlens 704 and a photodiode 706 below the microlens 704. Referring also to Fig. 3, the pixel 702 further comprises a transfer gate 712 and a reset gate 714. The routing 722 of the various control signals is connected to the transfer gate 712 and to the reset gate 714, respectively. Since this part has been described in detail in connection with Figs. 1-3, it is only indicated by the routing 722 in Fig. 7 and is not repeated here. Similarly, the pixel 703 comprises a microlens 705 and a photodiode 707 below the microlens 705. Referring also to Fig. 3, the pixel 703 further comprises a transfer gate 713 and a reset gate 715. The routing 724 of the various control signals is connected to the transfer gate 713 and to the reset gate 715, respectively; likewise it is only indicated by the routing 724 in Fig. 7 and is not repeated here.
Fig. 7 shows a pixel layout optimized for the process according to an embodiment of the utility model. The microlens of the pixel 701 and the photodiode below the microlens are placed towards one side of the rectangular split-pixel layout, while the various transistors and signal connections are placed towards the other side of the rectangular split-pixel layout. A region (indicated by the reference d in the figure) is thereby reserved between the edge of the microlens and the edge of the other side of the rectangular pixel. Viewed across the whole pixel array, a "gap" appears between the split pixels. From the standpoint of pixel array design for an imaging device, such "gaps" should normally be avoided as far as possible. However, by reserving this "gap", this embodiment of the utility model effectively solves the light-focusing problem caused by the difference in curvature between the long side and the short side of the microlens. At the same time, this also makes the processing of the microlens easier and reduces its processing cost.
Fig. 8 is a pixel structure schematic according to an embodiment of the utility model. In particular, Fig. 8 shows in detail, for example, the structure of the semiconductor layer shown in Fig. 6. In addition, Fig. 8 details the measures taken in an embodiment of the utility model to prevent electron blooming. As shown in the figure, the pixel array 800 comprises two split pixels 810 and 820. The figure shows the PN junctions of the photodiodes of the two split pixels 810 and 820. When a photodiode is exposed to light, electrons accumulate in the photodiode. Due to thermal motion or other causes, some of the electrons can escape from the photodiode into the substrate and become blooming electrons. In particular, in the utility model the exposure times of the two split pixels may differ. For example, split pixel 810 is the one of the two split pixels with the long exposure and split pixel 820 is the one with the short exposure. Because the exposure time of split pixel 810 is longer, the number of photo-generated electrons in its photodiode is far larger than in split pixel 820. Some of the electrons in split pixel 810 can therefore escape into split pixel 820 and give rise to erroneous photo-generated charge in split pixel 820.
To avoid the above situation, a very important aspect is effective isolation between the split pixels. As shown in Fig. 8, according to an embodiment of the utility model, the following three approaches are used for effective isolation.
(1) P-well
The P-well forms an electron barrier that stops blooming electrons. The P-well is effective against both surface blooming electrons and mid-depth blooming electrons. As shown in Fig. 8, between the split pixels 810 and 820 there is a P-well of width L2, and within the P-well there is a shallow trench isolation (STI) of width L1. The width dimensions of the STI and the P-well are extremely important. The STI width should not be too narrow, otherwise the formation and the depth of the STI are affected, which in turn affects the turn-on characteristics of the field-oxide (field oxide) transistor and causes logic errors. The STI width should also not be too wide: if it comes too close to the edge of the P-well, a very high dark current results. This is because in the STI edge region the silicon crystal lattice is more distorted and defects are more concentrated; the defects cause the generation of electron-hole pairs, which leads to a relatively large current flowing even in the absence of light. Likewise, the width of the P-well should not be too narrow, otherwise a rectangular profile is formed during the drive-in, which affects the implant depth of the P-well. The P-well should also not be too wide, or the area of the photodiode would be reduced and the sensitivity of the pixel affected.
According to an embodiment of the utility model, in the layout design and the process of the split pixels, the relevant dimensions include the following (based on a 0.11 µm CMOS process):
Width L1 of the shallow trench isolation: approximately 0.1-0.3 µm; width L2 of the P-well: approximately 0.25-0.55 µm; depth of the P-well: 2-5 µm; distance between two adjacent split pixels: approximately 0.25 µm; distance between the groups of split pixels: approximately 0.5 µm.
According to an embodiment of the utility model, three P-well implants (pwell1, pwell2, pwell3) are used in order to balance the conflict between the implant depth and the width of the P-well at the surface. According to an embodiment of the utility model, the method of forming the P-well comprises the following steps:
1. Application of the DUV photoresist
The DUV photoresist is coated on the wafer surface. The pattern used is determined by the P-well mask and mainly covers the areas outside the pixel array region.
2. Inspection of the photoresist and the mask
The photoresist image is inspected for flaws, and the alignment between the photolithography mask and the wafer is checked.
3. P-well implant 1
This is the first of the three P-well ion implants. Boron ion implantation is used. The implant energy is medium, approximately 150-260 keV, for example 200 keV. The energy is sufficient for the ions to penetrate the oxide layer on the wafer surface and the thickness of the STI and to reach the region below (for example 1-2 µm).
4. P-well implant 2
This is the second of the three P-well ion implants. A higher energy is used so that the implanted boron ions reach a substrate region deeper than that of the first implant (for example 2-3 µm). The energy used this time is approximately 300-400 keV, for example 350 keV. The energy likewise suffices to penetrate the oxide layer on the wafer surface and the thickness of the STI.
5. P-well implant 3
This is the third of the three P-well ion implants. The highest energy is used, for example 500 keV. The implanted boron ions penetrate the oxide layer on the wafer surface and the thickness of the STI and reach the deep silicon region D1 below (for example 3-4 µm).
6. Photoresist removal and surface cleaning
After the ion implantation is finished, the wafer is put into an oxygen plasma chamber to strip the used photoresist, and the photoresist remaining on the wafer surface is cleaned off.
The triple P-well implant not only increases the implant depth but also forms a more uniform P-well distribution, and therefore isolates two adjacent split pixels more effectively.
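For reference, the three-step implant recipe described above can be summarized as a small data structure; the energies and example target depths are those given in the text, and the structure itself is only an illustrative sketch.

```python
# Triple P-well implant recipe described above (boron implants,
# based on a 0.11 um CMOS process; values as given in the text)
P_WELL_IMPLANTS = [
    {"step": "pwell1", "species": "boron", "energy_keV": "150-260 (e.g. 200)", "target_depth_um": "1-2"},
    {"step": "pwell2", "species": "boron", "energy_keV": "300-400 (e.g. 350)", "target_depth_um": "2-3"},
    {"step": "pwell3", "species": "boron", "energy_keV": "about 500",          "target_depth_um": "3-4"},
]

for implant in P_WELL_IMPLANTS:
    print(f'{implant["step"]}: {implant["species"]}, {implant["energy_keV"]} keV, '
          f'target depth {implant["target_depth_um"]} um')
```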
(2) Lateral overflow drain (LOD)
As shown in Fig. 8, a lateral overflow drain (LOD) region 841, 842 is formed on the outer side of each of the split pixels 810 and 820. The LOD region is kept under a high bias voltage throughout the pixel integration (exposure) period. This forms a potential well for electrons, which collects the blooming electrons generated at the surface.
(3) N-tub collection technique
For blooming electrons deep in the substrate, the above methods become less effective. As shown in Fig. 8, the utility model uses the N-tub technique to solve this problem: the blooming electrons are collected by an N-tub collection region formed deep in the substrate.
The N-tub can be formed in many ways. For example, a blanket implant can be performed over the whole wafer during wafer preparation, using a relatively high energy to reach a certain depth. Alternatively, an N-type substrate material can be used, on which P-Epi is then grown. Or, a layer of N is implanted into a wafer with a P-type substrate, the wafer is polished back to the N layer, and a layer of P-Epi is then grown on it. Because the N-tub is permanently biased at a high potential, it also forms a potential well for electrons, which is used to collect the blooming electrons at the deep level.
According to an embodiment of the utility model, the method of forming the N-tub comprises the following steps:
1. Application of the DUV photoresist
The DUV photoresist is coated on the wafer surface. The pattern used is determined by the mask and mainly covers the areas outside the pixel array region.
2. Inspection of the photoresist and the mask
The photoresist image is inspected for flaws, and the alignment between the photolithography mask and the wafer is checked.
3. N-tub ion implantation
A very high implant energy (for example about 1 MeV) is used to drive phosphorus ions into a deeper region D2 of the wafer, for example 4-5 µm deep. The implant dose in this step is small, approximately 1×10^12 cm^-2.
4. Photoresist removal and surface cleaning
After the ion implantation is finished, the wafer is put into an oxygen plasma chamber to strip the used photoresist, and the photoresist remaining on the wafer surface is cleaned off.
The characteristic of the N-tub ion implantation is that a larger implant energy is used to increase the implant depth, so that the photoelectric characteristics of the photodiode above are not affected. Phosphorus ions rather than arsenic ions are used because phosphorus ions are smaller: they can not only be implanted deeper into the silicon body, but also do not cause physical ion-collision damage to the structures above.
Fig. 9 is a circuit diagram of a pixel array according to an embodiment of the utility model. As shown in Fig. 9, each split pixel group comprises 2 split pixels. In order to save circuit area, the two split pixels can adopt a transistor-sharing (transistor sharing) structure. For example, the two split pixels can share one group of pixel readout circuits, the shared elements including but not limited to the reset transistor, the source follower transistor and the row select transistor. Transistor sharing maximizes the photosensitive area of the pixels and improves the image quality. Fig. 10 is the timing diagram in the high-resolution mode. Fig. 11 is the timing diagram in the high-sensitivity mode. Fig. 12 is the timing diagram in the high optical dynamic range mode. From the circuit connections of Fig. 9, and with reference to the timing diagrams of Figs. 10-12, it can be understood that the imaging device of the utility model can switch freely between the high-resolution, high-sensitivity and high optical dynamic range modes.
As shown in Fig. 10, a pulse is first applied on the RowSel line to select the row. A pulse is applied on the RST line to reset the storage region, for example the storage region 21 in Fig. 2. Next, a pulse is applied on the SHR line to sample the reset storage region and produce the Vrst signal. At the same time a pulse signal is applied on the TxA line to transfer the charge on the photosensitive element of the split pixel (such as the photodiode 202 in Fig. 2) to its respective storage region. A pulse signal is then applied on the SHS line to sample the charge stored in the storage region of the split pixel and produce the Vsig signal. Next, while the RST line is high, a pulse is applied on the TxA line to reset the photosensitive element of the respective split pixel. The photosensitive element begins to accumulate charge after the reset. The TxB line is completely independent of the TxA line. In the high resolution mode (HR), all split pixel units are read out in turn. Because the number of split pixels, in particular of the green split pixels, is twice the original number, the resolution of the whole image is effectively increased.
As shown in Fig. 11, a pulse is first applied on the RowSel line to select the row. A pulse is applied on the RST line to reset the storage region, for example the storage region 21 in Fig. 2. Next, a pulse is applied on the SHR line to sample the reset storage region and produce the Vrst signal. At the same time a pulse signal is applied on both the TxA and TxB lines to transfer the charge on the photosensitive elements of the split pixels (such as the photodiode 202 in Fig. 2) to their respective storage regions. A pulse signal is then applied on the SHS line to sample the charge stored in the storage regions of the split pixels and produce the Vsig signal.
Next, while the RST line is high, a pulse is applied simultaneously on the TxA and TxB lines to reset the photosensitive elements of the respective split pixels. The photosensitive elements begin to accumulate charge after the reset. In the high sensitivity mode (HS), the long-exposure and short-exposure split pixel units use the same exposure value. The signals of the two split pixels are transferred simultaneously (TxA and TxB open at the same time) and are directly summed in the charge domain in the floating diffusion (floating diffusion) region, so that the low-light sensitivity of the whole image is doubled.
As shown in Fig. 12, a pulse is first applied on the RowSel line to select the row. A pulse is applied on the RST line to reset the storage region, for example the storage region 21 in Fig. 2. Next, a pulse is applied on the SHR line to sample the reset storage region and produce the Vrst signal. At the same time a pulse signal is applied on the TxA and TxB lines to transfer the charge on the photosensitive elements of the split pixels (such as the photodiode 202 in Fig. 2) to their respective storage regions. A pulse signal is then applied on the SHS line to sample the charge stored in the storage regions of the split pixels and produce the Vsig signal.
Next, while the RST line is high, a pulse is applied on the TxA line to reset the photosensitive element of split pixel A. At a different moment, while the same RST line is high, another pulse is applied on the TxB line to reset the photosensitive element of split pixel B. The photosensitive elements begin to accumulate charge after the reset. Because split pixels A and B begin to accumulate charge at different moments and, as described above, are sampled almost simultaneously, they have different charge accumulation times and thus different exposure times.
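The resulting exposure times can be expressed simply; the sketch below assumes a common sampling instant and per-pixel reset instants, and all numerical values are illustrative only.

```python
def exposure_times(t_reset_a, t_reset_b, t_sample):
    """Effective exposure times under the HDR timing of Fig. 12: both split
    pixels are sampled at (almost) the same instant, but their photodiodes
    are reset at different instants, so their integration windows differ."""
    return t_sample - t_reset_a, t_sample - t_reset_b

# Illustrative values in milliseconds, giving a 4:1 exposure ratio
print(exposure_times(t_reset_a=0.0, t_reset_b=30.0, t_sample=40.0))  # (40.0, 10.0)
```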
Fig. 13 shows, according to an embodiment of the utility model, a method of combining the images of two exposures of the split pixels, in which the first split pixel and the second split pixel have different exposure times, the first split pixel is read out to give a first output voltage, and the second split pixel is read out to give a second output voltage. In this embodiment, the first output voltage produced by the first split pixel and the second output voltage produced by the second split pixel are combined to give the final output voltage. As shown in Fig. 13, in step 1320 the first output voltage V1 of the first split pixel is first read out. The read first output voltage V1 can be held in memory 1. In step 1340, the first output voltage V1 is amplified by a predetermined factor, the factor being the ratio of the exposure time of the second pixel to that of the first pixel. For example, if the exposure time of the second pixel is twice that of the first pixel, the amplification factor is 2; the amplification factor can also be smaller than 1. In step 1350, it is determined whether the amplified first output voltage V1 exceeds a predetermined threshold, the threshold being less than or equal to the saturation voltage. In step 1360, if the amplified first output voltage V1 is greater than the threshold, V1 is discarded and the second output voltage V2 of the second pixel is read out and kept. In step 1370, if the amplified first output voltage V1 is less than the threshold, the second output voltage V2 of the second pixel is discarded and the first output voltage V1 of the first pixel is kept. In step 1380, the kept voltage is output as the final combined voltage.
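A minimal sketch of the per-pixel combination, following the selection rule described below for Figs. 14a-14c (keep the long-exposure value where it is below saturation, otherwise use the gained-up short-exposure value). The mapping of the long and short exposures onto the "first" and "second" split pixels, and all names, are illustrative assumptions.

```python
def combine_two(v_long, v_short, exposure_ratio, v_sat):
    """Combine one long-exposure and one short-exposure split-pixel reading.

    exposure_ratio = t_long / t_short (2 in the example of Fig. 14). The
    result stays on the long-exposure scale and saturates only at
    exposure_ratio * v_sat instead of v_sat."""
    if v_long < v_sat:                  # long exposure not yet saturated: keep it
        return v_long
    return v_short * exposure_ratio    # otherwise use the gained-up short exposure

# Example with Vsat = 1.0 V and a 2:1 exposure ratio
print(combine_two(v_long=0.6, v_short=0.3, exposure_ratio=2.0, v_sat=1.0))  # 0.6
print(combine_two(v_long=1.0, v_short=0.8, exposure_ratio=2.0, v_sat=1.0))  # 1.6
```

With a 2:1 exposure ratio this raises the effective saturation level from Vsat to 2×Vsat, which corresponds to the approximately 6 dB increase calculated below.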
Figs. 14a-14c are schematic diagrams of the combination algorithm of the embodiment of Fig. 13, in which the circled plus sign "⊕" is the HDR combination operator used to denote the combination of different values. In this embodiment the two split pixels are assumed to be split pixel A and split pixel B, the exposure time of split pixel A being twice that of split pixel B. Fig. 14a shows the respective response curves of split pixel A and split pixel B. Fig. 14b shows the respective response curves of split pixel A and split pixel B after the data of pixel B have been multiplied by 2 in the processing. Fig. 14c shows the response curve after the output data have been compared and selected: if the data are below the Vsat value (the saturation voltage of the split pixel), the output of pixel A is used; if the data are above the Vsat value, the output of pixel B multiplied by 2 is used. The final combined curve remains a straight line, and the saturation voltage of the final overall response curve effectively rises from the previous Vsat to 2×Vsat. The increase in dynamic range of the combined curve compared with a single exposure time is given by:
ΔDR = 20·log10(t_PixelA / t_PixelB) = 20·log10(2/1) ≈ 6 dB
Fig. 15 is a circuit diagram of a pixel array according to an embodiment of the utility model. As shown in Fig. 15, each split pixel group comprises 4 split pixels, pixels A-D. In order to save circuit area, the 4 split pixels can adopt a transistor-sharing (transistor sharing) structure. For example, the 4 split pixels can share one group of pixel readout circuits, the shared elements including but not limited to the reset transistor, the source follower transistor and the row select transistor. Transistor sharing maximizes the photosensitive area of the pixels and improves the image quality. Fig. 16 shows, according to an embodiment of the utility model, a method of combining the images of four exposures of the split pixels, in which the first, second, third and fourth split pixels have different exposure times, the first split pixel is read out to give a first output voltage, the second split pixel is read out to give a second output voltage, the third split pixel is read out to give a third output voltage, and the fourth split pixel is read out to give a fourth output voltage. In this embodiment, the first and second split pixels are first combined and, at the same time, the third and fourth split pixels are combined; the result of combining the first and second split pixels and the result of combining the third and fourth split pixels are then combined with each other to give the final output voltage. Each combination proceeds in a manner similar to that described for the embodiment of Fig. 13.
As shown in Fig. 16, in step 1602 the first output voltage V1 of the first split pixel is first read out. The read first output voltage V1 can be held in memory 1. In step 1604, the first output voltage V1 is amplified by a predetermined factor, the factor being the ratio of the exposure time of the second split pixel to that of the first split pixel. In step 1605, it is determined whether the amplified first output voltage V1 exceeds a predetermined threshold, the threshold being less than or equal to the saturation voltage. In step 1606, if the amplified first output voltage V1 is greater than the threshold, V1 is discarded and the second output voltage V2 of the second split pixel is read out and kept. In step 1607, if the amplified first output voltage V1 is less than the threshold, the second output voltage V2 of the second split pixel is discarded and the first output voltage V1 of the first split pixel is kept. In step 1608, the kept voltage is output as the result of the combination, i.e. the first result voltage.
In step 1620, the third output voltage V3 of the third split pixel is read out. The read third output voltage V3 can be held in memory 2. In step 1640, the third output voltage V3 is amplified by a predetermined factor, the factor being the ratio of the exposure time of the fourth split pixel to that of the third split pixel. In step 1650, it is determined whether the amplified third output voltage V3 exceeds the predetermined threshold, the threshold being less than or equal to the saturation voltage. In step 1660, if the amplified third output voltage V3 is greater than the threshold, V3 is discarded and the fourth output voltage V4 of the fourth split pixel is read out and kept. In step 1670, if the amplified third output voltage V3 is less than the threshold, the fourth output voltage V4 of the fourth split pixel is discarded and the third output voltage V3 of the third split pixel is kept. In step 1680, the kept voltage is output as the result of the combination, i.e. the second result voltage.
Next, the first result voltage and the second result voltage are combined. In step 1690, the first result voltage is amplified by a predetermined factor, the factor being the product of the ratio of the exposure time of the second split pixel to that of the first split pixel and the ratio of the exposure time of the fourth split pixel to that of the third split pixel. In step 1691, it is determined whether the amplified first result voltage exceeds a predetermined threshold; this threshold is usually determined by multiplying the saturation voltage by the average of the ratio of the exposure time of the second split pixel to that of the first split pixel and the ratio of the exposure time of the fourth split pixel to that of the third split pixel. If the amplified first result voltage is greater than the threshold, the first result voltage is discarded and the second result voltage is kept. In step 1693, if the amplified first result voltage is less than the threshold, the second result voltage is discarded and the first result voltage is kept. In step 1680, the kept voltage is output as the combined result. Usually the ratio of the exposure time of the second split pixel to that of the first split pixel and the ratio of the exposure time of the fourth split pixel to that of the third split pixel are taken to be equal, for example a positive integer n, with n = 2, 4, 6 or 8. Thus, when the first result voltage and the second result voltage are combined, the amplification factor of the kept value is n² and the threshold is n multiplied by the saturation voltage.
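A sketch of the cascaded four-exposure combination, consistent with the response curves of Figs. 17a-17c described below and assuming exposure times in the ratio n:1 between neighbouring split pixels (n = 2 in the example); the function names and the test values are illustrative assumptions.

```python
def combine_two(v_long, v_short, ratio, v_sat):
    # same selection rule as in the two-exposure sketch above
    return v_long if v_long < v_sat else v_short * ratio

def combine_four(v_a, v_b, v_c, v_d, n, v_sat):
    """Cascaded combination of Fig. 16: A&B and C&D are combined first, then
    the two results are combined with gain n*n and threshold n*v_sat."""
    r1 = combine_two(v_a, v_b, n, v_sat)    # first result voltage (A/B scale)
    r2 = combine_two(v_c, v_d, n, v_sat)    # second result voltage (C/D scale)
    if r1 < n * v_sat:                      # first result not yet saturated
        return r1
    return r2 * n * n                       # otherwise the gained-up second result

# Example with n = 2 and Vsat = 1.0 V: a bright region where A and B saturate
print(combine_four(v_a=1.0, v_b=1.0, v_c=0.9, v_d=0.45, n=2.0, v_sat=1.0))  # 3.6
```

On this scheme the effective saturation level rises from Vsat to n³·Vsat, i.e. 8×Vsat for n = 2, in agreement with the approximately 18 dB increase calculated below.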
Figs. 17a-17c are schematic diagrams of the combination algorithm of the embodiment of Fig. 16, in which the circled plus sign "⊕" is the HDR combination operator used to denote the combination of different values. In this embodiment the four split pixels are assumed to be split pixels A-D, the exposure time of split pixel A being twice that of split pixel B, and so on. Fig. 17a shows the respective response curves of split pixels A-D. Fig. 17b shows the response curves obtained after the data of pixels A and B, and of pixels C and D, have been combined respectively in the processing. Fig. 17c shows the response curve obtained when the combined data of pixels A and B and the combined data of pixels C and D are then combined with each other. The final combined curve remains a straight line. The saturation voltage of the final overall response curve effectively rises from the previous Vsat to 8×Vsat. The increase in dynamic range of the combined curve compared with a single exposure time is given by:
ΔDR = 20·log10(t_PixelA / t_PixelD) = 20·log10(8/1) ≈ 18 dB
Fig. 18 is a schematic diagram of a system according to an embodiment of the utility model. Fig. 18 illustrates a processor system 1800 comprising an image sensor 1810, the image sensor 1810 being an image sensor as described in the utility model. The processor system 1800 exemplifies a system having digital circuits that can include an image sensor device. Without limitation, such a system can include a computer system, a camera system, a scanner, a machine vision system, a vehicle navigation system, a video phone, a surveillance system, an autofocus system, a star tracker system, a motion detection system, an image stabilization system and a data compression system.
The processor system 1800 (for example a camera system) generally includes a central processing unit (CPU) 1840 (for example a microprocessor), which communicates with an input/output (I/O) device 1820 via a bus 1801. The image sensor 1810 also communicates with the CPU 1840 via the bus 1801. The processor-based system 1800 further includes a random access memory (RAM) 1830 and can include a removable memory 1850 (for example a flash memory), which also communicates with the CPU 1840 via the bus 1801. The image sensor 1810 can be combined with a processor (for example a CPU, a digital signal processor or a microprocessor) on a single integrated circuit or on a chip separate from the processor, with or without a memory storage device. The computations for the image combination and processing can be performed by the image sensor 1810 or by the CPU 1840.
The technical content and technical features of the utility model are disclosed as above; however, a person of ordinary skill in the art may still make various substitutions and modifications based on the teaching and disclosure of the utility model without departing from the spirit of the utility model. Therefore, the protection scope of the utility model should not be limited to the content disclosed in the embodiments, but should include the various substitutions and modifications that do not depart from the utility model, as covered by the appended claims.

Claims (10)

1. An imaging device, characterized in that it comprises:
a pixel array comprising a plurality of pixels arranged in rows and columns, the pixel array comprising at least one group of split pixels, the split pixels having the same color and being adjacent to one another; and
a control circuit that controls the pixel array, the control circuit controlling the exposure time of each split pixel in the at least one group of split pixels.
2. The imaging device according to claim 1, wherein the split pixels are rectangular.
3. The imaging device according to claim 1, wherein the pixel array comprises an adjacent first group of split pixels and second group of split pixels, and the distance between two adjacent split pixels is smaller than the distance between the first group of split pixels and the second group of split pixels.
4. The imaging device according to claim 1, wherein each split pixel comprises a microlens and a photodiode, the microlens and the photodiode being arranged offset towards one side of the split pixel.
5. The imaging device according to claim 1, wherein each split pixel comprises a microlens layer, a color filter layer, an interconnect layer and a semiconductor layer, the color filter layer being between the microlens layer and the interconnect layer, and the semiconductor layer being under the interconnect layer.
6. The imaging device according to claim 5, wherein each split pixel comprises a photodiode in the semiconductor layer, and a P-well and a shallow trench isolation are present between two adjacent split pixels.
7. The imaging device according to claim 6, wherein the P-well is formed by three P-well implants, the energies of the three implants being approximately 150-260 keV, approximately 300-400 keV and approximately 500 keV, respectively.
8. The imaging device according to claim 7, wherein the width of the shallow trench isolation is approximately 0.1-0.3 µm, the width of the P-well is approximately 0.25-0.55 µm, and the depth of the P-well is 2-5 µm.
9. An imaging device, characterized in that it comprises:
a pixel array comprising a plurality of pixels arranged in rows and columns, the pixel array comprising at least one group of split pixels; and
a control circuit that controls the pixel array; wherein the control circuit exposes a first split pixel of the at least one group of split pixels within a first exposure time to produce a first image, and exposes a second split pixel of the at least one group of split pixels within a second exposure time to produce a second image; and wherein the control circuit further reads out the first image and the second image.
10. An imaging device, characterized in that it comprises:
a pixel array comprising a plurality of pixels arranged in rows and columns, the pixel array comprising at least one group of split pixels;
a control circuit that controls the pixel array, wherein the control circuit exposes a first split pixel of the at least one group of split pixels within a first exposure time to produce a first image, and exposes a second split pixel of the at least one group of split pixels within a second exposure time to produce a second image, the first exposure time being different from the second exposure time, and wherein the control circuit further reads out the first image and the second image; and
an image processor that combines the first image and the second image.
CN201220117378.6U 2012-03-26 2012-03-26 Imaging apparatus Expired - Lifetime CN202713478U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201220117378.6U CN202713478U (en) 2012-03-26 2012-03-26 Imaging apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201220117378.6U CN202713478U (en) 2012-03-26 2012-03-26 Imaging apparatus

Publications (1)

Publication Number Publication Date
CN202713478U true CN202713478U (en) 2013-01-30

Family

ID=47593732

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201220117378.6U Expired - Lifetime CN202713478U (en) 2012-03-26 2012-03-26 Imaging apparatus

Country Status (1)

Country Link
CN (1) CN202713478U (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103369253A (en) * 2012-03-26 2013-10-23 徐辰 Imaging device and imaging method
CN103369253B (en) * 2012-03-26 2017-02-08 江苏思特威电子科技有限公司 Imaging device and imaging method
CN104010128A (en) * 2013-02-20 2014-08-27 佳能株式会社 Image capturing apparatus and method for controlling the same
CN104010128B (en) * 2013-02-20 2017-05-17 佳能株式会社 Image capturing apparatus and method for controlling the same
CN104144305A (en) * 2013-05-10 2014-11-12 江苏思特威电子科技有限公司 Dual-conversion gain imaging device and imaging method thereof
CN104144305B (en) * 2013-05-10 2017-08-11 江苏思特威电子科技有限公司 Dual conversion gain imaging device and its imaging method
CN105245796A (en) * 2014-07-01 2016-01-13 晶相光电股份有限公司 Sensor and sensing method
CN105245796B (en) * 2014-07-01 2018-07-20 晶相光电股份有限公司 Sensor and sensing method
WO2021036850A1 (en) * 2019-08-26 2021-03-04 Oppo广东移动通信有限公司 Image sensor, image processing method, and storage medium
CN113709382A (en) * 2019-08-26 2021-11-26 Oppo广东移动通信有限公司 Image sensor, image processing method and storage medium
CN113709382B (en) * 2019-08-26 2024-05-31 Oppo广东移动通信有限公司 Image sensor, image processing method and storage medium


Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: JIANGSU SMARTSENS TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: XU CHEN

Effective date: 20130814

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20130814

Address after: 301 room 5, building 215513, Changshou City economic and Technological Development Zone, Jiangsu, China

Patentee after: JIANGSU SMARTSENS TECHNOLOGY, LTD.

Address before: Four road 215513 Jiangsu Province along the Yangtze River in Changshou City Development Zone No. 11 Branch Park Room 301

Patentee before: Xu Chen