CN112335049A - Imaging assembly, touch screen, camera shooting module, intelligent terminal, camera and distance measuring method - Google Patents


Info

Publication number
CN112335049A
CN112335049A
Authority
CN
China
Prior art keywords
light
spacer
light guide
photoelectric converter
touch screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201880095111.1A
Other languages
Chinese (zh)
Other versions
CN112335049B (en)
Inventor
陈振宇
周凯伦
蒋伟杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Sunny Opotech Co Ltd
Original Assignee
Ningbo Sunny Opotech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Sunny Opotech Co Ltd filed Critical Ningbo Sunny Opotech Co Ltd
Publication of CN112335049A publication Critical patent/CN112335049A/en
Application granted granted Critical
Publication of CN112335049B publication Critical patent/CN112335049B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01L SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L27/00 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L27/14 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Power Engineering (AREA)
  • Electromagnetism (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Studio Devices (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

An imaging assembly includes a spacer and a photoelectric converter. The spacer is opaque and forms a light guide channel therein. The photoelectric converter is parallel to and spaced apart from the spacer, and is positioned corresponding to the light guide channel. The light emitted by the object to be imaged reaches the photoelectric converter after passing through the light guide channel. The application also provides a method for manufacturing the imaging assembly, a touch screen, a camera module, an intelligent terminal, a multi-view depth camera, a light field camera and a distance measuring method.

Description

Imaging assembly, touch screen, camera module, intelligent terminal, camera and distance measuring method
Technical Field
The present application relates to imaging assemblies, and more particularly to a photoelectric imaging assembly utilizing light-conducting channels for light confinement.
Background
With the development and popularization of mobile terminal devices, the imaging components that help users of those devices acquire images (e.g., photographs or video) have advanced rapidly. In recent years, imaging components have also been widely applied in fields such as medical treatment, security, and industrial production.
One important trend in mobile terminal devices is their ever-decreasing size. To meet increasingly broad market demands, small size combined with a large aperture has become an irreversible development trend for camera modules. In addition, the market demands ever-higher imaging quality from the camera module.
Given these ever-tighter size requirements, the structure of the conventional imaging device can no longer satisfy consumer demand for smaller electronic products.
Specifically, conventional image pickup apparatuses mostly employ a lens imaging system. In a lens imaging system, problems such as aberration and loss of brightness inevitably arise after light passes through the lens.
In addition, the lens imaging system has a complex structure; further size reduction inevitably raises cost, and the demand for thinner electronic products cannot be met.
Furthermore, if the lens imaging system includes a plurality of lenses and lens barrels, the manufacturing tolerances of the individual components accumulate during assembly, and the assembly process itself introduces assembly tolerances. These tolerances limit further improvements in lens performance.
In conventional lens imaging optical systems, the maximum effective size of the chip (i.e., the area of the chip that can be illuminated) is limited by the aperture size of the lens, and the room to enlarge the lens aperture in an optical design is very limited.
Disclosure of Invention
The present invention aims to provide a solution that overcomes at least one of the above-mentioned drawbacks of the prior art.
According to an aspect of the present invention, there is provided an imaging assembly, which may include:
a spacer opaque to light and having at least one light guide channel formed therein; and
at least one photoelectric converter, which may be parallel to and spaced apart from the spacer and arranged in one-to-one correspondence with the light guide channels, so that light emitted by an object to be imaged can reach the photoelectric converters after passing through the light guide channels.
Wherein the spacer may form a plurality of light guide channels, and the light guide channels may form a light guide channel array in the spacer.
Wherein the size of the light guide channel may be set to 800 nm or more.
Wherein the size of the light guide channel may be set such that light of specific wavelengths among the passing light is diffracted, splitting the light so that light of a specific waveband reaches a predetermined photoelectric converter.
Wherein the spacer may be made of a light absorbing material.
Wherein the photoelectric converter may receive all the light from the corresponding light guide channel, and the light of the corresponding light guide channel may illuminate the entire light receiving surface of the corresponding photoelectric converter.
Wherein the spacer may be coated with a light blocking layer.
Wherein, the light-blocking layer can be a diffuse reflection coating or a light absorption coating.
According to one aspect of the present application, a method of making an imaging assembly is also provided. The method may include the following steps:
forming at least one light guide channel in an opaque spacer; and
arranging at least one photoelectric converter parallel to and spaced apart from the spacer, in one-to-one correspondence with the light guide channels, so that light emitted by an object to be imaged can reach the photoelectric converters after passing through the light guide channels.
According to an aspect of the present application, a touch screen is also provided. The touch screen may include:
a spacer opaque to light and having at least one light guide channel formed therein; and
at least one photoelectric converter, which may be parallel to and spaced apart from the spacer and arranged in one-to-one correspondence with the light guide channels, so that light emitted by an object to be imaged reaches the photoelectric converters after passing through the light guide channels; and
a streamer member, which may be positioned above the spacer and may include:
a streamer body, which may include a total reflection plate;
a light input part, located in the streamer body, which may emit light at an angle to the total reflection plate; and
a light output part for outputting light,
wherein the light emitted by the light input part may be totally reflected within the streamer body and output from the light output part.
According to one aspect of the present application, a touch screen is provided. The touch screen may include:
a spacer opaque to light and having at least one light guide channel formed therein; and
at least one photoelectric converter, which may be parallel to and spaced apart from the spacer and arranged in one-to-one correspondence with the light guide channels, so that light emitted by an object to be imaged reaches the photoelectric converters after passing through the light guide channels;
a transparent elastic mechanism, which may be positioned above the spacer; and
a light source, which may be located on the side of the spacer facing the transparent elastic mechanism and may emit light toward the transparent elastic mechanism.
Wherein, the transparent elastic mechanism can be a transparent film.
According to an aspect of the present application, a touch screen is also provided. The touch screen includes:
a spacer opaque to light and having at least one light guide channel formed therein; and
at least one photoelectric converter, parallel to and spaced apart from the spacer and arranged in one-to-one correspondence with the light guide channels, so that light emitted by an object to be imaged passes through the light guide channels and then reaches the photoelectric converters; and
a transparent elastic mechanism, which may be positioned above the spacer and in which an opaque blocking piece is arranged.
According to an aspect of the application, a camera module is also provided. The camera module may include the imaging assembly described above and a display screen, wherein the imaging assembly is located below the display screen.
The display screen is one of an OLED screen, an LCD screen and an LED screen.
Wherein the substrate in the OLED screen may form the spacer.
Wherein the cathode layer in the OLED screen may form the spacer.
Wherein the anode layer in the OLED screen may form the spacer.
Wherein an optical element for converging light rays may be arranged above each light guide channel in the spacer of the imaging assembly.
Wherein the optical element may be a convex lens.
Wherein a superlens for converging light rays may be arranged above each photoelectric converter of the imaging assembly.
Wherein, the light path turning element can be arranged above the light guide channel.
Wherein the optical path turning element may include a MEMS device and a mirror.
Wherein the camera module may be positioned on a substrate in the OLED screen, a driving member may be arranged on the substrate, and the driving member may adjust the distance between the photoelectric converter of the imaging assembly and the spacer.
Wherein the color filter in the LCD screen may be integrated as the color filter of the imaging assembly.
Wherein the aperture of the light guide channel may be set according to a specific wavelength.
According to an aspect of the application, an intelligent terminal is also provided. The intelligent terminal may include the camera module described above.
According to an aspect of the present application, there is also provided a method of distance measurement. The method may include the following steps:
forming a plurality of light guide channels in an opaque spacer;
arranging photoelectric converters parallel to and spaced apart from the spacer, in one-to-one correspondence with the light guide channels, so that light emitted by an object to be imaged can reach the photoelectric converters after passing through the light guide channels;
obtaining, from the electrical signals output by the photoelectric converters, a plurality of images of the object to be imaged formed through the plurality of light guide channels; and
calculating the distance to the object to be imaged from the degree of repetition among the plurality of images.
Wherein the degree of repetition may be the whole of, or a locally repeated pixel area of, the object to be photographed.
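For intuition, when only two channels are considered, the computation in the steps above resembles ordinary pinhole stereo triangulation. The sketch below is an illustrative analogy with invented names, not the patent's algorithm, which works from the degree of repetition among many sub-images:

```python
def distance_from_offset(baseline: float, channel_to_sensor: float,
                         image_offset: float) -> float:
    """Pinhole-pair triangulation: two light guide channels spaced
    `baseline` apart act like two pinhole cameras whose "focal length"
    is the channel-to-sensor distance; a feature repeated in the two
    sub-images with offset `image_offset` (in the same length units)
    implies an object distance Z = baseline * channel_to_sensor / offset."""
    return baseline * channel_to_sensor / image_offset

# Closer objects repeat with larger offsets between sub-images;
# halving the offset doubles the estimated distance.
near = distance_from_offset(2.0, 1.0, 0.5)
far = distance_from_offset(2.0, 1.0, 0.25)
```

In practice the patent's method measures overlap (repetition) across an array of sub-images rather than a single pairwise offset, but the inverse relation between repetition offset and distance is the same.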
According to an aspect of the present application, there is also provided a light field camera. The light field camera may have a microlens array, and may further include:
a spacer that is opaque and in which at least one light guide channel is formed; and
at least one photoelectric converter, which may be parallel to and spaced apart from the spacer and arranged in one-to-one correspondence with the light guide channels,
wherein the microlens array may be positioned between the spacer and the photoelectric converters, and light emitted by an object to be imaged reaches the photoelectric converters after passing through the light guide channels and the microlens array.
According to an aspect of the present application, there is also provided a light field camera. The light field camera may have a main lens, and may further include:
a spacer opaque to light and having at least one light guide channel formed therein; and
at least one photoelectric converter, parallel to and spaced apart from the spacer and in one-to-one correspondence with the light guide channels,
wherein the spacer may be positioned between the main lens and the photoelectric converters, and light emitted by an object to be imaged reaches the photoelectric converters after passing through the main lens and the light guide channels.
According to an aspect of the present application, there is also provided a multi-view depth camera. The multi-view depth camera may include:
a spacer that is opaque and in which a plurality of light guide channels are formed; and
photoelectric converters, which may be parallel to and spaced apart from the spacer and arranged in one-to-one correspondence with the light guide channels, so that light emitted by an object to be imaged can reach the photoelectric converters after passing through the light guide channels;
wherein the central axes of the light guide channels may be staggered with respect to each other.
According to an aspect of the present application, there is also provided a pixel color filter array member. The pixel color filter array member may include:
a substrate;
a dielectric layer attached to the substrate; and
a plurality of pixel color filters, attached to the dielectric layer and forming an array.
The dielectric layer is one of a photoelectric converter and a display screen.
According to an aspect of the present application, there is also provided a method of forming a pixel color filter array member. The method of forming a pixel color filter array member may include the steps of:
arranging a substrate;
attaching a dielectric layer to the substrate;
arranging a first color filter array on a first carrier plate;
transferring the first color filter array from the first carrier plate to a second carrier plate by means of a transfer head to form a second color filter array, wherein the transfer head is repeatedly expanded during the transfer so that the gaps between the color filters match the gaps between the color filters in the second color filter array;
coating a transparent adhesive material on the substrate; and
and integrally bonding the second color filter array on the second carrier plate to the dielectric layer.
The dielectric layer is one of a photoelectric converter and a display screen.
Compared with the prior art, the invention has at least one of the following technical effects:
1. There is no aberration problem, and the brightness loss is smaller.
2. The size is smaller.
3. The structure is simple, and there are fewer assembly tolerance items.
4. By arranging light guide channels at intervals on the screen, the maximum effective size of the chip (the area of the chip that can be illuminated) can be increased by enlarging the distribution area of the light guide channels on the screen. The chip area is therefore not limited by the lens aperture size, and the adjustable range is large. The light guide channels and the imaging pixels are arranged at intervals in the projection direction and are not in the same horizontal plane. The number of pixels forming the chip multiplied by the pixel size equals the chip area; the pixel size is positively correlated with sensitivity, and the pixel count with resolution.
5. When used as a front camera, the small holes are arranged alternately with the imaging pixels of the display screen, improving the screen-to-body ratio of the intelligent terminal.
6. When used as a rear camera, the overall thickness of the mobile phone is reduced: the rear camera is the thickest item of the intelligent terminal, so reducing its thickness makes it possible to reduce the overall thickness of the terminal.
7. Because imaging does not involve a lens, close-range defocusing does not occur, and macro imaging can be realized.
Drawings
Exemplary embodiments are illustrated in the referenced figures of the drawings. The embodiments and figures disclosed herein are to be regarded as illustrative rather than restrictive.
Figures 1a to 1d show schematic views of an embodiment of an imaging assembly according to the invention;
FIG. 2 shows a detailed schematic diagram illustrating a single light-conducting channel in an embodiment of an imaging assembly according to the invention;
FIG. 3 shows a flow chart of a method of manufacturing an imaging assembly according to the invention;
[Corrected 16.10.2018 under Rule 26] FIGS. 4a to 4b show schematic views of an embodiment of a touch screen according to the invention;
FIG. 5 shows a schematic view of a streamer in an embodiment of a touch screen in accordance with the invention;
FIGS. 6 a-6 c show schematic diagrams of another embodiment of a touch screen according to the present invention;
FIGS. 7 a-7 b show schematic diagrams of another embodiment of a touch screen according to the present invention;
FIG. 8 shows a schematic view of an embodiment of a camera module according to the invention;
fig. 9a to 9b show schematic views of another embodiment of a camera module according to the invention;
fig. 10 shows a schematic view of the above-described embodiment of the camera module according to the invention;
FIG. 11 shows a schematic view of another embodiment of a camera module according to the invention;
FIG. 12 shows a schematic view of another embodiment of a camera module according to the invention;
FIG. 13 shows a schematic view of another embodiment of a camera module according to the invention;
FIG. 14 shows a flow chart of a method of distance measurement according to the invention;
FIGS. 15a to 15d show schematic views of an embodiment of a method of distance measurement according to the invention;
FIG. 16 shows a schematic view of another embodiment of a camera module according to the invention;
FIG. 17 shows a schematic diagram of a prior art light field camera;
FIGS. 18a to 18b show schematic diagrams of a prior art light field camera;
FIG. 19 shows a refocusing schematic of a prior art light field camera;
FIG. 20 shows a refocusing effect diagram for a prior art light field camera;
FIG. 21 shows a schematic diagram of an embodiment of a light field camera according to the present invention;
FIG. 22 shows a refocusing effect diagram for an embodiment of a light field camera according to the invention;
FIG. 23 shows a schematic view of another embodiment of a light field camera according to the present invention;
FIG. 24 shows a schematic diagram of an embodiment of a multi-view depth camera according to the present invention;
FIG. 25 shows a flow chart of a prior art photocopying process; and
FIG. 26 shows a flow chart of a pad printing process.
Detailed Description
For a better understanding of the present application, various aspects of the present application will be described in more detail with reference to the accompanying drawings. It should be understood that the detailed description is merely illustrative of exemplary embodiments of the present application and does not limit the scope of the present application in any way. Like reference numerals refer to like elements throughout the specification. The expression "and/or" includes any and all combinations of one or more of the associated listed items.
It should be noted that the expressions first, second, etc. in this specification are used only to distinguish one feature from another feature, and do not indicate any limitation on the features. Thus, a first body discussed below may also be referred to as a second body without departing from the teachings of the present application.
In the drawings, the thickness, size, and shape of an object have been slightly exaggerated for convenience of explanation. The figures are purely diagrammatic and not drawn to scale.
It will be further understood that the terms "comprises," "comprising," "includes," "including," "has," "have," and/or "having," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Moreover, when a statement such as "at least one of" appears after a list of listed features, it modifies the entire list rather than the individual elements in the list. Furthermore, when describing embodiments of the present application, the use of "may" means "one or more embodiments of the present application." Also, the term "exemplary" is intended to refer to an example or illustration.
As used herein, the terms "substantially," "about," and the like are used as terms of approximation and not as terms of degree, and are intended to account for inherent deviations in measured or calculated values that will be recognized by those of ordinary skill in the art.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1a to 1d show schematic views of embodiments of an imaging assembly according to the invention. As shown in fig. 1a to 1d, the imaging module 1 includes a spacer 2 and a plurality of photoelectric converters 3. The spacer 2 is opaque and has a plurality of light guide channels 21 formed therein. The photoelectric converters 3 are parallel to and spaced apart from the spacers 2 and respectively correspond to the light guide channels 21 one by one, so that light emitted from an object to be imaged passes through the light guide channels 21 and reaches the photoelectric converters 3.
Fig. 2 shows a detailed schematic diagram illustrating a single light-conducting channel 21 in an embodiment of the imaging assembly 1 according to the invention. As shown in fig. 2, according to the principle of straight-line propagation of light, light from an object on the object side can be received by the photoelectric converter 3 on the other side of the spacer 2 through the light guide channel 21.
The spacer 2, including the light guide channel 21, and the photoelectric converter 3 constitute the imaging assembly 1. The light guide channel 21 is surrounded by the spacer, which blocks light irradiated onto it; that is, the light guide channel 21 restricts which light can pass through. The spacer may be made of a light absorbing material, such as a ferrous metal. In addition, in other embodiments, the spacer 2 may be coated with a light blocking layer, which may be a diffuse reflective coating or a light absorbing coating.
The size of the light guide channel 21 is preferably one at which no significant diffraction occurs, i.e., the size of the light guide channel 21 is preferably 800 nm or more.
In other embodiments, the size of the light guide channel 21 is instead chosen such that light passing through the light guide channel 21 is diffracted, i.e., only certain wavelengths are diffracted, so that the channel performs a color filtering function.
Specifically, the light guide channel 21 is sized so that specific wavelengths of the incident light are diffracted, splitting the light so that light of each waveband is distributed onto pre-arranged photoelectric converters; that is, light of a desired waveband reaches a photoelectric converter while light of an undesired waveband lands on a non-photosensitive region. After the photoelectric converters receive the light of their corresponding wavebands, the electrical signals they provide can be processed by an algorithm to synthesize a color image. This process achieves a function similar to a Bayer array, so the Bayer array on the photoelectric converter may be eliminated in embodiments according to the present invention, further reducing the size.
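The wavelength-dependent splitting can be pictured with the elementary single-slit relation sin θ = λ/d. This is a textbook approximation used here purely for illustration; the patent does not state which diffraction model it relies on, and the 900 nm channel width below is a hypothetical value:

```python
import math

def first_min_angle_deg(wavelength_nm: float, width_nm: float) -> float:
    """Angle of the first single-slit diffraction minimum,
    from sin(theta) = wavelength / slit width (textbook approximation)."""
    return math.degrees(math.asin(wavelength_nm / width_nm))

# In a hypothetical 900 nm channel, longer (red) wavelengths are thrown
# to larger angles than shorter (blue) ones, so light of different
# wavebands can land on different pre-arranged photoelectric converters.
red = first_min_angle_deg(650.0, 900.0)
blue = first_min_angle_deg(450.0, 900.0)
```

The qualitative point is only that diffraction angle grows with wavelength for a fixed channel width, which is what allows the channel array to double as a color filter.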
As shown in fig. 2, the height of the light guide channel 21 is h, its width is d, and the maximum angle of the range of object-side light rays passing through the light guide channel 21 is defined as 2α.
Here tan α = d/h, and thus α = arctan(d/h).
The light guide channel 21, with height h and width d, thus constrains a portion of the object-side light rays; this constrained range is defined in the present invention as the collection angle 2α of the light guide channel 21. Object-side light can be transmitted to the image side through the light guide channel 21 only within the region of the collection angle; object-side rays outside this range are blocked by the spacer. The object-side region is accordingly divided into a collected region and an uncollected region. The relationship between the collected region and the image-side receiving region is bounded on the one hand by the light guide channel 21 and on the other hand by the size of the photoelectric converter 3.
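As a worked illustration of the geometry above (a sketch only; the function and variable names are mine, not the patent's), the collection angle 2α follows directly from the channel's width d and height h:

```python
import math

def collection_angle_deg(d: float, h: float) -> float:
    """Full collection angle 2*alpha (in degrees) of a light guide
    channel of width d and height h, using tan(alpha) = d / h."""
    alpha = math.atan(d / h)
    return 2 * math.degrees(alpha)

# A square aspect (d = h) gives alpha = 45 degrees, i.e. a 90-degree
# collection cone; a taller, narrower channel collects a smaller cone.
wide = collection_angle_deg(1.0, 1.0)
narrow = collection_angle_deg(1.0, 4.0)
```

The narrower the channel relative to its height, the smaller the cone of object-side rays it admits, which is exactly the confinement the spacer is designed to provide.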
A photoelectric converter 3 is provided in the image side receiving region so as to receive the object side light. On the basis, one or more imaging assemblies 1 form a photosensitive surface on the image side. The light from the object side is transmitted to the light sensing surface through the light guide channel 21, and is finally received by the photoelectric converter 3.
In the schematic diagrams shown in fig. 1a to 1d, only one cross section of an embodiment of an imaging assembly 1 according to the invention is shown. As can be seen from this cross section, the spacer 2 has a plurality of light-conducting channels 21 arranged uniformly therein. In this embodiment, the imaging assembly 1 may have a plurality of cross sections similar to the cross section, and thus the light guide channels 21 may form an array of light guide channels 21 in the spacer 2, and accordingly the photoelectric converters 3 respectively correspond to the positions of the light guide channels 21, and thus also form an array of photoelectric converters 3.
Fig. 1a to 1d also show the relationship of the position at which the photoelectric converter 3 according to the present invention is arranged and the acquisition area on the object side. In this mode, the photoelectric converter 3 of the present invention has no lens to confine received light, but receives light in all directions by the photoelectric converter 3.
The side of the photoelectric converter 3 facing the spacer 2 defines a photosensitive surface, which is positioned relative to an imaginary first boundary receiving surface formed on the image side by the collection range of each light guide channel 21.
The relationship between the position at which the photoelectric converter 3 is disposed and the object-side collection region, shown in fig. 1a to 1d, includes the following:
in fig. 1a to 1c, the photosensitive surface is located above, coincident with, and below the first boundary receiving surface, respectively, but each photoelectric converter 3 receives light from only one light guide channel 21; and
in fig. 1d, the light rays passing through the light guide channels 21 partially overlap, and one photoelectric converter receives light from a plurality of light guide channels 21. In this case, the overlapping light information received needs to be recombined into a complete image by a software algorithm.
In fig. 1b and 1c, some light information is not received by the photoelectric converter 3.
In fig. 1a, the light rays received by the photoelectric converters 3 through the light guide channels 21 from the object side have no overlapping region, and the area of the photosensitive surface is the largest. In this case, the photoelectric converters receive all the light from the corresponding light guide channels, and the light of the corresponding light guide channels illuminates the entire light receiving face of the corresponding photoelectric converters. It is therefore preferable to dispose the photoelectric converter 3 at this position; both the black bars and the striped bars in the figure are the photoelectric converters 3.
It is noted that the acquisition angle α is calculated in a similar manner as described above. In the actual design process, several basic parameters, such as the height h, the width d, the collection angle α, the pitch of the light guide channel 21, and the like of the light guide channel 21, may be set, and then the positional relationship between the first boundary receiving surface on the image side and the light guide channel 21 may be calculated step by using these basic parameters as the reference quantities for the design.
In this case, in order for the photoelectric converters to receive all the light from the corresponding light guide channels, and the light of the corresponding light guide channels illuminates the entire light receiving surface of the corresponding photoelectric converters, the vertical spacing H between the first boundary receiving surface on the image side and the lower surface of the light guide channel 21 (i.e., the lower surface of the spacer) may be determined according to the dimension D1 of the corresponding photoelectric converters by the following formula:
H = 0.5 * D1 / tan(α) - 0.5 * h
For example, in fig. 1a the collection angle α is 45°, i.e., the height h and the width d of the light guide channel 21 are equal, so H = 0.5 * D1 - 0.5 * h.
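As a quick check of the formula above, the spacing can be computed directly (a minimal sketch; the function name and the numeric values for D1, α and h are illustrative, not taken from the patent):

```python
import math

def boundary_spacing(D1, alpha_deg, h):
    """Vertical spacing H between the first boundary receiving surface
    and the lower surface of the light guide channel (the spacer),
    per H = 0.5 * D1 / tan(alpha) - 0.5 * h."""
    alpha = math.radians(alpha_deg)
    return 0.5 * D1 / math.tan(alpha) - 0.5 * h

# With alpha = 45 deg, tan(alpha) = 1, so H reduces to 0.5*D1 - 0.5*h.
print(boundary_spacing(D1=10.0, alpha_deg=45.0, h=4.0))  # ~3.0 (same units as D1 and h)
```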
In addition, after the size of the photoelectric converter 3 is selected, the size of the light guide channel 21 is determined, preferably such that diffraction does not occur. The spacing between the photoelectric converters 3, and thus the pitch of the light guide channels 21, is determined once the second boundary position is satisfied; this arrangement is advantageous because each photoelectric converter 3 then preferably does not receive light from adjacent light guide channels 21.
Fig. 3 shows a flow chart of a method of manufacturing an imaging assembly 1 according to the invention.
The method of manufacturing the imaging assembly 1 comprises the steps of:
S1: forming at least one light guide channel 21 in the light-impermeable spacer 2;
S2: arranging at least one photoelectric converter 3 parallel to and spaced apart from the spacer 2, in one-to-one correspondence with the light guide channels 21, so that light emitted by an object to be imaged passes through the light guide channels 21 and then reaches the photoelectric converters 3.
In this method, the light guide channel 21 is surrounded by the spacer, which blocks light irradiated onto it; that is, the light guide channel 21 restricts where light can pass. The spacer may be made of a light-absorbing material, such as a ferrous metal. In other embodiments, the spacer 2 may instead be coated with a light-blocking layer, which may be a diffuse reflective coating or a light-absorbing coating. The size of the light guide channel 21 is preferably one at which light diffraction does not occur, that is, preferably 800 nm or more.
In some embodiments, by contrast, the size of the light guide channel 21 is chosen such that light passing within it is diffracted, i.e., only certain wavelengths are diffracted, thereby performing a color filtering function.
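The two sizing regimes above can be summarized in a small helper (a hedged sketch; only the 800 nm threshold comes from the text — the function name and classification strings are illustrative):

```python
def channel_mode(width_nm):
    """Classify a light guide channel by its size, per the text:
    channels of 800 nm or more are treated as diffraction-free imaging
    channels; smaller channels diffract certain wavelengths and can
    act as color filters."""
    return "imaging" if width_nm >= 800 else "color filter"

print(channel_mode(1000))  # a 1000 nm channel avoids diffraction
print(channel_mode(550))   # a channel near visible wavelengths filters color
```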
Fig. 4a to 4b are schematic diagrams illustrating an embodiment of the touch screen 4 according to the present invention, wherein fig. 4b is an enlarged schematic diagram of portion A-A in fig. 4a.
As shown in fig. 4a to 4b, the touch screen 4 includes at least one spacer 2, at least one photoelectric converter 3, and a light-flowing member 5. The spacer 2 is light-tight and has at least one light guide channel 21 formed therein. The photoelectric converters 3 are parallel to and spaced apart from the spacer 2, and may be disposed in one-to-one correspondence with the light guide channels 21, so that light emitted from an object to be imaged reaches the photoelectric converters 3 after passing through the light guide channels 21. The light-flowing member 5 is located above the spacer 2.
In this embodiment, the light guide channel 21 is surrounded by the spacer, which blocks light irradiated onto it; that is, the light guide channel 21 restricts where light can pass. The spacer may be made of a light-absorbing material, such as a ferrous metal. In other embodiments, the spacer 2 may instead be coated with a light-blocking layer, which may be a diffuse reflective coating or a light-absorbing coating. The size of the light guide channel 21 is preferably one at which light diffraction does not occur, that is, preferably 800 nm or more.
In some embodiments, by contrast, the size of the light guide channel 21 is chosen such that light passing within it is diffracted, i.e., only certain wavelengths are diffracted, thereby performing a color filtering function.
A detailed schematic of the light-flowing member 5 according to the invention is shown in fig. 5. As shown in fig. 5, the light-flowing member 5 includes a light guide body 6, a light input portion 7, and a light output portion 8. The light guide body 6 includes a total reflection plate 9. The light input portion is located within the light guide body 6 and outputs light at an angle to the total reflection plate. Light emitted from the light input portion is totally reflected within the light guide body 6 and is output from the light output portion.
In the present embodiment, the light guide body 6 is implemented as a total reflection panel, defined herein as a panel capable of total reflection. The light guide body 6 is therefore capable of totally reflecting light rays. In this way, light can be reflected continuously within the light-flowing member 5, achieving the effect of flowing light; the light-flowing member 5 thus has a light-flowing area inside.
In addition, in the present embodiment, the light input portion of the light-flowing member 5, i.e., the light source, is located on one side of the light-flowing member 5 and serves as the light input end. The other side of the light-flowing member 5 serves as the light output end.
The light in the light-flowing member 5 may be invisible light such as near infrared light, or visible light. As long as the angle of incidence is properly controlled, total reflection is not affected.
In this embodiment, the exterior of the light-flowing member 5 is preferably the outside environment, i.e., ambient air. Therefore, the refractive index of the light-flowing member 5 is preferably larger than that of air to satisfy the condition for total reflection.
When a substance replaces the original external environment on the input area of the light-flowing member 5, i.e., on its upper surface as shown in fig. 5, the condition for total reflection within the light-flowing member 5 is no longer satisfied. For example, the refractive indices of the texture of the user's finger skin and of sweat on the finger surface are higher than that of air, so the total reflection condition is broken and light passes through the upper surface (input area) of the light-flowing member 5 to reach the user's finger.
The finger surface itself is highly non-uniform. Generally, it is divided into ridges and valleys, the ridges being the portions of the skin texture higher than the valleys. When a finger touches the touch screen 4, the ridges make surface contact with the light-flowing member 5 while the valleys do not; the surface of the light-flowing member 5 is preferably a transparent medium, such as glass. Light striking the glass where a fingerprint ridge is in contact is therefore diffusely reflected, while light striking the glass opposite a fingerprint valley is totally reflected, since the valley does not contact the glass and air remains in the gap. Consequently, in the information captured by the photoelectric converter 3, the light intensity corresponding to fingerprint ridges is high and that corresponding to fingerprint valleys is low.
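The ridge/valley intensity contrast described above lends itself to a simple threshold rule. The sketch below assumes normalized intensity readings and a hand-chosen threshold; both are illustrative, not from the patent:

```python
def classify_fingerprint(readings, threshold=0.5):
    """Label each photoelectric converter reading as 'ridge' or 'valley'.
    Per the text, frustrated total reflection at ridge contact points
    yields high intensity at the converter, while valleys (air gap,
    total reflection preserved) yield low intensity."""
    return ["ridge" if r > threshold else "valley" for r in readings]

print(classify_fingerprint([0.9, 0.2, 0.85, 0.15]))
```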
In an embodiment, the photoelectric converter 3 is preferably a CCD or CMOS sensor. For example, after the light signal is received, the portions corresponding to fingerprint ridges appear darker in color and the portions corresponding to fingerprint valleys appear lighter.
In this embodiment, the photoelectric converter 3 receives a signal indicating the change in light intensity caused by the pressed finger, so an excitation signal is output, relative to the original output of the photoelectric converter 3, at the position where the user's finger presses. The light intensity of natural light received from the external environment serves, for example, as a reference signal, and the reference signal may be set manually for different use scenarios.
Based on the change of the excitation signal of the photoelectric converter 3 relative to the reference signal, it can be determined that an object is pressed on the light guide body 6.
In addition, the magnitude of the pressing force can be obtained from the change in the signal value. Specifically, dividing the finger contact into fingerprint units of relatively small area, the size of the contact area indicates the magnitude of the contact force: when the pressing force increases, the finger muscles deform, enlarging the contact area between the skin texture and the upper surface.
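A minimal sketch of that contact-area reasoning, assuming the frame is a list of per-unit intensity readings and that a reading above a threshold indicates a fingerprint unit in contact (function name, threshold and values are illustrative):

```python
def contact_area_fraction(frame, threshold=0.5):
    """Fraction of fingerprint units in contact with the surface.
    A harder press deforms the finger and raises this fraction, so it
    serves as a qualitative proxy for the pressing force."""
    touching = sum(1 for reading in frame if reading > threshold)
    return touching / len(frame)

print(contact_area_fraction([0.9, 0.1, 0.8, 0.2]))  # half the units touching
```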
In addition, blood pulses through the vessels of the finger over time, so the contact between the finger and the touch screen 4 changes slightly; for example, the ridges of the finger texture affect the light differently as the blood pulses. Thus, the touch screen 4 can be used for living body detection in this manner.
Furthermore, sweat pores in the finger texture also correspond to valleys: like the valleys, they do not frustrate total reflection at the touch screen 4, so they appear with low luminance and can likewise serve as a living body detection feature.
Fig. 6a to 6c show schematic views of another embodiment of a touch screen 4 according to the present invention. In this embodiment of the touch screen 4, pressure is measured by providing an elastic structure 9 above the corresponding light guide channel 21. In particular, as shown in fig. 6a, the elastic structure 9 is preferably of a transparent material, such as a film.
As shown in fig. 6b to 6c, after the user presses the film with a finger, the position and shape of the film change correspondingly with the pressing; that is, the user's press on the elastic structure 9 changes the optical properties of the film. In this way, pressing with different forces changes the light information correspondingly. In this embodiment, the elastic structure 9 is preferably disposed above the imaging assembly 1.
As can be seen from fig. 6b to 6c, the light emitting units 10 are disposed alternately with the unit imaging assemblies 1, specifically with reference to the arrangement of the light emitting elements of an OLED screen. When a user's finger presses the elastic film, the film bends or deforms and its optical properties change. In this case, part of the light originally emitted from the light emitting unit toward the elastic film is reflected to the photoelectric converter 3. As the pressing force increases, the amount of light received by the photoelectric converter 3 also increases, so the change in pressing force can be obtained; the pressing force can be measured by relating it to the amount of light information.
Fig. 7a to 7b show schematic views of another embodiment of a touch screen 4 according to the invention, in which another way of measuring the pressing force on the touch screen 4 is used. This embodiment is similar to that shown in fig. 6b to 6c, again with the elastic structure 9 located above the imaging assembly 1. In contrast, as shown in fig. 7a, no light emitting unit is included; instead, a filler block 11 is provided in the elastic structure 9.
Specifically, in an embodiment, the filler block 11 is provided in the elastic film, and the single imaging assembly 1 receives an image of the block. The size of the image formed by the filler block varies with the pressing force. Meanwhile, the size of the block in the image formed in each unit imaging assembly 1 is associated with the pressure, so that the force application point and the force area can be measured. In particular, when used in conjunction with an OLED screen, a partial area may be provided specifically for measuring pressure.
The present application also provides a camera module. The camera module includes the above-described imaging assembly 1 and a display screen, wherein the imaging assembly 1 is located below the display screen.
In this embodiment, a plurality of imaging assemblies 1 are combined with one of the existing OLED screen, LCD screen, and LED screen to form a device capable of image pickup and display.
This embodiment follows the development trend toward full screens in existing mobile terminals such as mobile phones: the front camera is eliminated, so the screen-to-body ratio of the display can be further increased.
Taking an OLED screen as an example, the OLED screen comprises a substrate, a cathode layer, an anode layer and a light emitting layer (organic light emitting diode, OLED). A new OLED-based technology, flexible organic light emitting display (Flexible OLED, FOLED), may in the future enable highly portable, foldable displays suitable for the present invention.
In this embodiment, the imaging assembly 1 is used in conjunction with an OLED screen and includes a substrate structure for support and encapsulation, a cover plate for protection, a photoelectric converter 3 for receiving light and transmitting information, and the OLED screen. The spacer is used for blocking light.
When used in conjunction with an FOLED, the structure likewise comprises a substrate, a cathode, an anode and a light emitting layer, but FOLED technology allows a flexible body. In particular, the flexible substrate is made of a soft material, so the package is portable and bendable. For example, a metal foil is used as the FOLED substrate, the ITO film layer in the anode is replaced by a (flexible) conductive polymer, and the whole FOLED structure is encapsulated by multilayer thin-film packaging.
In addition, in an actual structure, the substrate of the package may be a transparent or an opaque material; an opaque substrate may therefore preferably serve as the spacer.
Note that the anode layer is generally implemented as a light-transmitting ITO film, being the structure through which light exits. In this configuration, the cathode layer may also act as a spacer.
In addition, in an inverted structure (IOLED), since the cathode layer serves as the emission side, the anode layer may instead serve as the spacer in this configuration.
By bending the flexible body, the shooting range can be enlarged, realizing multi-angle variation. The imaging assembly 1 may be arranged on the substrate; in the circuit, the anode and cathode of the OLED may be used as the power supply. The OLED light emitting layer may be disposed between every two light guide channels 21.
It should be understood that although in the present embodiment the imaging assembly 1 is combined with an OLED screen, in a similar solution, for example, the imaging assembly 1 may be combined with an LED screen, an LCD screen, without limiting the invention.
Fig. 8 shows a schematic view of an embodiment of a camera module according to the invention. In particular, in combination with an LED screen or an LCD screen, the liquid crystal layer 12 in the screen can act as a propagation channel for turning light on or off, so the function of the imaging assembly 1 capturing the outside world can be realized in this way. The liquid crystal layer acts like the aperture of a camera: it controls the amount of light entering the light guide channel 21 and thereby the circle of confusion of the light on the image plane, so this combination can adjust the light intake to blur the background. When the aperture becomes large (the amount of incoming light increases), the diameter of the circle of confusion becomes large and the depth of field decreases, so the background is not easily imaged sharply; when the aperture becomes small (the amount of incoming light decreases), the diameter of the circle of confusion becomes small and the depth of field increases, so the background is easily imaged sharply.
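The aperture/circle-of-confusion relation described above can be made concrete with the standard thin-lens blur formula (textbook optics supplied for illustration; this formula and the numbers are not from the patent):

```python
def coc_diameter(aperture, focal_len, focus_dist, obj_dist):
    """Blur-spot (circle of confusion) diameter on the image plane for
    an object at obj_dist when a lens of focal length focal_len is
    focused at focus_dist; `aperture` is the aperture diameter.
    Standard thin-lens result: c = A * |S2 - S1| / S2 * f / (S1 - f)."""
    return (aperture * abs(obj_dist - focus_dist) / obj_dist
            * focal_len / (focus_dist - focal_len))

# Larger aperture -> larger blur circle -> shallower depth of field.
wide = coc_diameter(aperture=4.0, focal_len=50.0, focus_dist=1000.0, obj_dist=3000.0)
narrow = coc_diameter(aperture=2.0, focal_len=50.0, focus_dist=1000.0, obj_dist=3000.0)
print(wide > narrow)  # True: the background object blurs more at the wide aperture
```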
Fig. 8 illustrates the use of a controlled liquid crystal layer to adjust the depth of field range. In this way, different background blurring effects can be achieved according to the depth of field.
In addition, in the background blurring method, the photoelectric converters 3 may be controlled at intervals to further expand the reception range. For example, by operating the first and third photoelectric converters while turning off the second and fourth, crossing light is no longer received; the working range of the photoelectric converters is expanded, i.e., the object distance for clear imaging is increased.
In this embodiment, the light guide channel 21 is preferably a circular hole. Of course, since the light guide channel 21 may be extremely small compared to the object, other hole shapes may be chosen provided the pinhole imaging condition is satisfied. An axisymmetric pattern is preferable, however, so that the light information collected by the photoelectric converter 3 through the light guide channel 21 is symmetric for later computation and processing.
Diffusely reflected light from minute details of an object on the object side passes through the light guide channel 21. Because the channel restricts the range through which light can pass, the diffusely reflected rays form, on the image side, a spot shaped like the channel, and the image is built up by superimposing such spots for every minute detail of the object. Under this model, selecting a hole with an axisymmetric shape can increase the resolution of the superimposed image of the object.
When the object is square and the light guide channel 21 is circular, the image boundary is circular, and the resolution is not high.
When the object is square and the light guide channel 21 is square, the image boundary is square, and the resolution is high.
In this embodiment, the light spot formed on the image side by light from a minute detail on the object side passing through the light guide channel 21 is taken as the minimum resolution.
In addition, using the light emitting function of the OLED screen, light irradiated onto an object is diffusely reflected, and the reflected light is received by the photoelectric converter 3 through the light guide channel 21. In this way, for example, fingerprint recognition and self-portrait shooting can be realized.
A fingerprint recognition function may also be implemented in this embodiment, similarly to the imaging assembly 1 described above; specifically, judgment is made based on the occluded image, from which the size of the photographed subject or of the contact surface can be determined.
Fig. 9a to 9b show schematic views of another embodiment of a camera module according to the invention; in particular, they show its imaging process. As shown in fig. 9a to 9b, in this embodiment the object itself emits light or reflects light diffusely, the light travels in straight lines through the light guide channel 21, and pinhole imaging occurs in the unit imaging assembly 1.
After passing through the light guide channel 21, the light is received by the photoelectric converter 3, which converts it into a signal; the signal is processed and an image of the object is output.
Referring to fig. 9b, when the object is located at the boundary between the first imaging assembly 1 and the second imaging assembly 1, the two assemblies together acquire the complete information of the object. Therefore, only the information received by the first and second imaging assemblies 1 needs to be superimposed to obtain a complete picture of the object.
Referring to fig. 9a, when the object is outside the boundary, for example within the collection angles of the first through sixth imaging assemblies 1, the information collected by those assemblies overlaps multiple times; the complete object image can be output after combining the overlapped image regions and superimposing the non-overlapping information. This method uses a large number of photoelectric converters 3, but the shooting range is large.
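The superposition of partially overlapping images described above can be sketched as a simple 1-D stitch; the pixel values and the assumption that the overlap width is known in advance are illustrative:

```python
def stitch(strips, overlap):
    """Join image strips from adjacent imaging assemblies, keeping the
    `overlap` repeated pixels only once, as in the superposition of
    first/second assembly information described above."""
    out = list(strips[0])
    for strip in strips[1:]:
        out.extend(strip[overlap:])
    return out

a = [10, 20, 30, 40]   # strip from the first imaging assembly
b = [30, 40, 50, 60]   # strip from the second; its first two pixels repeat a's last two
print(stitch([a, b], overlap=2))  # [10, 20, 30, 40, 50, 60]
```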
The camera module photographs objects in this way. For example, moving the screen across a person's business card or an object surface performs a scanning capture, so the surface information of the object can be photographed at short distance, i.e., short-distance photographing with high accuracy is realized.
Fig. 10 shows a schematic view of another embodiment of a camera module according to the invention. Unlike the above-described embodiment of the camera module, an optical element for converging parallel light rays is provided above each of the imaging assemblies 1 in this embodiment. The optical element is for example a convex lens 13, thereby further constraining the acquisition angle of the imaging assembly 1.
As shown in fig. 10, the imaging assembly 1 performs pinhole imaging while receiving only parallel light rays. The acquisition angle of the imaging assembly 1 is thus fixed, i.e., parallel light is acquired over a fixed width. Because the problem of overlapping information for objects outside the boundary in the above embodiment is reduced, only a simple superimposition is required after the images are acquired, and objects at a distance can be photographed with high accuracy.
Fig. 10 shows an ideal case in which the optical design causes each photoelectric converter 3 to collect parallel light of fixed width passing through its light guide channel 21, improving the utilization rate of the photoelectric converters 3. Alternatively, a different convex lens 13 may be provided; although this increases the overlapping area, the acquisition angle becomes larger and the shooting range correspondingly larger.
In addition, it should be understood that the light rays may be converged either by the convex lens 13 corresponding to the photoelectric converter 3 or by a concave lens facing away from the photoelectric converter 3.
As shown in fig. 11, this embodiment collects only light rays from the parallel region on the object side, thereby eliminating interference from unnecessary stray light. For the same reason, when the composite image is computed later, stitching the information from each photoelectric converter 3 is facilitated, making the calculation simpler.
Fig. 12 shows a schematic view of another embodiment of a camera module according to the invention. Unlike the above-described embodiment of the camera module, each of the imaging assemblies 1 in this embodiment is provided with a superlens 14 on the side close to the photoelectric converter 3 to condense light.
The superlens 14 may converge the light. Specifically, light is concentrated using nanostructures in the superlens 14 that are smaller than the wavelength of the light. These structures may differ in shape, size and arrangement to block, absorb, enhance or refract photons so that the superlens 14 focuses light. Such a superlens 14 is arranged above the single imaging assembly 1, preferably above the photoelectric converter 3. The advantage is that light passing through the light guide channel 21 can be concentrated into a smaller area, increasing its brightness. This is particularly suitable when the number of photoelectric converters 3 in the imaging assembly 1 is small, since it ensures that each photoelectric converter 3 receives sufficient light information, enabling brighter shooting than without the superlens 14. The superlens 14 also uses the diffracted light to cancel part of the rays, improving convergence while eliminating some stray light.
The superlens 14 may also perform filtering. Light of different wavelengths is diffracted according to the dimensions of the superlens 14, so the rays can be converged and only light within a given wavelength range received. With this arrangement, the Bayer filter in the camera module can be eliminated: the subsequent RGB algorithm is defined by the positions at which the different wavelengths are diffracted in the design, and removing the filter further reduces the module size.
In this embodiment, the photoelectric converter 3 in the imaging assembly 1 can perform color imaging when RGB pixels are selected; monochrome pixels are easier to manufacture and suitable for less demanding imaging modes such as fingerprint recognition.
Of course, it should be understood that the variations in this manner may also be incorporated with an LCD.
Fig. 13 shows a schematic view of another embodiment of a camera module according to the invention. Unlike the previous embodiment of the camera module, each of the imaging assemblies 1 in this embodiment is provided with an optical path-deflecting element 15 on the light guide channel 21 to deflect the optical path.
In particular, a mirror is preferably added by means of a MEMS device. The MEMS device can also move the reflecting surface, enabling large-angle shooting and a scanning capture mode without moving the whole imaging device. As mentioned above, since the imaged area increases with the shooting distance, the images captured by the single imaging assemblies 1 overlap, and the partially overlapped images contain repeated information suitable for registration between the two images. Compared with panorama shooting that requires rotating a mobile phone, the image captured by this camera module is therefore more stable, and the picture shows no stitching traces.
In addition, in this embodiment, the information of the overlapped central area undergoes superimposition processing, so the resolution of the central area of the image is high. Meanwhile, the periphery of the image is received by only a few imaging assemblies 1, so the edges are less sharp, realizing edge blurring.
The present application also provides a method of distance sensing, i.e., a method of remote imaging. Fig. 14 shows a flow chart of a method of distance measurement according to the invention.
The method comprises the following steps:
S1: forming a plurality of light guide channels 21 in the light-impermeable spacer 2;
S2: arranging a plurality of photoelectric converters 3 parallel to and spaced apart from the spacer 2, in one-to-one correspondence with the light guide channels 21, so that light emitted by an object to be imaged passes through the light guide channels 21 and reaches the photoelectric converters 3;
S3: obtaining a plurality of images of the object to be imaged formed through the plurality of light guide channels 21 according to the electrical signals output by the photoelectric converters 3; and
S4: calculating the distance to the object to be imaged from the degree of repetition of the plurality of images.
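One way to see why the repetition degree encodes distance is a simple pinhole geometry model. The model below — fields of width 2·Z·tan α for channels at pitch p — is an assumption introduced for illustration, not a formula stated in the patent:

```python
import math

def distance_from_repetition(r, pitch, alpha_deg):
    """Assumed model: two adjacent light guide channels at spacing
    `pitch`, each with collection half-angle alpha, see fields of width
    2*Z*tan(alpha) at object distance Z.  Their repetition degree is
    r = 1 - pitch / (2*Z*tan(alpha)); solving for Z gives the distance."""
    t = math.tan(math.radians(alpha_deg))
    return pitch / (2.0 * t * (1.0 - r))

# Half the image repeated (r = 0.5) with unit pitch and a 45-degree angle:
print(distance_from_repetition(0.5, pitch=1.0, alpha_deg=45.0))  # ~1.0 (same units as pitch)
```

As the model predicts, a higher repetition degree corresponds to a more distant object, which is the relationship step S4 exploits.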
Specifically, when the imaging assembly 1 of the above embodiment is combined with an OLED screen, the OLED screen emits light, the light is diffusely reflected by the object to be photographed and then received by the imaging device, and an image of the object is output. In the imaging method of the present application, images of the object captured at different boundaries have different degrees of repetition. The repetition degree refers to the area of repeated pixels of the whole, or of a local part, of the photographed object, and the distance between the object and the camera module can be judged from this area.
Fig. 15a to 15d show schematic views of an embodiment of a method of distance measurement according to the invention.
In this embodiment, each single imaging assembly 1 photographs the subject and outputs an image. The degree of repetition of the object, or of a local part of the object, across the images output by the different single imaging assemblies 1 is then judged.
Referring to fig. 15a to 15d, after the different images are recognized, the degree of repetition is identified with respect to the different boundary lines, and is calculated by scaling the images based on those boundary lines.
For example, as shown in fig. 15a, the object or object part is photographed only between the first boundary line and the second boundary line. In practice, the user can photograph an object close to the imaging assembly 1 for pre-calibration; for example, the object is placed at a predetermined distance, say 20 cm, which is taken as the first boundary line.
In this method, the object surface must have a color different from the external environment so that it can serve as a feature for identifying the object; the distance of the object is then judged from the degree of repetition of the object information in later image information.
Since the repetition degree differs between different boundary lines, this too is preset by calibration. The advantage of pre-calibration is that the distance can be detected in real time after the object moves; when shooting, the focal length can be changed correspondingly once the distance is identified, so the object is captured in time. In this way dynamic, even real-time, focusing can be realized. For commonly photographed everyday subjects, an image such as the user's own face can be captured and pre-stored first, so that during live streaming or video capture a fast, sharp output is obtained as the distance keeps changing.
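The pre-calibration step above can be sketched as a lookup table mapping measured repetition degrees to known distances, with linear interpolation between calibration points (the function name and table values below are illustrative assumptions):

```python
def calibrated_distance(r, table):
    """Interpolate an object distance from the repetition degree `r`
    using pre-calibrated (repetition, distance) pairs, clamping at the
    table ends."""
    pts = sorted(table.items())
    if r <= pts[0][0]:
        return pts[0][1]
    for (r0, d0), (r1, d1) in zip(pts, pts[1:]):
        if r <= r1:
            return d0 + (d1 - d0) * (r - r0) / (r1 - r0)
    return pts[-1][1]

# Hypothetical calibration captured during setup: repetition degree -> distance in cm.
table = {0.2: 10.0, 0.5: 20.0, 0.8: 40.0}
print(calibrated_distance(0.65, table))  # 30.0 cm, midway between the 20 and 40 cm points
```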
Fig. 16 shows a schematic view of another embodiment of a camera module according to the invention. It was mentioned above that the OLED screen can be implemented as a flexible screen. As shown in fig. 16, in this embodiment, the driver 16 is provided on the substrate of the OLED screen to adjust the distance between the photoelectric converter 3 and the light guide channel 21 in the single imaging module 1 and the degree of curvature. The driver 16 is preferably arranged on the substrate below the photoelectric converter 3.
In this embodiment, the distance of the photosensitive surface from the light guide channel 21 can be adjusted, or the depths of field of different single imaging assemblies 1 can be made different.
In addition, the light-sensing surface is bent, so that the focusing of the light-sensing surface of the adjusting part can be realized.
This way, as shown, an image with background blurring can be taken.
In another embodiment, similar to the OLED approach, the color filter in the LCD structure can also serve as the color filter of the imaging assembly according to the present application, i.e. the display screen and the camera module share the color filter structure. Color filtering can likewise be achieved in this way. Of course, lenses may be used instead of the color filters.
In another embodiment, the aperture of the light guide channel 21 is controlled. When the aperture is close to a certain wavelength, light of that wavelength is diffracted by the light guide channel 21. The selectivity of the diffracting aperture for light within a specific wavelength range is used to realize a color filtering function over that range, so that the color filter sheet can be eliminated.
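The wavelength selectivity of a small aperture follows from standard diffraction theory, not from the patent itself: for a circular aperture of diameter D, the first diffraction minimum lies at sin θ = 1.22 λ/D, so when the aperture is comparable to a wavelength, different colors spread to markedly different angles. A sketch with an illustrative aperture size:

```python
import math

# Sketch of aperture wavelength selectivity: for a circular aperture of
# diameter D, the first diffraction minimum sits at sin(theta) = 1.22*lambda/D,
# so an aperture near a given wavelength spreads that wavelength strongly.
# The 1600 nm aperture below is an illustrative value only.
def first_minimum_angle_deg(wavelength_nm, aperture_nm):
    s = 1.22 * wavelength_nm / aperture_nm
    if s >= 1.0:
        return None  # aperture too small: light is spread over the hemisphere
    return math.degrees(math.asin(s))

for wl in (450, 550, 650):  # roughly blue, green, red (nm)
    print(wl, first_minimum_angle_deg(wl, 1600))
```

Longer wavelengths diffract to larger angles, which is what allows an aperture array to steer particular wavelength bands toward predetermined photoelectric converters.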
The application also provides a light field camera. The light field camera may have a microlens array, and may further include: a spacer 2, the spacer 2 being light-tight and having at least one light-conducting channel 21 formed therein; and at least one photoelectric converter 3, the photoelectric converter 3 may be parallel to and spaced apart from the spacer 2, and may respectively correspond to the light guide channels 21 one to one.
The microlens array may be located between the spacer 2 and the photoelectric converter 3, and light emitted from an object to be imaged passes through the light guide channel 21 and the microlens array and reaches the photoelectric converter 3.
Fig. 17 shows a schematic diagram of a prior art light field camera. As shown, a conventional light field camera records the light field by adding a microlens array at the focal plane of an ordinary lens, and then realizes digital refocusing by a later algorithm.
Light from the photographed object is imaged after passing through the main lens, then passes through the microlens array and is imaged again on the pixels of the photoelectric converter 3 behind it. After passing through the main lens, the object forms images on different pixel regions of the photoelectric sensor through each microlens in the array.
Along the optical path, any light ray passes through the lens, the microlens array and the photoelectric sensor in a conjugate relation, from which the direction information of the light can be obtained. Fig. 17 takes the planar (two-dimensional) case as an example; the three-dimensional case follows by analogy.
Fig. 18a to 18b show schematic diagrams of a prior art light field camera.
Since the focal length of a microlens is much smaller than that of the main lens, the main lens can be regarded as being located at infinity with respect to the microlens. Therefore, for example, the vertical bar region on the main lens in fig. 18a can be considered to pass through one microlens and then be focused onto a certain pixel behind it; and since the microlens is on the order of one hundredth the size of the main lens, that single pixel of the photosensitive chip can be considered to collect all the light information within the marked region. In this way one light ray inside the camera is recorded. Similarly, every other pixel corresponds to one light ray.
Similarly, as shown in fig. 18b, each pixel of the photosensitive chip behind a microlens can also be regarded as receiving light transmitted from a different region of the main lens. Since the position of each pixel is fixed, the position of the microlens corresponding to each pixel is also fixed; and since light rays travel in straight lines, the direction information of the rays can be obtained.
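The direction recovery described above can be sketched as follows; the two-dimensional geometry, coordinates, and focal length are illustrative assumptions, not values from the patent:

```python
# Sketch: each pixel behind a microlens records one ray, whose direction is
# fixed by the pixel position and the microlens center (both known from the
# geometry). 2D case; coordinates and focal length are illustrative.
def ray_direction(microlens_x, pixel_x, microlens_focal):
    """Unit direction (dx, dz) of the ray from the pixel through the lens center."""
    dx = microlens_x - pixel_x
    dz = microlens_focal              # pixel plane is one focal length behind
    norm = (dx * dx + dz * dz) ** 0.5
    return (dx / norm, dz / norm)

dx, dz = ray_direction(0.0, 0.001, 0.1)  # pixel slightly off-axis
```

Because both positions are fixed by manufacture, the direction of every recorded ray is known once the pixel index is known; this is the four-dimensional (position plus direction) information a light field camera stores.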
Light field cameras are also typically required to have a refocusing function. Fig. 19 shows a refocusing schematic of a prior art light field camera.
Since the direction information and the intensity information of all the light rays are obtained via the microlens array and the photoelectric converter 3, all the light rays can be refocused onto different planes simply by a similar-triangle transformation, using an algorithm matched to the arrangement of the microlens array 17 and the pixels of the photosensitive chip.
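A minimal shift-and-add sketch of this kind of refocusing (this is the standard digital-refocusing technique; the viewpoint keys, the shift scale alpha, and the synthetic data are illustrative, not the patent's own algorithm):

```python
import numpy as np

# Shift-and-add refocusing sketch: each sub-aperture view is shifted in
# proportion to its viewpoint offset (the "similar triangles") and the
# shifted views are averaged. alpha selects the refocus plane.
def refocus(sub_images, alpha):
    """sub_images: dict {(u, v): 2D array} keyed by viewpoint offset."""
    acc = None
    for (u, v), img in sub_images.items():
        shifted = np.roll(np.roll(img, int(round(alpha * u)), axis=0),
                          int(round(alpha * v)), axis=1)
        acc = shifted if acc is None else acc + shifted
    return acc / len(sub_images)

# Two synthetic views of a point source, displaced by one pixel of parallax.
base = np.zeros((8, 8)); base[4, 4] = 1.0
views = {(0, 0): base, (1, 0): np.roll(base, 1, axis=0)}
focused = refocus(views, alpha=-1.0)  # this alpha re-aligns the two views
```

Sweeping alpha refocuses the same captured data onto different planes, which is why no optical refocusing is needed after capture.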
Fig. 20 shows a refocusing effect diagram of a prior art light field camera. As shown in the figure, during shooting, the camera focuses on the rear louver, and the focus is shifted to the portrait through refocusing.
Fig. 21 shows a schematic view of an embodiment of a light field camera according to the present invention. In this embodiment, an array of light-conducting channels 21 is employed in place of the main lens of a conventional light field camera.
Due to the refocusing function of the light field camera, the camera module does not need to be accurately calibrated during assembly, and the out-of-focus image can be refocused through an algorithm.
By carrying the light field camera, the requirement of the camera module on assembly precision is reduced, and the production cost is reduced accordingly.
Fig. 22 shows a refocusing effect diagram of an embodiment of a light field camera according to the invention. The image in the left part of fig. 22 uses an f/4 aperture. Because the depth of field is small, when focus is on the person in the middle of the image, the person lower in the image cannot be imaged clearly.
With the smaller f/22 aperture in the middle image of fig. 22, the depth of field becomes larger and most of the people are imaged clearly. At the same time, because of the insufficient amount of incoming light, more noise appears and the imaging quality degrades.
In the image shown in the right part of fig. 22, a sufficient amount of image depth information is acquired at the time of shooting. In post-processing, a plurality of refocused images at different focal depths are obtained by a refocusing algorithm; the sub-images received by the pixels of the photosensitive chip are traversed over these refocused images, the depth of whichever refocused image renders a sub-image sharpest is taken as the depth of that sub-image, and the sub-image is refocused accordingly. The refocused sub-images are then stitched together, so that the whole frame has a better imaging effect: a large depth of field is achieved without sacrificing brightness and without introducing noise.
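The traversal over refocused images can be sketched as follows, using variance of a Laplacian as the sharpness score (a conventional choice; the score and the synthetic data are assumptions, since the patent does not specify the sharpness metric):

```python
import numpy as np

# Sketch of per-sub-image depth selection: evaluate each candidate depth's
# refocused version with a sharpness score and keep the sharpest one.
def laplacian_var(img):
    """Variance of a discrete Laplacian; higher means sharper."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return float(lap.var())

def best_depth(refocused_stack):
    """refocused_stack: dict {depth: 2D array}; returns the sharpest depth."""
    return max(refocused_stack, key=lambda d: laplacian_var(refocused_stack[d]))

sharp = np.zeros((6, 6)); sharp[3, 3] = 1.0   # a crisp point feature
blurred = np.full((6, 6), sharp.mean())       # the same energy fully smeared
print(best_depth({0.5: blurred, 1.0: sharp}))  # -> 1.0
```

The precision of this approach is limited by how finely the refocused stack samples depth, which is exactly the limitation the following paragraphs discuss.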
When a light field camera is used with a conventional lens imaging system, differences in depth (object distance) affect imaging sharpness. The method of traversing the refocused images with each sub-pixel to find the sharpest depth therefore cannot obtain an exact depth: the depth precision of a sub-pixel is limited by the depth sampling of the refocused image sequence and by the sharpness evaluation algorithm.
In this embodiment, the main lens is replaced by an array of light guide channels 21. Changing the imaging depth (object distance) of the light guide channel array changes only the size of the image and the image acquisition range; the images at all depths are sharp. Algorithmically, as long as one image with a large acquisition range is taken (i.e. with a small magnification, preferably containing the region whose depth is to be obtained), each sub-image is compared with its corresponding region in that image to obtain a size ratio, from which the depth information can be acquired.
Embodiments according to the invention have the following advantage over light field cameras employing a main lens: with the main lens replaced by the array of light guide channels 21, all images have consistent sharpness, only one image at a certain depth needs to be taken, and traversing all sub-images of the photosensitive pixels to compare magnifications requires little computation. In the main lens scheme, by contrast, the depth obtained for each sub-pixel is one of the depths in the pre-computed refocused image sequence, and its accuracy depends on the step size of the refocused depth values; the finer the sampling, the larger the amount of computation.
In other words, the main lens scheme is like preparing a library of answers in advance, comparing the sub-images of the photosensitive chip against the answers, and taking the depth of the closest answer; whether the depth is determined correctly depends on how complete the answer library is. In the light guide channel 21 array scheme, all answers are derived from the single solution with the largest amount of information, and every sub-image of the photosensitive chip can solve for its depth by means of it.
The depth in this embodiment is calculated as follows: compared with the sharpness comparison of a conventional light field camera, the comparison of corresponding image size magnifications is more accurate as a numerical evaluation, and the depth obtained is more accurate.
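Under a pinhole-like model of the light guide channel, the magnification comparison reduces to a simple ratio. An illustrative sketch (the function name and all numbers are hypothetical, not from the patent):

```python
# Sketch of the magnification-based depth estimate: for a pinhole-like
# light guide channel, image size scales inversely with object distance,
# so depth = reference_depth * reference_size / measured_size.
def depth_from_magnification(ref_depth, ref_size, measured_size):
    return ref_depth * ref_size / measured_size

# A feature spanning 40 px in a reference shot at 20 cm now spans 20 px:
print(depth_from_magnification(20.0, 40.0, 20.0))  # -> 40.0 (cm)
```

Because the ratio is a continuous quantity rather than a choice among pre-computed refocus depths, the estimate is not quantized by a depth step size.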
In addition, if the object distance is less than one focal length of the main lens, a virtual image is formed, and its light rays pass through the microlens array and fall on the chip together with the rays of the real images formed by other parts. Refocusing virtual and real images requires different algorithms, i.e. the virtual image needs additional processing, yet the chip can hardly distinguish the light of the virtual image from that of the real image and process them differently. Therefore, a light field camera adopting a conventional lens imaging system can hardly realize macro shooting.
In this embodiment, the lens in front of the conventional light field camera is replaced by the light guide channel; regardless of the relation between object distance and focal length, a real image is presented, a uniform image algorithm can be used for refocusing, and macro shooting with a large depth of field is realized.
In addition, a disadvantage of light field cameras is insufficient spatial resolution. With the same number of pixels, a conventional camera records a two-dimensional image, so the pixel count is fully used. A light field camera records a four-dimensional light field and then integrates it into a two-dimensional image; information is lost in the integration (a planar lattice collapses into a linear lattice), the effective pixel count of the two-dimensional image is reduced, and the result is insufficient spatial resolution.
The spatial resolution is proportional to the number of microlenses in the array. If a conventional mobile phone camera module carried such a system, the maximum number of microlenses would be limited by the amount of transmitted light, i.e. by the aperture, and the room to enlarge the aperture in the optical design of a lens is very limited. With the imaging system based on the light guide channels 21, the light flux can be increased by expanding the distribution of the light guide channels, leaving huge room for improvement, which can largely compensate for the insufficient spatial resolution of the light field camera.
Fig. 23 shows a schematic view of another embodiment of a light field camera according to the present invention, in which a high-grade lens is matched with a large photosensitive chip. This embodiment differs from the above-described embodiment in that, instead of replacing the main lens of a conventional light field camera with an array of light guide channels 21, it replaces the microlens array with the array of light guide channels 21.
After the microlens array of the light field camera is replaced by the array of light guide channels 21, the scalability with which the array of light guide channels 21 matches the screen allows the photosensitive chip to be designed larger, adapting to the performance of a high-grade lens, so that the lens design is no longer limited by the chip's photosensitive area.
The invention further provides a multi-view depth camera, which, by arranging the light guide channels 21 in a staggered manner, realizes depth recognition of objects within the overlapping range after the acquired images are correspondingly superimposed. Fig. 24 shows a schematic diagram of an embodiment of the multi-view depth camera according to the present invention. As shown, the multi-view depth camera includes: a spacer 2, the spacer 2 being opaque and having a plurality of light guide channels 21 formed therein; and a plurality of photoelectric converters 3, which may be parallel to and spaced apart from the spacer 2 and may respectively correspond to the light guide channels 21 one to one, so that light emitted from the object to be imaged reaches the photoelectric converters 3 after passing through the light guide channels 21, wherein the central axes of the light guide channels 21 are staggered with respect to each other.
Because of the included angles between the light guide channels 21, images of the object are captured at different angles, and superimposing them yields a depth image of the object. Triangulation can be used to measure the distance of the object and, further, the feature information of its surface. In this manner, too, the surface of the object must differ from the external environment to serve as the criterion for judgment.
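The triangulation can be sketched for two channels whose axes form known angles with the baseline between them; the geometry below is an illustrative simplification, not the patent's exact construction:

```python
import math

# Triangulation sketch for two staggered channels: channels at the ends of
# a baseline view the object along rays at known angles to the baseline;
# the object's distance is the height of the ray intersection.
def triangulate_distance(baseline, angle_a_deg, angle_b_deg):
    a = math.radians(angle_a_deg)
    b = math.radians(angle_b_deg)
    # The intersection height h satisfies h/tan(a) + h/tan(b) = baseline.
    return baseline / (1.0 / math.tan(a) + 1.0 / math.tan(b))

print(triangulate_distance(2.0, 45.0, 45.0))  # symmetric 45-degree rays -> ~1.0
```

The channel angles are fixed by manufacture, so only the object's position in each sub-image varies, which is what makes the distance recoverable.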
The present application further provides a pixel color filter array member for use with a light guide channel. The pixel color filter array member may include: a substrate; a dielectric layer attached to the substrate; and a plurality of pixel color filters attached to the dielectric layer and forming an array. The dielectric layer is one of the photoelectric converter 3 and the display screen.
The present application also provides a method of forming the above-described pixel color filter array member. The method may include the steps of: arranging a substrate; attaching a dielectric layer to the substrate; transferring the three-color filters onto a carrier plate according to the RGB array by pad printing to form a color filter array; coating a transparent adhesive material on the substrate; and bonding the color filter array on the carrier plate, as a whole, to the dielectric layer, wherein the dielectric layer is one of the photoelectric converter 3 and the display screen.
FIG. 24 shows a flow chart of a prior art photolithography process.
It is known in the art that the fabrication of a photomask is a critical part of the process flow, is the highest cost part of the photolithography process flow, and is one of the bottlenecks that limit the minimum line width. The conventional photolithography process requires a large-area mask when manufacturing a large-area chip array.
The imaging principle of the photosensitive chip requires a Bayer color filter array over the pixels, with three colors (taking RGB channels as an example) that must be applied to the pixels of the photosensitive chip through three photolithography passes. As shown in fig. 24, after a six-step process, a color filter material is applied in the photoresist gaps in a specific pattern; the photoresist is then removed, the process above is run again to apply another color filter material, and finally the process is run once more for the third color filter material.
Because of the three sets of similar procedures, there are limitations on the choice of color filter materials and processing techniques: a later photolithography step must not affect the previously formed color filter regions. For example, thermoplastic (thermally reversible) materials cannot be selected, and a color filter cannot be obtained by a solvent-evaporation process, since the solute deposited by an earlier solvent evaporation would be re-dissolved by a later solvent.
In addition, the utilization rate of the RGB color filter material feedstock is low. In the conventional photolithography process, a layer of color filter material is deposited by evaporation or sputtering over the whole surface of a photosensitive chip on which a patterned photoresist has been laid; the photoresist is then removed by a stripping process, which carries away the color filter material attached to its surface, leaving only the material deposited in the photoresist grooves. The color filter material attached to the photoresist surface is therefore wasted.
In the embodiment of the present invention, since a pad printing method is adopted, the color filter material is first made into a whole plate by evaporation or the like; for the pad printing process, the plate is placed on an elastic carrier plate with a certain elasticity and cut into the required units by laser cutting or the like. When the pad printing head is pressed down, the color filter units attach to it and the spacing between them is enlarged. The pad printing head is then further expanded by inflation, mechanical support or the like, further enlarging the spacing between the filter units; without this step, the spacing would revert to the gaps on the elastic carrier plate when the color filter units are transferred from the pad printing head to the transfer carrier plate.
A specific flow of the pad printing process is shown in fig. 25. In this process, first, the first color filter array 31 is disposed on the transfer carrier plate 30; then, through three sets of pad printing procedures, the three-color filters are transferred by the pad printing head 32 from the transfer carrier plate 30 to the elastic carrier plate 33 in the arrangement required by the RGB array, forming the second color filter array 34, wherein the pad printing head 32 is expanded during the transfer so that the gaps between the color filters of the first color filter array 31 are adapted to the gaps required in the second color filter array 34. A transparent glue material is then spin-coated onto the photosensitive chip, and the RGB color filter array on the carrier plate is bonded as a whole onto the photosensitive chip array; the complicated photolithography process and its auxiliary components and equipment are no longer needed, and the material and forming process of the color filter can be chosen freely.
In this embodiment, the mechanical support or inflation is arranged appropriately to expand the pad printing head, so that the arrangement of the color filter units when transferred onto the transfer carrier plate equals the arrangement of the same-color filter units required on the photosensitive chip.
In addition, by appropriately arranging the regions of the pad printing head that adhere to the color filter material, the whole plate of color filter material can be utilized, the only material loss being that caused by the laser cutting of the blocks.
Through three sets of pad printing procedures, the three-color filters are arranged and transferred onto the carrier plate according to the RGB array; the transparent glue material is then spin-coated onto the photosensitive chip, so that the RGB filter array on the carrier plate is bonded as a whole onto the photosensitive chip array. The reason the filters are not pad-printed directly onto the photosensitive chip is that the glue material needs to be spin-coated in order to be distributed uniformly.
The adhesion of the color filter material satisfies: transfer carrier plate > pad printing head > elastic carrier plate.
It will be readily appreciated by those skilled in the art that similar processes may be used wherever an array of color filter materials or chips is required, for example for LEDs, where a photolithography process would otherwise be needed.
The above description is only a preferred embodiment of the present application and is illustrative of the principles of the technology employed. It will be appreciated by a person skilled in the art that the scope of the invention as referred to in the present application is not limited to the embodiments with a specific combination of the above-mentioned features, but also covers other embodiments with any combination of the above-mentioned features or their equivalents without departing from the inventive concept. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (85)

  1. An imaging assembly, comprising:
    a spacer opaque to light and having at least one light guide channel formed therein; and
    the photoelectric converters are parallel to and spaced apart from the spacers and are respectively arranged in one-to-one correspondence to the light guide channels, so that light emitted by an object to be imaged passes through the light guide channels and then reaches the corresponding photoelectric converters.
  2. An imaging assembly according to claim 1, wherein the spacer forms a plurality of light-conducting channels, the plurality of light-conducting channels forming an array of light-conducting channels in the spacer.
  3. An imaging assembly according to claim 1, wherein the light-conducting channel is dimensioned to be above 800 nm.
  4. An imaging assembly according to claim 1, wherein the light-conducting channel is sized to diffract certain wavelengths of the passing light to split the light such that certain wavelength bands of light reach a predetermined photoelectric converter.
  5. An imaging assembly according to claim 1, wherein the spacer is made of a light absorbing material.
  6. An imaging assembly according to claim 1, wherein the photoelectric converter receives all light from the corresponding light-conducting channel, and the light of the corresponding light-conducting channel illuminates the entire light-receiving face of the corresponding photoelectric converter.
  7. An imaging assembly according to claim 1, wherein the spacer is coated with a light blocking layer.
  8. An imaging assembly according to claim 7, wherein the light blocking layer is a diffuse reflective coating or a light absorbing coating.
  9. A method of making an imaging assembly comprising the steps of:
    forming at least one light-conducting channel in the light-impermeable spacer;
    at least one photoelectric converter is arranged to be parallel to and spaced apart from the spacer and respectively correspond to the light guide channels one to one, so that light emitted by an object to be imaged passes through the light guide channels and then reaches the photoelectric converters.
  10. The method of claim 9, wherein the spacer forms a plurality of light-conducting channels that form an array of light-conducting channels in the spacer.
  11. The method of claim 9, wherein the light-conducting channel is sized to be 800nm or greater.
  12. The method of claim 9, wherein the light guide channel is sized to diffract a particular wavelength of the passing light to split the light such that the light of the particular wavelength band reaches a predetermined photoelectric converter.
  13. The method of claim 9, wherein the spacers are made of a light absorbing material.
  14. The method of claim 9, wherein the opto-electric converter receives all light from the corresponding light-conducting channel, and the light of the corresponding light-conducting channel illuminates the entire light-receiving face of the corresponding opto-electric converter.
  15. The method of claim 9, wherein the spacer is coated with a light blocking layer.
  16. The method of claim 15, wherein the light blocking layer is a diffuse reflective coating or a light absorbing coating.
  17. A touch screen, comprising:
    a spacer opaque to light and having at least one light guide channel formed therein; and
    the photoelectric converters are parallel to and spaced apart from the spacers and are respectively arranged in one-to-one correspondence to the light guide channels, so that light emitted by an object to be imaged passes through the light guide channels and then reaches the photoelectric converters; and
    a light guide member located over the spacer, comprising:
    a light guide body including a total reflection plate;
    a light input portion which is positioned in the light guide body and outputs light at an angle to the total reflection plate; and
    a light output portion, from which the light emitted from the light input portion is output after total reflection within the light guide body.
  18. The touch screen of claim 17, wherein the light guide channels form an array of light guide channels in the spacer.
  19. The touch screen of claim 17, wherein the light guide channel is sized to be 800nm or greater.
  20. The touch screen of claim 17, wherein the light guide channel is sized to diffract a particular wavelength of the passing light to split the light such that a particular wavelength band of light reaches a predetermined photoelectric converter.
  21. The touch screen of claim 17, wherein the spacers are made of a light absorbing material.
  22. The touch screen of claim 17, wherein the photoelectric converters receive all light from the corresponding light-conducting channels, and the light of the corresponding light-conducting channels illuminates the entire light-receiving face of the corresponding photoelectric converters.
  23. The touch screen of claim 17, wherein the spacer is coated with a light blocking layer.
  24. The touch screen of claim 23, wherein the light blocking layer is a diffuse reflective coating or a light absorbing coating.
  25. A touch screen, comprising:
    a spacer opaque to light and having at least one light guide channel formed therein; and
    the photoelectric converters are parallel to and spaced apart from the spacers and are respectively arranged in one-to-one correspondence to the light guide channels, so that light emitted by an object to be imaged passes through the light guide channels and then reaches the photoelectric converters;
    a transparent elastic mechanism located above the spacer; and
    a light source located on a side of the spacer facing the transparent elastic mechanism and emitting light toward the transparent elastic mechanism.
  26. The touch screen of claim 25, wherein the light guide channels form an array of light guide channels in the spacer.
  27. The touch screen of claim 25, wherein the light guide channel is sized to be 800nm or greater.
  28. The touch screen of claim 25, wherein the light guide channel is sized to diffract a particular wavelength of the passing light to split the light such that a particular wavelength band of light reaches a predetermined photoelectric converter.
  29. The touch screen of claim 25, wherein the spacers are made of a light absorbing material.
  30. The touch screen of claim 25, wherein the photoelectric converters receive all light from the corresponding light-conducting channels, and the light of the corresponding light-conducting channels illuminates the entire light-receiving face of the corresponding photoelectric converters.
  31. The touch screen of claim 25, wherein the spacer is coated with a light blocking layer.
  32. The touch screen of claim 31, wherein the light blocking layer is a diffuse reflective coating or a light absorbing coating.
  33. The touch screen of claim 25, wherein the transparent elastic mechanism is a transparent film.
  34. A touch screen, comprising:
    a spacer opaque to light and having at least one light guide channel formed therein; and
    the photoelectric converters are parallel to and spaced apart from the spacers and are respectively arranged in one-to-one correspondence to the light guide channels, so that light emitted by an object to be imaged passes through the light guide channels and then reaches the photoelectric converters;
    the transparent elastic mechanism is positioned above the spacing piece, and an opaque blocking piece is arranged in the transparent elastic mechanism.
  35. The touch screen of claim 34, wherein the light guide channels form an array of light guide channels in the spacer.
  36. The touch screen of claim 34, wherein the light-conducting channel is sized to be 800nm or greater.
  37. The touch screen of claim 34, wherein the light guide channel is sized to diffract a particular wavelength of the passing light to split the light such that a particular wavelength band of light reaches a predetermined photoelectric converter.
  38. The touch screen of claim 34, wherein the spacers are made of a light absorbing material.
  39. The touch screen of claim 34 wherein the photoelectric converters receive all light from the corresponding light-conducting channel and the light of the corresponding light-conducting channel illuminates the entire light-receiving face of the corresponding photoelectric converter.
  40. The touch screen of claim 34, wherein the spacer is coated with a light blocking layer.
  41. The touch screen of claim 40, wherein the light blocking layer is a diffuse reflective coating or a light absorbing coating.
  42. A camera module, comprising:
    the imaging assembly of any of claims 1-8; and
    a display screen,
    wherein the imaging assembly is located below the display screen.
  43. The camera module of claim 42, wherein the display screen is one of an OLED screen, an LCD screen, and an LED screen.
  44. The camera module of claim 43, wherein the substrate in the OLED screen forms a spacer.
  45. The camera module of claim 43, wherein the cathode layer in the OLED screen forms a spacer.
  46. The camera module of claim 43, wherein the anode layer in the OLED screen forms a spacer.
  47. The camera module of claim 42, wherein an optical element for collecting light is disposed above each light-conducting channel in the spacer of the imaging assembly.
  48. The camera module of claim 47, wherein the optical element is a convex lens.
  49. The camera module of claim 43, wherein a superlens is disposed over each photoelectric converter of the imaging assembly to focus light.
  50. The camera module of claim 49, wherein an optical path turning element is disposed above the light guide channel.
  51. The camera module of claim 50, wherein the optical path-deflecting element comprises a MEMS device and a mirror.
  52. The camera module of claim 43, wherein an actuator is disposed on the substrate of the OLED screen, the actuator adjusting the distance between the photoelectric converter of the imaging assembly and the spacer.
  53. The camera module of claim 43, wherein the color filter in the LCD screen is integrated as the color filter of the imaging assembly.
  54. The camera module of claim 42, wherein the aperture of the light guide channel is set according to a specific wavelength.
  55. An intelligent terminal, characterized in that it comprises a camera module according to any one of claims 42 to 54.
  56. A method of distance measurement, comprising:
    forming a plurality of light guide channels in a light-impermeable spacer;
    arranging a plurality of photoelectric converters parallel to and spaced apart from the spacer, in one-to-one correspondence with the light guide channels, so that light emitted by an object to be imaged passes through the light guide channels and then reaches the photoelectric converters;
    obtaining a plurality of images formed of the object to be imaged through the plurality of light guide channels according to the electric signals output by the photoelectric converters; and
    calculating the distance to the object to be imaged according to the degree of repetition among the plurality of images.
  57. The method of distance measurement according to claim 56, wherein the degree of repetition is the repeated pixel area of the whole or a certain part of the object to be imaged.
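The ranging principle of claims 56 and 57 resembles multi-baseline stereo: the smaller the overlap between the sub-images formed through neighboring light guide channels, the closer the object. A minimal sketch under assumed geometry — the function names, the similar-triangles model, and all numbers below are illustrative assumptions, not the patented implementation:

```python
# Illustrative sketch (not the patented method): treat the "degree of
# repetition" between images formed through two adjacent light guide
# channels as a stereo disparity, then recover distance with a
# pinhole / similar-triangles model. All names and numbers are assumptions.

def best_shift(row_a, row_b, max_shift):
    """Shift (in pixels) that best aligns two 1-D image rows,
    scored by mean absolute difference over the overlapping region."""
    best_s, best_cost = 0, float("inf")
    n = len(row_a)
    for s in range(max_shift + 1):
        overlap = n - s
        cost = sum(abs(row_a[i + s] - row_b[i]) for i in range(overlap)) / max(overlap, 1)
        if cost < best_cost:
            best_s, best_cost = s, cost
    return best_s

def distance_from_shift(shift_px, baseline_m, gap_m, pixel_pitch_m):
    """Similar triangles: distance = baseline * channel-to-sensor gap / disparity."""
    disparity_m = shift_px * pixel_pitch_m
    if disparity_m == 0:
        return float("inf")  # no measurable shift: object effectively at infinity
    return baseline_m * gap_m / disparity_m

# Two rows of the same edge pattern, displaced by 3 pixels between channels:
shift = best_shift([0, 0, 0, 9, 9, 9, 0, 0, 0, 0],
                   [9, 9, 9, 0, 0, 0, 0, 0, 0, 0], max_shift=5)
# With an assumed 1 mm channel baseline, 0.5 mm channel-to-sensor gap, 2 µm pixels:
d = distance_from_shift(shift, baseline_m=1e-3, gap_m=0.5e-3, pixel_pitch_m=2e-6)
```

In this toy example the recovered shift is 3 pixels; in practice claim 57's "repeated pixel area" would be computed over 2-D regions rather than single rows.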
  58. A light field camera having a microlens array, further comprising:
    a spacer opaque to light and having at least one light guide channel formed therein; and
    at least one photoelectric converter parallel to and spaced apart from the spacer, in one-to-one correspondence with the at least one light guide channel,
    wherein the microlens array is located between the spacer and the photoelectric converter, and light emitted by an object to be imaged passes through the light guide channel and the microlens array and then reaches the photoelectric converter.
  59. The light field camera of claim 58 wherein the spacer forms a plurality of light guide channels that form an array of light guide channels in the spacer.
  60. The light field camera of claim 58 wherein the light guide channel is sized to be 800 nm or greater.
  61. The light field camera of claim 58 wherein the light guide channel is sized to diffract certain wavelengths of the passing light to split the light, so that light of certain wavelength bands reaches a predetermined photoelectric converter.
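The wavelength-selective sizing in claim 61 can be illustrated with the standard first-order diffraction relation sin θ = λ/d: when the channel aperture d is close to the wavelength λ (compare the 800 nm figure of claim 60), different bands exit at noticeably different angles and can therefore land on different photoelectric converters. This is a generic physics sketch, not the patented design; the function name and numbers are illustrative assumptions:

```python
# Generic diffraction sketch (not the patent's design): a channel of
# aperture d deflects wavelength λ to a first-order angle θ with
# sin θ = λ / d, so wavelength bands separate angularly behind the channel.
import math

def first_order_angle_deg(wavelength_nm, aperture_nm):
    """First-order diffraction angle in degrees, or None if λ/d >= 1
    (the order does not propagate)."""
    ratio = wavelength_nm / aperture_nm
    if ratio >= 1.0:
        return None  # aperture too small for this wavelength's first order
    return math.degrees(math.asin(ratio))

# An 800 nm channel separates visible bands by tens of degrees:
for wl in (450, 550, 650):  # blue, green, red (nm)
    angle = first_order_angle_deg(wl, 800)
```

With an 800 nm aperture, blue (450 nm) emerges near 34° and red (650 nm) near 54°, which is the angular splitting the claim exploits.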
  62. The light field camera of claim 58 wherein the spacer is made of a light absorbing material.
  63. The light field camera of claim 58 wherein each photoelectric converter receives all the light from its corresponding light guide channel, and that light illuminates the entire light-receiving face of the photoelectric converter.
  64. The light field camera of claim 58 wherein the spacer is coated with a light blocking layer.
  65. The light field camera of claim 64 wherein the light blocking layer is a diffuse reflective coating or a light absorbing coating.
  66. A light field camera having a main lens, further comprising:
    a spacer opaque to light and having at least one light guide channel formed therein; and
    at least one photoelectric converter parallel to and spaced apart from the spacer, in one-to-one correspondence with the at least one light guide channel,
    wherein the spacer is located between the main lens and the photoelectric converter, and light emitted by an object to be imaged passes through the main lens and the light guide channel and then reaches the photoelectric converter.
  67. The light field camera of claim 66 wherein the spacer forms a plurality of light guide channels that form an array of light guide channels in the spacer.
  68. The light field camera of claim 66 wherein the light guide channel is sized to be 800 nm or greater.
  69. The light field camera of claim 66 wherein the light guide channel is sized to diffract certain wavelengths of the passing light to split the light, so that light of certain wavelength bands reaches a predetermined photoelectric converter.
  70. The light field camera of claim 66 wherein the spacer is made of a light absorbing material.
  71. The light field camera of claim 66 wherein each photoelectric converter receives all the light from its corresponding light guide channel, and that light illuminates the entire light-receiving face of the photoelectric converter.
  72. The light field camera of claim 66 wherein the spacer is coated with a light blocking layer.
  73. The light field camera of claim 72 wherein the light blocking layer is a diffuse reflective coating or a light absorbing coating.
  74. A multi-view depth camera, comprising:
    a spacer opaque to light and having a plurality of light guide channels formed therein; and
    a plurality of photoelectric converters parallel to and spaced apart from the spacer and in one-to-one correspondence with the light guide channels, so that light emitted by an object to be imaged passes through the light guide channels and then reaches the photoelectric converters;
    wherein the central axes of the light guide channels are staggered with each other.
  75. The multi-view depth camera of claim 74, wherein the plurality of light guide channels form an array of light guide channels in the spacer.
  76. The multi-view depth camera of claim 74, wherein the light guide channel is sized to be 800 nm or greater.
  77. The multi-view depth camera of claim 74, wherein the light guide channel is sized to diffract a particular wavelength of the light passing therethrough to split the light, so that light of a particular wavelength band reaches a predetermined photoelectric converter.
  78. The multi-view depth camera of claim 74, wherein the spacer is made of a light absorbing material.
  79. The multi-view depth camera of claim 74, wherein each photoelectric converter receives all the light from its corresponding light guide channel, and that light illuminates the entire light-receiving face of the photoelectric converter.
  80. The multi-view depth camera of claim 74, wherein the spacer is coated with a light blocking layer.
  81. The multi-view depth camera of claim 80, wherein the light blocking layer is a diffuse reflective coating or a light absorbing coating.
  82. A pixel color filter array member, comprising:
    a substrate;
    a dielectric layer attached to the substrate; and
    a plurality of pixel color filters attached to the dielectric layer and forming an array.
  83. The pixel color filter array member of claim 82, wherein the dielectric layer is one of a photoelectric converter and a display screen.
  84. A method of forming a pixel color filter array element, comprising:
    arranging a substrate;
    attaching a dielectric layer to the substrate;
    arranging a first color filter array on a first carrier plate;
    transferring the first color filter array from the first carrier plate to a second carrier plate by a transfer head to form a second color filter array, wherein the transfer head expands during the transfer process such that a gap between the color filters is adapted to a gap between color filters in the second color filter array;
    coating a transparent adhesive material on the substrate; and
    and integrally bonding the second color filter array on the second carrier plate to the dielectric layer.
  85. The method of forming a pixel color filter array member of claim 84, wherein the dielectric layer is one of a photoelectric converter and a display screen.
CN201880095111.1A 2018-08-24 2018-08-24 Imaging assembly, touch screen, camera module, intelligent terminal, camera and distance measurement method Active CN112335049B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/102244 WO2020037650A1 (en) 2018-08-24 2018-08-24 Imaging assembly, touch screen, camera module, smart terminal, cameras, and distance measuring method

Publications (2)

Publication Number Publication Date
CN112335049A true CN112335049A (en) 2021-02-05
CN112335049B CN112335049B (en) 2024-03-22

Family

ID=69592183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880095111.1A Active CN112335049B (en) 2018-08-24 2018-08-24 Imaging assembly, touch screen, camera module, intelligent terminal, camera and distance measurement method

Country Status (2)

Country Link
CN (1) CN112335049B (en)
WO (1) WO2020037650A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115914804A (en) * 2021-09-29 2023-04-04 宁波舜宇光电信息有限公司 Imaging assembly, manufacturing method thereof, camera module and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014039096A (en) * 2012-08-13 2014-02-27 Fujifilm Corp Multi-eye camera photographing system and control method of the same
CN104182727A (en) * 2014-05-16 2014-12-03 深圳印象认知技术有限公司 Ultra-thin fingerprint and palm print collection device, and fingerprint and palm print collection method
CN105760808A (en) * 2014-11-14 2016-07-13 深圳印象认知技术有限公司 Imaging plate, image collector and terminal
CN107515435A (en) * 2017-09-11 2017-12-26 京东方科技集团股份有限公司 Display panel and display device
WO2018110570A1 (en) * 2016-12-13 2018-06-21 Sony Semiconductor Solutions Corporation Imaging element, manufacturing method of imaging element, metal thin film filter, and electronic device
CN108369135A (en) * 2015-12-03 2018-08-03 辛纳普蒂克斯公司 Optical sensor for being integrated in display

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07114018A (en) * 1993-10-15 1995-05-02 Rohm Co Ltd Color liquid crystal display device
TWI243263B (en) * 2000-10-12 2005-11-11 Sanyo Electric Co Color filter formation method, luminous element layer formation method and manufacture method of color display device derived therefrom
TWI425629B (en) * 2009-03-30 2014-02-01 Sony Corp Solid state image pickup device, method of manufacturing the same, image pickup device, and electronic device
JPWO2013136820A1 (en) * 2012-03-16 2015-08-03 株式会社ニコン Imaging device and imaging apparatus

Also Published As

Publication number Publication date
WO2020037650A1 (en) 2020-02-27
CN112335049B (en) 2024-03-22

Similar Documents

Publication Publication Date Title
US10653313B2 (en) Systems and methods for lensed and lensless optical sensing of binary scenes
KR101721455B1 (en) Multi-spectral imaging
US7106526B2 (en) Thin imaging apparatus, a thin camera, and an imaging method
JP6260006B2 (en) IMAGING DEVICE, IMAGING SYSTEM USING THE SAME, ELECTRONIC MIRROR SYSTEM, AND RANGING DEVICE
EP3129813B1 (en) Low-power image change detector
EP2380345B1 (en) Improving the depth of field in an imaging system
US20070081200A1 (en) Lensless imaging with controllable apertures
JP2009225064A (en) Image input device, authentication device, and electronic apparatus having them mounted thereon
JPWO2005081020A1 (en) Optics and beam splitters
CN111552066B (en) Zoom assembly, lens module and electronic equipment
GB2488519A (en) Multi-channel image sensor incorporating lenslet array and overlapping fields of view.
CN108513047B (en) Image sensor and image pickup apparatus
US7405761B2 (en) Thin camera having sub-pixel resolution
CN111866387A (en) Depth image imaging system and method
CN111164611A (en) Under-screen biological feature recognition device and electronic equipment
WO2010119447A1 (en) Imaging system and method
CN112335049B (en) Imaging assembly, touch screen, camera module, intelligent terminal, camera and distance measurement method
CN212160750U (en) Sensor module for fingerprint authentication and fingerprint authentication device
TW202107065A (en) Imaging layer, imaging apparatus, electronic device, zone plate structure and photosensitive image element
KR20220073835A (en) Method and electronic device for authenticating image acquisition optical structures and biometric features
Bimber et al. Toward a flexible, scalable, and transparent thin-film camera
US20240125591A1 (en) Wide field-of-view metasurface optics, sensors, cameras and projectors
CN112055134B (en) Image acquisition device and electronic equipment
EP4213116A1 (en) Compact optical sensor
CN117528239A (en) Image pickup module, focusing method and device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant