CN107483828B - Zooming method, zooming device and electronic equipment - Google Patents


Info

Publication number
CN107483828B
CN107483828B (application CN201710818041.5A)
Authority
CN
China
Prior art keywords
imaging
electromagnetic wave
regions
wave signal
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710818041.5A
Other languages
Chinese (zh)
Other versions
CN107483828A (en)
Inventor
Du Lin (杜琳)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201710818041.5A priority Critical patent/CN107483828B/en
Publication of CN107483828A publication Critical patent/CN107483828A/en
Application granted granted Critical
Publication of CN107483828B publication Critical patent/CN107483828B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present disclosure relates to a zooming method, a zooming apparatus, and an electronic device. The zooming method comprises the following steps: acquiring a reflected electromagnetic wave signal, formed when an electromagnetic wave signal is reflected by an imaging sub-region in an image sensor, and determining from it the light information of the incident light so as to form a preview image; determining a zoom parameter for the preview image and the actual imaging region corresponding to that parameter; and adjusting the distribution density of the imaging sub-regions so that their density within the actual imaging region is greater than in other regions, thereby improving image quality after zooming. Each imaging sub-region deforms under the irradiation of incident light, and the reflected electromagnetic wave signal changes with the deformation, so the light information of the incident light can be determined from the reflected signal.

Description

Zooming method, zooming device and electronic equipment
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular, to a zooming method, a zooming apparatus, and an electronic device.
Background
When taking a picture, a zoom function is usually used to obtain an image corresponding to a desired zoom factor. In a related-art image pickup apparatus such as a camera, the image sensor receives incident light through light-sensing units and converts the light information into an electronic signal for storage. With such an image sensor, zooming is performed by cropping the central part of the image and resampling the cropped region to obtain an image at the requested zoom factor.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a zooming method, a zooming apparatus, and an electronic device.
According to a first aspect of the present disclosure, a zooming method is provided, the method comprising:
acquiring a reflected electromagnetic wave signal, wherein the reflected electromagnetic wave signal is formed when an electromagnetic wave signal is reflected by an imaging sub-region in an image sensor; the image sensor comprises a plurality of imaging sub-regions, each of which deforms under the irradiation of incident light;
determining light information of the incident light according to the reflected electromagnetic wave signal to form a preview image;
acquiring a zoom parameter for the preview image to determine an actual imaging region corresponding to the zoom parameter; and
adjusting the positions of the imaging sub-regions so that the distribution density of the imaging sub-regions in the actual imaging region is greater than in other regions.
Optionally, determining the light information of the incident light according to the reflected electromagnetic wave signal comprises:
demodulating the reflected electromagnetic wave signal to obtain a first signal; and
recovering the light information of the incident light according to the first signal.
Optionally, each imaging sub-region comprises:
a photosensitive layer, which deforms when irradiated by incident light; and
a reflective layer, which returns the corresponding reflected electromagnetic wave signal and deforms together with the photosensitive layer.
Optionally, determining the light information of the incident light according to the reflected electromagnetic wave signal comprises:
sending the reflected electromagnetic wave signal to a monitoring model, wherein the training samples of the monitoring model comprise data pairs between previously obtained reflected electromagnetic wave signals and deformation parameters of the photosensitive layer;
receiving the deformation parameters of the photosensitive layer output by the monitoring model; and
determining the light information of the incident light according to the deformation parameters.
Optionally, the deformation properties of at least two of the imaging sub-regions are different;
and/or the electromagnetic wave signal reflection characteristics of at least two of the imaging sub-regions are different.
Optionally, adjusting the positions of the imaging sub-regions so that the distribution density of the imaging sub-regions in the actual imaging region is greater than in other regions comprises:
acquiring constraint information matched with the distribution density of the imaging sub-regions in the actual imaging region; and
adjusting the distribution density of the imaging sub-regions according to the constraint information.
Optionally, acquiring the constraint information for the imaging sub-regions comprises at least one of:
acquiring constraint information matched with the distribution density of the imaging sub-regions in the actual imaging region using a customized algorithm;
acquiring constraint information matched with the distribution density of the imaging sub-regions in the actual imaging region using an evolutionary computation method; and
acquiring constraint information matched with the distribution density of the imaging sub-regions in the actual imaging region using gradient descent and a penalty function.
Optionally, the constraint information includes a positional relationship between any two adjacent imaging sub-regions.
Optionally, adjusting the positions of the imaging sub-regions so that the distribution density of the imaging sub-regions in the actual imaging region is greater than in other regions comprises:
applying an external field to at least one of the imaging sub-regions; and
using the external field to exert a force on the imaging sub-region and adjust its position, so that the distribution density of the imaging sub-regions in the actual imaging region is greater than in other regions.
Optionally, the external field includes: at least one of a magnetic field, an electric field, and an optical field.
According to a second aspect of the present disclosure, there is provided a zooming apparatus comprising:
an acquisition unit, configured to acquire a reflected electromagnetic wave signal formed when an electromagnetic wave signal is reflected by an imaging sub-region in an image sensor; the image sensor comprises a plurality of imaging sub-regions, each of which deforms under the irradiation of incident light;
a processing unit, configured to determine light information of the incident light according to the reflected electromagnetic wave signal so as to form a preview image;
a determination unit, configured to acquire a zoom parameter for the preview image to determine an actual imaging region corresponding to the zoom parameter; and
an execution unit, configured to adjust the positions of the imaging sub-regions so that the distribution density of the imaging sub-regions in the actual imaging region is greater than in other regions.
Optionally, the processing unit comprises:
a first processing subunit, configured to demodulate the reflected electromagnetic wave signal to obtain a first signal; and
a second processing subunit, configured to recover the light information of the incident light according to the first signal.
Optionally, each imaging sub-region comprises:
a photosensitive layer, which deforms when irradiated by incident light; and
a reflective layer, which returns the corresponding reflected electromagnetic wave signal and deforms together with the photosensitive layer.
Optionally, the processing unit comprises:
a sending subunit, configured to send the reflected electromagnetic wave signal to a monitoring model, wherein the training samples of the monitoring model comprise data pairs between previously obtained reflected electromagnetic wave signals and deformation parameters of the photosensitive layer;
a receiving subunit, configured to receive the deformation parameters of the photosensitive layer output by the monitoring model; and
a third processing subunit, configured to determine the light information of the incident light according to the deformation parameters.
Optionally, the deformation properties of at least two of the imaging sub-regions are different;
and/or the electromagnetic wave signal reflection characteristics of at least two of the imaging sub-regions are different.
Optionally, the execution unit comprises:
a first execution subunit, configured to acquire constraint information matched with the distribution density of the imaging sub-regions in the actual imaging region; and
a second execution subunit, configured to adjust the distribution density of the imaging sub-regions according to the constraint information.
Optionally, the first execution subunit comprises at least one of:
a first execution module, configured to acquire constraint information matched with the distribution density of the imaging sub-regions in the actual imaging region using a customized algorithm;
a second execution module, configured to acquire constraint information matched with the distribution density of the imaging sub-regions in the actual imaging region using an evolutionary computation method; and
a third execution module, configured to acquire constraint information matched with the distribution density of the imaging sub-regions in the actual imaging region using gradient descent and a penalty function.
Optionally, the constraint information includes a positional relationship between any two adjacent imaging sub-regions.
Optionally, the execution unit comprises:
a third execution subunit, configured to apply an external field to at least one of the imaging sub-regions; and
a fourth execution subunit, configured to use the external field to exert a force on the imaging sub-region and adjust its position, so that the distribution density of the imaging sub-regions in the actual imaging region is greater than in other regions.
Optionally, the external field includes: at least one of a magnetic field, an electric field, and an optical field.
According to a third aspect of the present disclosure, an electronic device is provided, the electronic device comprising:
a processor configured to implement the zooming method described above.
According to a fourth aspect of the present disclosure, a computer-readable storage medium is provided, having computer instructions stored thereon which, when executed by a processor, implement the steps of the zooming method described above.
The technical solutions provided by the embodiments of the present disclosure can have the following beneficial effects:
According to the above embodiments, the light information of the incident light is determined from the reflected electromagnetic wave signal formed when the electromagnetic wave signal is reflected by the imaging sub-regions in the image sensor, and a preview image is formed from that light information. Each imaging sub-region deforms under the irradiation of incident light, and the reflected electromagnetic wave signal changes with the deformation, which makes the light information of the incident light easy to determine. In addition, the distribution density of the imaging sub-regions in the actual imaging region after zooming can be adjusted to improve image quality after zooming.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1a is a flow chart of a zoom method of an exemplary embodiment of the present disclosure;
FIG. 1b is a schematic diagram of the operation of an imaging sub-region of an exemplary embodiment of the present disclosure;
FIG. 2a is a flow chart of a zoom method of another exemplary embodiment of the present disclosure;
FIG. 2b is a preview image before zooming according to an exemplary embodiment of the present disclosure;
FIG. 2c is the image of FIG. 2b after zooming;
FIG. 2d is a density distribution of the imaging sub-regions corresponding to the preview image shown in FIG. 2b;
FIG. 2e is a density distribution of the imaging sub-regions corresponding to the image shown in FIG. 2c;
FIG. 2f is a schematic structural diagram of an imaging sub-region of an exemplary embodiment of the present disclosure;
FIG. 3a is a flow chart of a zoom method of yet another exemplary embodiment of the present disclosure;
FIG. 3b is a schematic diagram of a deformation mode of a reflected electromagnetic wave signal according to an exemplary embodiment of the present disclosure;
FIG. 3c is a deformation mode diagram of a reflected electromagnetic wave signal according to another exemplary embodiment of the present disclosure;
FIG. 3d is a deformation mode diagram of a reflected electromagnetic wave signal according to yet another exemplary embodiment of the present disclosure;
FIG. 3e is a deformation mode diagram of a reflected electromagnetic wave signal according to yet another exemplary embodiment of the present disclosure;
FIG. 4 is a schematic view of a zoom apparatus according to an exemplary embodiment of the disclosure;
FIG. 5 is a schematic diagram of a processing unit in an exemplary embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a processing unit according to another exemplary embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an execution unit according to an exemplary embodiment of the disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination", depending on the context.
Fig. 1a is a flowchart of a zooming method according to an exemplary embodiment of the present disclosure. To ensure picture quality after zooming when shooting with an image pickup apparatus such as a camera, the zooming method shown in fig. 1a is proposed. The method may comprise the following steps:
in step 101, a reflected electromagnetic wave signal is acquired.
The image sensor may comprise a plurality of imaging sub-regions; each imaging sub-region deforms under the irradiation of incident light, and the reflected electromagnetic wave signal is formed when the electromagnetic wave signal is reflected by the imaging sub-regions in the image sensor. Specifically, as shown in fig. 1b, the imaging sub-region D may comprise a photosensitive layer D1 and a reflective layer D2. The photosensitive layer D1 receives the incident light H1 and deforms in correspondence with the light information of H1. The reflective layer D2 deforms together with the photosensitive layer D1 and returns a reflected electromagnetic wave signal H2 corresponding to the incident light H1. A receiver I receives the reflected electromagnetic wave signal H2 for processing.
It should be noted that the deformation properties of at least two imaging sub-regions D are different, and/or the electromagnetic wave reflection characteristics of at least two imaging sub-regions D are different, so that the electromagnetic wave signals reflected by different imaging sub-regions D can be located and distinguished. Here, "and/or" covers three cases: in one case, the deformation properties of at least two imaging sub-regions D are different while their electromagnetic wave reflection characteristics are the same; in another case, the electromagnetic wave reflection characteristics of at least two imaging sub-regions D are different while their deformation properties are the same; in a further case, both the deformation properties and the electromagnetic wave reflection characteristics of at least two imaging sub-regions D are different. In all three cases, the electromagnetic wave signals reflected by the imaging sub-regions D can be located and distinguished.
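As a hedged illustration of how distinct reflection characteristics allow sub-regions to be located and distinguished (the specific mechanism below is an assumption, not the patent's stated implementation): if each imaging sub-region's reflection is modeled as carrying its own carrier frequency, an FFT separates the contributions. The sample rate FS, the carrier table CARRIERS, and the sub-region names are all hypothetical:

```python
import numpy as np

FS = 1024                  # samples per second (assumed)
N = 1024                   # samples per acquisition window
CARRIERS = {"D1": 100, "D2": 150, "D3": 220}   # Hz, hypothetical signatures

def reflected_signal(amplitudes):
    """Sum of per-sub-region reflections; amplitude encodes deformation."""
    t = np.arange(N) / FS
    return sum(a * np.sin(2 * np.pi * CARRIERS[k] * t)
               for k, a in amplitudes.items())

def separate(signal):
    """Locate each sub-region's contribution at its carrier frequency."""
    spectrum = np.abs(np.fft.rfft(signal)) * 2 / N   # sine of amplitude A -> A
    freqs = np.fft.rfftfreq(N, d=1 / FS)
    return {k: spectrum[np.argmin(np.abs(freqs - f))]
            for k, f in CARRIERS.items()}

mixed = reflected_signal({"D1": 0.5, "D2": 1.0, "D3": 0.25})
print(separate(mixed))  # per-sub-region amplitudes recovered from the mix
```

Because the carriers fall on exact FFT bins here, the per-sub-region amplitudes are recovered without spectral leakage; a real sensor would need carriers (or other signatures) chosen with similar care.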
In step 102, light information of the incident light is determined according to the reflected electromagnetic wave signal to form a preview image.
The light information may include at least one of the intensity, color, and polarization direction of the incident light. In one embodiment, the image sensor is associated with a monitoring model trained on reflected electromagnetic wave signals and the deformation parameters of the photosensitive layer corresponding to those signals. To obtain the light information of the incident light, the reflected electromagnetic wave signal may be sent to the monitoring model, whose training samples comprise data pairs between previously obtained reflected electromagnetic wave signals and deformation parameters of the photosensitive layer; the light information of the incident light is then determined from the deformation parameters the model returns.
The reflection parameter and the deformation parameter both vary with the same incident light, so they form mutually corresponding, synchronized data. Because photosensitive layers made of different photo-deformable materials deform differently under the same incident light, each photo-deformable material has its own photo-deformation function, from which the light information of the incident light can be calculated.
In another embodiment, the reflected electromagnetic wave signal may be demodulated to obtain a first signal, and then the light information of the incident light may be recovered according to the first signal.
In step 103, a zoom parameter for the preview image is acquired to determine an actual imaging area corresponding to the zoom parameter.
In the embodiments described above, the zoom parameter may include a zoom magnification. When the image sensor receives a current zoom factor w designated by the user, and the original imaging region is R1, the actual imaging region R2 on the passive photosensitive sensor is calculated such that its length and width are each 1/w times those of R1.
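The relationship above can be sketched as follows. The centered-crop assumption and the function name actual_imaging_area are illustrative, not taken from the patent:

```python
def actual_imaging_area(r1_width, r1_height, w, center=None):
    """Given zoom factor w, the actual imaging region R2 has 1/w the width
    and height of the original region R1 (assumed centered on R1 unless a
    center point is given)."""
    if w < 1:
        raise ValueError("zoom factor must be >= 1")
    cx, cy = center if center else (r1_width / 2, r1_height / 2)
    rw, rh = r1_width / w, r1_height / w
    return (cx - rw / 2, cy - rh / 2, rw, rh)  # (x, y, width, height)

# A 4000x3000 sensor at zoom factor 2 yields a centered 2000x1500 region.
print(actual_imaging_area(4000, 3000, 2))  # → (1000.0, 750.0, 2000.0, 1500.0)
```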
In step 104, the positions of the imaging sub-regions are adjusted so that the distribution density of the imaging sub-regions of the actual imaging region is greater than that of other regions.
In this embodiment, to adjust the distribution density of the imaging sub-regions in the actual imaging region, constraint information for the imaging sub-regions is first acquired; the constraint information may include the positional relationship between any two adjacent imaging sub-regions. The constraint information may be obtained using at least one of a customized algorithm, an evolutionary computation method, and gradient descent with a penalty function; other algorithms may also be used, and the present disclosure is not limited in this respect. The positions of the imaging sub-regions can then be adjusted according to the constraint information, so that the distribution density of the imaging sub-regions in the actual imaging region is greater than in other regions.
In the above embodiment, adjusting the distribution density of the imaging sub-regions may include: applying an external field to at least one of the imaging sub-regions, and using the external field to exert a force on the imaging sub-region so as to move it toward the actual imaging region in a direction perpendicular to the incident light. The external field may include at least one of a magnetic field, an electric field, and an optical field, and the present disclosure is not limited in this respect.
In the above embodiment, the imaging sub-region can deform under the irradiation of incident light, and the light information of the incident light can be determined conveniently by monitoring the change of the reflected electromagnetic wave signal, so that the arrangement of the photosensitive structure of the image sensor is simplified.
Fig. 2a is a flowchart of a zooming method according to another exemplary embodiment of the present disclosure. As shown in fig. 2a, the zooming method may include the steps of:
in step 201, a reflected electromagnetic wave signal is acquired.
The image sensor may include a plurality of imaging sub-regions, the imaging sub-regions may deform under the irradiation of incident light, and the reflected electromagnetic wave signal is formed by the reflection of the electromagnetic wave signal by the imaging sub-regions in the image sensor. In particular, the imaging sub-region may include a photosensitive layer and a reflective layer. The photosensitive layer can be used for receiving incident light and generating deformation corresponding to the light information of the incident light. The reflecting layer can generate deformation corresponding to the photosensitive layer and reflect the reflected electromagnetic wave signal corresponding to the incident light. The receiver receives the reflected electromagnetic wave signal for processing.
The image sensor trains the monitoring model on reflected electromagnetic wave signals and the corresponding deformation parameters of the photosensitive layer. Specifically, when incident light irradiates the imaging sub-regions, the reflected electromagnetic wave signal returned by each imaging sub-region and the photosensitive-layer deformation parameter corresponding to it are collected to form a data sample. In this way, a large number of data samples can be recorded for incident light of different polarization directions, intensities, colors, and so on. From these data samples, a regression problem is constructed and the relationship underlying the samples is learned, yielding a simple rule that maps reflected electromagnetic wave signals to the deformation parameters of the photosensitive layer.
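A minimal sketch of such a monitoring model, assuming (purely for illustration) a linear relationship between reflected-signal features and the deformation parameter; the features, coefficients, and noise model are all synthetic and not from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data pairs: reflected-signal features (e.g. spectral amplitude
# and phase shift) paired with the measured photosensitive-layer
# deformation parameter — the "data pair" training samples of the text.
X = rng.uniform(0, 1, size=(500, 2))
true_w, true_b = np.array([2.0, -0.5]), 0.1            # assumed ground truth
y = X @ true_w + true_b + rng.normal(0, 0.01, 500)     # deformation parameter

# Fit the "monitoring model" by least squares (bias column appended).
A = np.hstack([X, np.ones((500, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_deformation(features):
    """Monitoring model: reflected-signal features -> deformation parameter."""
    return float(np.dot(features, coef[:2]) + coef[2])

print(predict_deformation([0.5, 0.5]))  # ≈ 2.0*0.5 - 0.5*0.5 + 0.1 = 0.85
```

A deployed sensor would learn a richer mapping from real calibration data, but the structure — signal features in, deformation parameter out — is the same.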
In step 202, the reflected electromagnetic wave signal is sent to a monitoring model, and a training sample of the monitoring model includes a data pair between the pre-obtained reflected electromagnetic wave signal and a deformation parameter of the photosensitive layer.
In step 203, light information of the incident light is determined according to the received deformation parameters to form a preview image.
In the above embodiments, the image sensor includes a monitoring model trained according to the reflected electromagnetic wave signal and the deformation parameter of the photosensitive layer corresponding thereto. In order to obtain the ray information of the incident ray, the reflected electromagnetic wave signal may be sent to the monitoring model, and the monitoring model outputs the deformation parameter of the photosensitive layer corresponding to the reflected electromagnetic wave signal according to the reflected electromagnetic wave signal. And determining the light ray information of the incident light ray according to the received deformation parameters. Wherein the light information may include: at least one of the intensity, color, and polarization direction of the incident light.
The deformation of the reflective layer and that of the photosensitive layer are driven by the same incident light, so their deformation parameters form mutually corresponding, synchronized data. Because photosensitive layers made of different photo-deformable materials deform differently under the same incident light, each photo-deformable material has its own photo-deformation function, from which the light information of the incident light can be calculated.
In step 204, the zoom parameters for the preview image are acquired to determine the actual imaging area corresponding to the zoom parameters.
In step 205, the positions of the imaging sub-regions are adjusted so that the distribution density of the imaging sub-regions of the actual imaging region is greater than that of the other regions.
As shown in fig. 2b, in the preview image R1 before zooming, the portrait region R2 is to be enlarged by a factor of 2, and R2 is taken as the actual imaging region; as shown in fig. 2c, after zooming the length and width of R2 are each 1/2 of those of R1. During zooming, the distribution density of the imaging sub-regions in the actual imaging region needs to be adjusted. For example, the distribution density of the imaging sub-regions for the preview image before zooming is as shown in fig. 2d (corresponding to Table 1); the imaging sub-regions are controlled to move toward the actual imaging region in a direction perpendicular to the incident light, so that the distribution density after zooming is as shown in fig. 2e (corresponding to Table 2). To increase the distribution density of the imaging sub-regions in region R2 after zooming, the adjustment may include: applying an external field as shown in fig. 2f to at least one of the imaging sub-regions, and using the external field to exert a force on the imaging sub-region so as to move it toward the actual imaging region in a direction perpendicular to the incident light. The external field may include at least one of a magnetic field, an electric field, and an optical field, and the present disclosure is not limited in this respect.
Table 1: distribution density of the imaging sub-regions without zooming (corresponding to fig. 2d)
Table 2: distribution density of the imaging sub-regions at zoom factor w = 2 (corresponding to fig. 2e)
In the above embodiment, to adjust the distribution density of the imaging sub-regions in the actual imaging region, constraint information for the imaging sub-regions is acquired; the constraint information may include the positional relationship between any two adjacent imaging sub-regions. The constraint information may be obtained using at least one of a customized algorithm, an evolutionary computation method, and gradient descent with a penalty function; other algorithms may also be used, and the present disclosure is not limited in this respect. The distribution density of the imaging sub-regions within the actual imaging region can then be adjusted according to this constraint information.
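One way the "gradient descent and penalty function" option might look, sketched in one dimension with hypothetical positions, learning rate, and penalty weight: an attraction term pulls sub-regions toward the actual imaging region, while a quadratic penalty encodes the positional constraint that adjacent sub-regions stay at least min_gap apart:

```python
import numpy as np

def adjust_positions(pos, region, min_gap=0.02, lr=0.01, weight=10.0, steps=500):
    """Gradient descent with a penalty function: minimize attraction energy
    0.5*(p - target)^2 toward the region center plus weight * sum of
    min(gap - min_gap, 0)^2 over adjacent pairs (the spacing constraint)."""
    target = (region[0] + region[1]) / 2
    pos = np.asarray(pos, dtype=float).copy()
    for _ in range(steps):
        grad = pos - target                      # gradient of attraction term
        order = np.argsort(pos)                  # adjacency in sorted order
        gaps = np.diff(pos[order])
        viol = np.minimum(gaps - min_gap, 0.0)   # negative where too close
        pen_sorted = np.zeros_like(pos)
        pen_sorted[:-1] -= 2 * viol              # penalty gradient, left point
        pen_sorted[1:] += 2 * viol               # penalty gradient, right point
        pen = np.empty_like(pos)
        pen[order] = pen_sorted                  # map back to original order
        pos -= lr * (grad + weight * pen)
    return pos

pos = adjust_positions([0.0, 0.2, 0.5, 0.8, 1.0], region=(0.25, 0.75))
print(np.round(pos, 3))  # clustered inside (0.25, 0.75), spaced ~min_gap apart
```

Without the penalty term, all sub-regions would collapse onto the region center; the penalty is what preserves a valid spacing while the density inside the actual imaging region increases.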
Fig. 3a is a flowchart of a zooming method according to another exemplary embodiment of the present disclosure. As shown in fig. 3a, the zooming method may include the steps of:
in step 301, a reflected electromagnetic wave signal is acquired.
The image sensor may include a plurality of imaging sub-regions, the imaging sub-regions may deform under the irradiation of incident light, and the reflected electromagnetic wave signal is formed by the reflection of the electromagnetic wave signal by the imaging sub-regions in the image sensor. In particular, the imaging sub-region may include a photosensitive layer and a reflective layer. The photosensitive layer can be used for receiving incident light and generating deformation corresponding to the light information of the incident light. The reflecting layer can generate deformation corresponding to the photosensitive layer and reflect the reflected electromagnetic wave signal corresponding to the incident light. The receiver receives the reflected electromagnetic wave signal for processing.
In step 302, the reflected electromagnetic wave signal is demodulated to obtain a first signal.
In step 303, light ray information of the incident light ray is recovered according to the first signal, so as to form a preview image.
The deformation of an imaging sub-region may include at least one of a change in shape, area, density, or smoothness, and may take several common forms when the sub-region receives incident light, as shown in figs. 3b, 3c, 3d, and 3e. The deformation changes the reflection characteristic of the reflective layer, which may be described by a channel parameter or a scattering parameter; the present disclosure does not limit this. Because the reflection characteristic changes, the spectrum and amplitude of the reflected electromagnetic wave signal G change as well. After the reflective layer is deformed by the incident light, the reflected electromagnetic wave signal G therefore carries the light information of the incident light. Demodulating G with a classical signal demodulation method yields the first signal containing that information, and the light ray information of the incident light is then recovered from the demodulated first signal.
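As one hedged illustration of the "classical signal demodulation" step, the sketch below treats G as an amplitude-modulated carrier and recovers the envelope (the first signal) by rectification followed by a moving-average low-pass filter. The carrier frequency, sample rate, and waveform are invented for the example and are not specified by this disclosure:

```python
import math

def demodulate(reflected, carrier_freq, sample_rate, window=None):
    """Recover the low-frequency envelope (the 'first signal') from an
    amplitude-modulated reflected signal by rectifying and averaging."""
    if window is None:
        window = int(sample_rate / carrier_freq)  # average over one carrier cycle
    rectified = [abs(s) for s in reflected]
    out = []
    for i in range(len(rectified)):               # moving-average low-pass filter
        lo = max(0, i - window + 1)
        out.append(sum(rectified[lo:i + 1]) / (i - lo + 1))
    return out

# simulate: a slowly varying envelope (light information) modulates a carrier
sample_rate, carrier = 1000.0, 100.0
envelope = [1.0 + 0.5 * math.sin(2 * math.pi * 2 * t / sample_rate)
            for t in range(1000)]
signal = [e * math.sin(2 * math.pi * carrier * t / sample_rate)
          for t, e in enumerate(envelope)]
recovered = demodulate(signal, carrier, sample_rate)
```

The recovered sequence tracks the envelope up to a constant rectification factor; a real implementation would instead use the demodulation scheme matched to how the reflective layer actually modulates the signal.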
It should be noted that the light ray information may include at least one of the intensity, color, and polarization direction of the incident light.
The deformation parameters of the reflective layer and the photosensitive layer both arise from the same incident light, so they are mutually corresponding, synchronized data. Because photosensitive layers made of different photo-deformable materials respond to incident light with different deformation parameters, each photo-deformable material has its own photo-deformation function, from which the light information of the incident light can be calculated.
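For instance, if one material's photo-deformation function were the saturating response below (a hypothetical curve, not taken from this disclosure), the incident intensity could be recovered by inverting it analytically:

```python
def deformation_response(intensity, k=0.8):
    """Hypothetical photo-deformation function of one material:
    deformation saturates as intensity grows."""
    return k * intensity / (1.0 + intensity)

def recover_intensity(deformation, k=0.8):
    """Invert the response analytically: d = k*I/(1+I)  =>  I = d/(k-d)."""
    assert 0.0 <= deformation < k
    return deformation / (k - deformation)

d = deformation_response(2.0)   # 0.8 * 2 / 3 = 0.5333...
I = recover_intensity(d)        # round-trips back to about 2.0
```

Each material would carry its own such function (or a numerically inverted one), which is why the photosensitive layer's material must be known when recovering the light information.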
In step 304, the zoom parameters for the preview image are acquired to determine the actual imaging area corresponding to the zoom parameters.
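A minimal sketch of mapping a zoom parameter to an actual imaging area, assuming (consistent with the w = 2 example above) that zooming selects a centered region whose side lengths shrink by the zoom factor; the sensor dimensions are invented and the centering is an assumption, since the disclosure does not fix the region's placement:

```python
def actual_imaging_region(sensor_w, sensor_h, zoom):
    """Centered region whose side lengths shrink by the zoom factor."""
    assert zoom >= 1.0
    rw, rh = sensor_w / zoom, sensor_h / zoom
    x0 = (sensor_w - rw) / 2.0
    y0 = (sensor_h - rh) / 2.0
    return (x0, y0, x0 + rw, y0 + rh)

# zoom factor w = 2 on an 8x6 sensor -> the central 4x3 region
region = actual_imaging_region(8.0, 6.0, 2.0)  # (2.0, 1.5, 6.0, 4.5)
```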
In step 305, the positions of the imaging sub-regions are adjusted so that the actual imaging region has a higher distribution density of imaging sub-regions than other regions.
In the above embodiment, to adjust the distribution density of the imaging sub-regions in the actual imaging region, constraint information for the imaging sub-regions is acquired. The constraint information may include the positional relationship between any two adjacent imaging sub-regions, and may be obtained by at least one of a customized algorithm, an evolutionary computation method, or gradient descent with a penalty function; other algorithms may also be used, which is not limited in this disclosure. The distribution density of the imaging sub-regions within the actual imaging region may be adjusted according to this set of constraint information. The distribution density may be adjusted by applying an external field to at least one imaging sub-region, using the external field to exert a force on the imaging sub-region and move it, in the plane perpendicular to the incident light, toward the actual imaging region. It should be noted that the external field may include at least one of a magnetic field, an electric field, and an optical field, which is not limited by the present disclosure.
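One of the named options, gradient descent with a penalty function, could look like the following sketch: each sub-region is pulled toward the centre of the actual imaging region, while a penalty term keeps any two sub-regions a minimum distance apart (standing in for the adjacency constraint). All step sizes, weights, and coordinates are illustrative assumptions:

```python
def inside(p, region):
    return region[0] <= p[0] <= region[2] and region[1] <= p[1] <= region[3]

def adjust_positions(positions, region, steps=200, lr=0.05, min_gap=0.05):
    """Gradient descent with a penalty: attraction toward the region centre,
    plus a repulsive penalty between sub-regions closer than min_gap."""
    cx, cy = (region[0] + region[2]) / 2.0, (region[1] + region[3]) / 2.0
    pts = [list(p) for p in positions]
    for _ in range(steps):
        for a, pa in enumerate(pts):
            gx, gy = pa[0] - cx, pa[1] - cy   # gradient of 0.5*dist^2 to centre
            for b, pb in enumerate(pts):
                if b == a:
                    continue
                dx, dy = pa[0] - pb[0], pa[1] - pb[1]
                d = (dx * dx + dy * dy) ** 0.5
                if 0.0 < d < min_gap:          # penalty gradient: push apart
                    gx -= 10.0 * (min_gap - d) * dx / d
                    gy -= 10.0 * (min_gap - d) * dy / d
            pa[0] -= lr * gx
            pa[1] -= lr * gy
    return [tuple(p) for p in pts]

# 3x3 uniform layout; the actual imaging region is the central square
start = [(x, y) for x in (0.1, 0.5, 0.9) for y in (0.1, 0.5, 0.9)]
region = (0.25, 0.25, 0.75, 0.75)
moved = adjust_positions(start, region)
```

After the adjustment, far more sub-regions lie inside the actual imaging region than before, while the penalty keeps them from collapsing onto a single point. In the physical device, the computed target positions would then be realized by the external-field forces described above.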
Based on the above embodiments, the present disclosure further proposes a zoom apparatus applied to an image sensor. Fig. 4 is a schematic structural diagram of a zoom apparatus according to an exemplary embodiment of the present disclosure. As shown in fig. 4, the apparatus includes an acquisition unit 41, a processing unit 42, a determination unit 43, and an execution unit 44.
The acquisition unit 41 is configured to acquire the reflected electromagnetic wave signal, which is formed by the imaging sub-regions in the image sensor reflecting an electromagnetic wave signal. The image sensor includes a plurality of imaging sub-regions, each of which may deform under the irradiation of incident light.
The processing unit 42 is configured to determine ray information of the incident ray from the reflected electromagnetic wave signal to form a preview image.
The determination unit 43 is configured to acquire a zoom parameter for the preview image to determine an actual imaging area corresponding to the zoom parameter.
The execution unit 44 is configured to adjust the positions of the imaging sub-areas such that the actual imaging area has a higher distribution density of imaging sub-areas than the other areas.
With respect to the operation principle of obtaining the ray information of the incident light, the processing unit may be further refined. Fig. 5 is a schematic structural diagram of a processing unit according to an exemplary embodiment of the disclosure. As shown in fig. 5, on the basis of the foregoing embodiment shown in fig. 4, the processing unit 42 may include a sending subunit 421, a receiving subunit 422, and a third processing subunit 423. Wherein:
the transmitting subunit 421 is configured to transmit the reflected electromagnetic wave signal to a monitoring model, a training sample of which includes a data pair between a reflected electromagnetic wave signal obtained in advance and a deformation parameter of the photosensitive layer;
the receiving subunit 422 is configured to receive a deformation parameter of the photosensitive layer output by the monitoring model;
the third processing subunit 423 is configured to determine ray information of the incident ray according to the deformation parameter to form a preview image.
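The monitoring model's training samples are data pairs of reflected electromagnetic wave signals and photosensitive-layer deformation parameters; the disclosure does not fix a model type, so the sketch below stands in with a nearest-neighbour lookup over a scalar signal feature (all numbers are invented):

```python
class MonitoringModel:
    """Toy stand-in for the monitoring model: memorises (signal feature,
    deformation parameter) training pairs and answers queries by
    nearest-neighbour lookup over the feature."""
    def __init__(self, pairs):
        self.pairs = list(pairs)   # [(feature, deformation), ...]

    def predict(self, feature):
        return min(self.pairs, key=lambda p: abs(p[0] - feature))[1]

# training pairs: reflected-signal amplitude -> photosensitive-layer deformation
model = MonitoringModel([(0.1, 0.02), (0.5, 0.11), (0.9, 0.23)])
print(model.predict(0.52))  # -> 0.11 (nearest training feature is 0.5)
```

A deployed model would more plausibly be a regression or learned mapping over full signal waveforms, but the interface matches the sending/receiving subunits above: send a reflected signal in, receive a deformation parameter out.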
Fig. 6 is a schematic structural diagram of a processing unit according to another exemplary embodiment of the present disclosure. As shown in fig. 6, on the basis of the aforementioned embodiment shown in fig. 4, the processing unit 42 may include a first processing subunit 424 and a second processing subunit 425. Wherein:
the first processing subunit 424 is configured to demodulate the reflected electromagnetic wave signal to obtain a first signal;
the second processing subunit 425 is configured to recover the ray information of the incident ray according to the first signal to form a preview image.
Fig. 7 is a schematic structural diagram of an execution unit according to an exemplary embodiment of the disclosure. As shown in fig. 7, on the basis of the aforementioned embodiment shown in fig. 4, the execution unit 44 may include a first execution subunit 441, a second execution subunit 442, a third execution subunit 443, and a fourth execution subunit 444. Wherein:
the first execution subunit 441 is configured to acquire constraint information matching the distribution density of the imaging sub-regions of the actual imaging region, and may include at least one of a first execution module, a second execution module, and a third execution module, where:
the first execution module 4411 is configured to acquire constraint information matching the distribution density of the imaging sub-regions of the actual imaging region using a customized algorithm;
the second execution module 4412 is configured to acquire constraint information matching the distribution density of the imaging sub-regions of the actual imaging region using an evolutionary computation method;
the third execution module 4413 is configured to acquire constraint information matching the distribution density of the imaging sub-regions of the actual imaging region using gradient descent with a penalty function.
The second execution subunit 442 is configured to adjust the distribution density of the imaging sub-regions in accordance with the constraint information.
The third execution subunit 443 is configured to apply an external field to at least one of the imaging sub-regions.
The fourth execution subunit 444 is configured to apply a force to the imaging sub-region using the external field to adjust the position of the imaging sub-region such that the actual imaging region has a higher distribution density of imaging sub-regions than other regions.
It should be noted that the structure of the processing unit 42 in the device embodiments shown in fig. 5 and fig. 6 may also be included in the device embodiment shown in fig. 7, and the disclosure does not limit this.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the disclosed solution. One of ordinary skill in the art can understand and implement it without inventive effort.
The present disclosure further proposes an electronic device, which may comprise a processor configured to implement the zooming method described above.
In an exemplary embodiment, the present disclosure also provides a non-transitory computer-readable storage medium comprising instructions, for example, a memory including instructions that, when executed by a processor of the electronic device, implement the zooming method of the present disclosure. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (20)

1. A method of zooming, comprising:
acquiring a reflected electromagnetic wave signal, wherein the reflected electromagnetic wave signal is formed by the reflection of an electromagnetic wave signal by an imaging subarea in an image sensor; the image sensor comprises a plurality of imaging sub-regions, wherein the imaging sub-regions can deform under the irradiation of incident light;
determining light ray information of the incident light ray according to the reflected electromagnetic wave signal to form a preview image;
acquiring a zooming parameter aiming at the preview image to determine an actual imaging area corresponding to the zooming parameter;
adjusting the positions of the imaging sub-areas so that the distribution density of the imaging sub-areas of the actual imaging area is higher than that of other areas;
the imaging sub-region comprises:
a photosensitive layer which senses the irradiation of incident light and deforms;
and a reflective layer which deforms in correspondence with the photosensitive layer and returns a corresponding reflected electromagnetic wave signal.
2. The zooming method of claim 1, wherein determining ray information of the incident ray from the reflected electromagnetic wave signal comprises:
demodulating the reflected electromagnetic wave signal to obtain a first signal;
and recovering the light ray information of the incident light ray according to the first signal.
3. The zooming method of claim 1, wherein determining ray information of the incident ray from the reflected electromagnetic wave signal comprises:
sending the reflected electromagnetic wave signal to a monitoring model, wherein a training sample of the monitoring model comprises a data pair between a pre-obtained reflected electromagnetic wave signal and a deformation parameter of a photosensitive layer;
receiving deformation parameters of the photosensitive layer output by the monitoring model;
and determining the light ray information of the incident light ray according to the deformation parameters.
4. The zooming method according to claim 1, wherein:
the deformation properties of at least two of the imaging sub-regions are different;
and/or the electromagnetic wave signal reflection characteristics of at least two of the imaging sub-regions are different.
5. The zooming method of claim 1, wherein adjusting the positions of the imaging sub-regions so that the actual imaging region has a greater distribution density of imaging sub-regions than other regions comprises:
acquiring constraint information matched with the distribution density of the imaging subareas of the actual imaging area;
and adjusting the distribution density of the imaging subarea according to the constraint information.
6. The zooming method according to claim 5, wherein acquiring the constraint information for the imaging sub-regions comprises at least one of the following methods:
acquiring constraint information matched with the distribution density of the imaging subareas of the actual imaging area by using a customized algorithm;
acquiring constraint information matched with the distribution density of the imaging subareas of the actual imaging area by using an evolution calculation method;
and acquiring constraint information matched with the distribution density of the imaging sub-regions of the actual imaging area by using gradient descent with a penalty function.
7. Zooming method according to claim 5, characterized in that the constraint information comprises a positional relationship between any two adjacent imaging sub-regions.
8. The zooming method of claim 1, wherein adjusting the positions of the imaging sub-regions so that the actual imaging region has a greater distribution density of imaging sub-regions than other regions comprises:
applying an external field to at least one of said imaging sub-regions;
and applying a force to the imaging subarea by using the external field to adjust the position of the imaging subarea so that the distribution density of the imaging subareas of the actual imaging area is greater than that of other areas.
9. The zooming method of claim 8, wherein the external field comprises: at least one of a magnetic field, an electric field, and an optical field.
10. A zoom apparatus, comprising:
an acquisition unit that acquires a reflected electromagnetic wave signal formed by reflection of an electromagnetic wave signal by an imaging sub-area in an image sensor; the image sensor comprises a plurality of imaging sub-regions, wherein the imaging sub-regions can deform under the irradiation of incident light;
the processing unit is used for determining the light ray information of the incident light ray according to the reflected electromagnetic wave signal so as to form a preview image;
a determination unit that acquires a zoom parameter for the preview image to determine an actual imaging area corresponding to the zoom parameter;
the execution unit is used for adjusting the position of the imaging subarea so that the distribution density of the imaging subareas of the actual imaging area is higher than that of other areas;
the imaging sub-region comprises:
a photosensitive layer which senses the irradiation of incident light and deforms;
and a reflective layer which deforms in correspondence with the photosensitive layer and returns a corresponding reflected electromagnetic wave signal.
11. The zoom apparatus of claim 10, wherein the processing unit comprises:
a first processing subunit, configured to demodulate the reflected electromagnetic wave signal to obtain a first signal;
and the second processing subunit recovers the light ray information of the incident light ray according to the first signal.
12. The zoom apparatus of claim 10, wherein the processing unit comprises:
the transmitting subunit is used for transmitting the reflected electromagnetic wave signal to a monitoring model, and a training sample of the monitoring model comprises a data pair between a pre-obtained reflected electromagnetic wave signal and a deformation parameter of the photosensitive layer;
the receiving subunit is used for receiving the deformation parameters of the photosensitive layer output by the monitoring model;
and the third processing subunit determines the light ray information of the incident light ray according to the deformation parameter.
13. The zoom apparatus according to claim 10, wherein:
the deformation properties of at least two of the imaging sub-regions are different;
and/or the electromagnetic wave signal reflection characteristics of at least two of the imaging sub-regions are different.
14. The zoom apparatus according to claim 10, wherein the execution unit comprises:
the first execution subunit acquires constraint information matched with the distribution density of the imaging subareas of the actual imaging area;
and the second execution subunit adjusts the distribution density of the imaging subarea according to the constraint information.
15. The zoom apparatus of claim 14, wherein the first execution subunit comprises at least one of:
the first execution module is used for acquiring constraint information matched with the distribution density of the imaging subareas of the actual imaging area by using a customized algorithm;
the second execution module is used for acquiring constraint information matched with the distribution density of the imaging subareas of the actual imaging area by using an evolution calculation method;
and the third execution module acquires constraint information matched with the distribution density of the imaging sub-regions of the actual imaging region by using gradient descent with a penalty function.
16. The zoom apparatus of claim 14, wherein the constraint information comprises a positional relationship between any two adjacent imaging sub-regions.
17. The zoom apparatus according to claim 10, wherein the execution unit comprises:
a third execution subunit for applying an external field to at least one of the imaging sub-regions;
and the fourth execution subunit applies a force to the imaging sub-area by using the external field to adjust the position of the imaging sub-area so that the distribution density of the imaging sub-area of the actual imaging area is greater than that of other areas.
18. The zoom apparatus of claim 17, wherein the external field comprises: at least one of a magnetic field, an electric field, and an optical field.
19. An electronic device, comprising:
a memory storing computer instructions;
a processor configured to execute computer instructions stored in the memory to implement the zooming method of any of claims 1-9.
20. A computer readable storage medium having computer instructions stored thereon which, when executed by a processor, implement: the steps of the zooming method of any of claims 1-9.
CN201710818041.5A 2017-09-12 2017-09-12 Zooming method, zooming device and electronic equipment Active CN107483828B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710818041.5A CN107483828B (en) 2017-09-12 2017-09-12 Zooming method, zooming device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710818041.5A CN107483828B (en) 2017-09-12 2017-09-12 Zooming method, zooming device and electronic equipment

Publications (2)

Publication Number Publication Date
CN107483828A CN107483828A (en) 2017-12-15
CN107483828B true CN107483828B (en) 2020-06-19

Family

ID=60583862

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710818041.5A Active CN107483828B (en) 2017-09-12 2017-09-12 Zooming method, zooming device and electronic equipment

Country Status (1)

Country Link
CN (1) CN107483828B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2092378A2 (en) * 2006-12-15 2009-08-26 Hand Held Products, Inc. Apparatus and method comprising deformable lens element
JP2016038414A (en) * 2014-08-05 2016-03-22 キヤノン株式会社 Focus detection device, control method thereof, and imaging apparatus
CN104301605B (en) * 2014-09-03 2018-11-23 北京智谷技术服务有限公司 Image formation control method and device, the imaging device of Digital Zoom image
CN104469147B (en) * 2014-11-20 2018-09-04 北京智谷技术服务有限公司 Optical field acquisition control method and device, optical field acquisition equipment
CN106161910B (en) * 2015-03-24 2019-12-27 北京智谷睿拓技术服务有限公司 Imaging control method and device and imaging equipment

Also Published As

Publication number Publication date
CN107483828A (en) 2017-12-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant