CN113676675B - Image generation method, device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN113676675B
CN113676675B (application CN202110937243.8A)
Authority
CN
China
Prior art keywords: color, pixel, full, image, pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110937243.8A
Other languages
Chinese (zh)
Other versions
CN113676675A (en)
Inventor
杨鑫
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110937243.8A
Publication of CN113676675A
Application granted
Publication of CN113676675B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/76Circuitry for compensating brightness variation in the scene by influencing the image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N25/46Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by combining or binning pixels

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

The application relates to an image generation method, an image generation device, a computer device and a storage medium. The method comprises the following steps: exposing each pixel point in the pixel point array to obtain single-color photosensitive data corresponding to single-color pixel points and full-color photosensitive data corresponding to full-color pixel points; in response to a first-level merging instruction, merging two single-color photosensitive data obtained in a first diagonal direction in the pixel point array to obtain a first single-color pixel, and merging two full-color photosensitive data obtained in a second diagonal direction in the pixel point array to obtain a first full-color pixel; wherein the first diagonal direction and the second diagonal direction are perpendicular to each other; a first target image is generated based on each of the first single color pixels and each of the first full color pixels. By adopting the method, the image can be generated more accurately.

Description

Image generation method, device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image generating method, an image generating device, an electronic device, and a computer readable storage medium.
Background
An increasing number of electronic devices are equipped with cameras to provide photographing functions. A camera contains an image sensor, through which images are captured and generated.
However, conventional image generation methods typically acquire an RGB image with an RGB (Red, Green, Blue) image sensor and display it on a screen, and the generated image can be inaccurate.
Disclosure of Invention
The embodiment of the application provides an image generation method, an image generation device, electronic equipment and a computer readable storage medium, which can generate images more accurately.
An image generation method is applied to an electronic device comprising an image sensor, wherein the image sensor comprises a pixel point array, the pixel point array comprises minimum pixel point repeating units, each minimum pixel point repeating unit comprises a plurality of pixel point sub-units, each pixel point sub-unit comprises a plurality of single-color pixel points and a plurality of full-color pixel points, and the single-color pixel points and the full-color pixel points in the pixel point sub-units are alternately arranged in the row direction and the column direction; the method comprises the following steps:
exposing each pixel point in the pixel point array to obtain single-color photosensitive data corresponding to single-color pixel points and full-color photosensitive data corresponding to full-color pixel points;
In response to a first-level merging instruction, merging two single-color photosensitive data obtained in a first diagonal direction in the pixel point array to obtain a first single-color pixel, and merging two full-color photosensitive data obtained in a second diagonal direction in the pixel point array to obtain a first full-color pixel; wherein the first diagonal direction and the second diagonal direction are perpendicular to each other;
a first target image is generated based on each of the first single color pixels and each of the first full color pixels.
An image generating apparatus applied to an electronic device including an image sensor including a pixel array including minimum pixel repeating units each including a plurality of pixel sub-units each including a plurality of single-color pixels and a plurality of full-color pixels, the single-color pixels and the full-color pixels in the pixel sub-units being alternately arranged in both a row direction and a column direction; the device comprises:
the exposure module is used for exposing each pixel point in the pixel point array to obtain single-color photosensitive data corresponding to the single-color pixel point and full-color photosensitive data corresponding to the full-color pixel point;
The merging module is used for responding to a first-level merging instruction, merging two single-color photosensitive data obtained in a first diagonal direction in the pixel point array to obtain a first single-color pixel, and merging two full-color photosensitive data obtained in a second diagonal direction in the pixel point array to obtain a first full-color pixel; wherein the first diagonal direction and the second diagonal direction are perpendicular to each other;
an image generation module for generating a first target image based on each of the first single color pixels and each of the first full color pixels.
An electronic device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the image generation method as described above.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image generation method described above.
With the above image generation method, apparatus, electronic device, and computer readable storage medium, the electronic device includes an image sensor whose pixel point array is composed of minimum pixel point repeating units. Each minimum pixel point repeating unit includes a plurality of pixel point sub-units, and each pixel point sub-unit includes a plurality of single-color pixel points and a plurality of full-color pixel points arranged alternately in both the row direction and the column direction, so a higher amount of incoming light can be received through the full-color pixel points. The electronic device exposes each pixel point in the pixel point array to obtain single-color photosensitive data and full-color photosensitive data, and combines these respectively into first single-color pixels and first full-color pixels. Because the first full-color pixels capture more light, a higher light intake is obtained based on the first single-color pixels and the first full-color pixels, which raises the overall brightness of the image and improves its signal-to-noise ratio, so the first target image is generated more accurately.
In the pixel merging process, the electronic device merges the two single-color photosensitive data obtained in a first diagonal direction in the pixel point array into a first single-color pixel, and merges the two full-color photosensitive data obtained in a second diagonal direction into a first full-color pixel. Since each pixel is obtained by merging only two photosensitive data, the target image can also be output more quickly, improving the responsiveness of image output and balancing the image information of the target image against its output speed.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an electronic device in one embodiment;
FIG. 2 is an exploded view of an image sensor in one embodiment;
FIG. 3 is a schematic diagram of an arrangement of minimal color filter repeat units in one embodiment;
FIG. 4 is a flow chart of an image generation method in one embodiment;
FIG. 5 is a schematic diagram of a first-level merging mode in one embodiment;
FIG. 6 is a schematic diagram of a second-level merging mode in one embodiment;
FIG. 7 is a schematic diagram of a full output mode in one embodiment;
FIG. 8 is a schematic diagram of image generation in another embodiment;
FIG. 9 is a schematic diagram of image generation in another embodiment;
FIG. 10 is a schematic diagram of image generation in another embodiment;
FIG. 11 is a block diagram showing the structure of an image generating apparatus in one embodiment;
fig. 12 is a schematic diagram of an internal structure of an electronic device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
It will be understood that the terms first, second, etc. as used herein may be used to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another element. For example, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client, without departing from the scope of the application. Both the first client and the second client are clients, but they are not the same client.
As shown in fig. 1, the electronic device includes a camera 102, the camera 102 containing an image sensor including a microlens array, a color filter array, and a pixel array.
The electronic device is described below using a mobile phone as an example, but is not limited to a mobile phone. The electronic device comprises a camera, a processor, and a housing. The camera and the processor are mounted in the housing, which can also hold functional modules such as the power supply and communication modules of the device and provides them with protection against dust, drops, and water.
The camera may be a front camera, a rear camera, a side camera, an under-screen camera, or the like, which is not limited herein. The camera includes a lens and an image sensor; when the camera captures an image, light passes through the lens and reaches the image sensor, which converts the optical signal irradiated onto it into an electrical signal.
As shown in fig. 2, the image sensor includes a microlens array 21, a color filter array 22, and a pixel dot array 23.
The microlens array 21 includes a plurality of microlenses 211, where the microlenses 211, the color filters in the color filter array 22, and the pixel points in the pixel point array 23 are arranged in a one-to-one correspondence, the microlenses 211 are configured to collect incident light, and the collected light passes through the corresponding color filters and then projects onto the pixel points, and is received by the corresponding pixel points, where the received light is converted into an electrical signal.
The color filter array 22 includes a plurality of minimum color filter repeating units 221. Each minimum color filter repeating unit 221 includes a plurality of color filter subunits 222. In the present embodiment, the minimum color filter repeating unit 221 includes 4 color filter sub-units 222 arranged in a matrix. Each color filter subunit 222 includes a plurality of single color filters 223 and a plurality of full color filters 224. Different color filter subunits 222 contain different single color filters 223. For example, the single color filter included in color filter subunit A is a red color filter, and the single color filter included in color filter subunit B is a green color filter. Each color filter in the color filter array 22 corresponds one-to-one with a pixel point in the pixel point array 23, and the light filtered by each color filter is projected to the corresponding pixel point to obtain photosensitive data.
A single color filter refers to a filter that transmits light of a single color. It may specifically be a Red (R) filter, a Green (G) filter, a Blue (B) filter, or the like. A red color filter transmits red light, a green color filter transmits green light, and a blue color filter transmits blue light.
A full color filter refers to a filter that transmits light of all colors. The full color filter may be a White (W) filter. It will be appreciated that the full color filter transmits white light, which is a mixture of light across all visible wavelength bands; that is, the full color filter transmits light of all colors.
Also, the pixel array 23 includes minimum pixel repeating units 231, each minimum pixel repeating unit 231 includes a plurality of pixel sub-units 232, each pixel sub-unit 232 includes a plurality of single-color pixel points 233 and a plurality of full-color pixel points 234, and the single-color pixel points 233 and the full-color pixel points 234 in the pixel sub-units 232 are alternately arranged in both the row direction and the column direction. Each pixel sub-unit 232 corresponds to one color filter sub-unit 222, and the pixels in each pixel sub-unit 232 also correspond to the color filters in the corresponding color filter sub-unit 222 one by one. The light transmitted through the panchromatic filter 224 is projected to the panchromatic pixel 234, resulting in panchromatic photosensitive data; the light transmitted through the single color filter 223 is projected to the single color pixel 233, and single color photosensitive data can be obtained. In the present embodiment, the minimum pixel repeating unit 231 includes 4 pixel sub-units 232, and the 4 pixel sub-units 232 are arranged in a matrix.
A single color pixel point refers to a pixel point that receives light of a single color. It may specifically be a Red (R) pixel, a Green (G) pixel, a Blue (B) pixel, or the like. A full color pixel point refers to a pixel point that receives light of all colors, and may be a White (W) pixel.
In one embodiment, the smallest color filter repeat unit is 8 rows and 8 columns of 64 color filters arranged in the following manner:
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
wherein w1 represents a full-color filter, and a1, b1 and c1 each represent a single-color filter. The full-color filters in each minimum color filter repeating unit included in the color filter array are uniformly distributed, so that the light incoming quantity of each position can be uniformly increased, and the signal-to-noise ratio of the generated image is integrally improved; and the color filter array formed by the minimum color filter repeating units in the arrangement mode can also enable the stability of an algorithm for processing the image to be higher, and the effect of the processed image to be better.
Fig. 3 is a schematic diagram of an arrangement of minimal color filter repeating units in one embodiment. Wherein w1 represents a full-color filter, accounting for 50% of the minimum color filter repeating unit, a1, b1 and c1 each represent a single-color filter, a1 and c1 account for 12.5% of the minimum color filter repeating unit, and b1 accounts for 25% of the minimum color filter repeating unit.
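The stated proportions can be checked mechanically. The sketch below (not from the patent; the array encoding and letter labels are assumptions mirroring the layout shown above) builds the 8×8 minimal repeating unit and counts each filter type:

```python
import numpy as np

# Encode the 8x8 minimal color-filter repeating unit shown above:
# 'w' = full-color (white) filter; 'a', 'b', 'c' = single-color filters.
unit = np.array([list(row) for row in [
    "awawbwbw",
    "wawawbwb",
    "awawbwbw",
    "wawawbwb",
    "bwbwcwcw",
    "wbwbwcwc",
    "bwbwcwcw",
    "wbwbwcwc",
]])

# Fraction of the 64 filters occupied by each type.
fractions = {f: float((unit == f).sum()) / unit.size for f in "wabc"}
# w: 0.5, a: 0.125, b: 0.25, c: 0.125 -- matching the stated proportions.
```

Note that within every row and every column the full-color and single-color filters alternate, which is the checkerboard property the pixel point array inherits.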
In one embodiment, w1 may represent a white color filter, and a1, b1, and c1 may represent a red color filter, a green color filter, and a blue color filter, respectively. In other embodiments, w1 may represent a white color filter, and a1, b1, and c1 may represent a cyan color filter, a magenta color filter, and a yellow color filter, respectively.
For example, w1 may represent a white color filter, a1 a red color filter, b1 a green color filter, and c1 a blue color filter. As another example, w1 may represent a white color filter, a1 represents a green color filter, b1 represents a red color filter, and c1 represents a blue color filter.
The order of the full-color filter and the single-color filter may be set as needed, for example, the full-color filter may be disposed before the single-color filter or may be disposed after the single-color filter. The order of the individual color filters can also be adjusted as desired. The order of the various pixels obtained by the single color filter may be set as needed, and is not limited.
In this embodiment, the minimum color filter repeating unit includes 50% of full color filters, and the light input amount of the image sensor can be increased as much as possible, so that more photosensitive data is acquired.
FIG. 4 is a flow chart of an image generation method in one embodiment. The image generation method in the present embodiment will be described taking the example of operation on the electronic device in fig. 1. As shown in fig. 4, the image generation method includes steps 402 to 406.
And step 402, exposing each pixel point in the pixel point array to obtain single-color photosensitive data corresponding to the single-color pixel point and full-color photosensitive data corresponding to the full-color pixel point.
The photosensitive data is data obtained by receiving light rays by pixel points and converting optical signals into electric signals. The single-color photosensitive data refers to data obtained by receiving single-color light by a single-color pixel point and converting a signal of the single-color light into an electric signal. For example, the single color photosensitive data may include a brightness value, an exposure time period, a gray value, or the like of the single color light. The full-color photosensitive data refers to data obtained by receiving all color lights by the full-color pixel points and converting signals of all color lights into electric signals. For example, full color photosensitive data may include brightness values, exposure time periods, gray scale values, or the like of all color light rays.
Specifically, the electronic device obtains preset exposure parameters, and exposes each pixel point in the pixel point array according to the preset exposure parameters to obtain single-color photosensitive data corresponding to the single-color pixel point and full-color photosensitive data corresponding to the full-color pixel point. The preset exposure parameters at least comprise aperture, shutter speed and photosensitivity.
Step 404, in response to the first-level merging instruction, merging two single-color photosensitive data obtained in a first diagonal direction in the pixel point array to obtain a first single-color pixel, and merging two full-color photosensitive data obtained in a second diagonal direction in the pixel point array to obtain a first full-color pixel; wherein the first diagonal direction and the second diagonal direction are perpendicular to each other.
The first diagonal direction is a direction for merging the single-color photosensitive data. The second diagonal direction is a direction for merging full-color photosensitive data. In one embodiment, the first diagonal direction is a direction indicated by a line connecting the upper left corner and the lower right corner, and the second diagonal direction is a direction indicated by a line connecting the upper right corner and the lower left corner. In another embodiment, the first diagonal direction is a direction indicated by a line connecting the upper right corner and the lower left corner, and the second diagonal direction is a direction indicated by a line connecting the upper left corner and the lower right corner.
The first single-color pixel is a pixel obtained by combining two single-color photosensitive data in a first diagonal direction. The first full-color pixel is a pixel obtained by combining two full-color photosensitive data in the second diagonal direction.
The same single-color pixel points in the pixel point array are arranged in a first diagonal direction, and the full-color pixel points are arranged in a second diagonal direction.
In response to the first-level merging instruction, the electronic device sequentially acquires, in the first diagonal direction, the two photosensitive data of identical single-color pixel points according to their arrangement order in the pixel point array, and merges them to obtain a first single-color pixel; it likewise sequentially acquires two full-color photosensitive data in the second diagonal direction according to the arrangement order of full-color pixel points in the pixel point array, and merges them to obtain a first full-color pixel. The first-level merging instruction is the instruction corresponding to merging two single-color photosensitive data obtained in the first diagonal direction in the pixel point array and merging two full-color photosensitive data obtained in the second diagonal direction.
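As a concrete illustration of the first-level merge, the pixel point array can be viewed as an H×W raw mosaic in which, within every 2×2 cell, the two same-color values lie on the first diagonal and the two full-color values on the second. The sketch below is an assumed reading of this step (the function name, the averaging choice, and the cell layout are illustrative, not specified by the patent):

```python
import numpy as np

def first_level_merge(raw):
    # Assumes an H x W mosaic (H, W even) where, in each 2x2 cell, the two
    # same-color values sit on the first diagonal (top-left / bottom-right)
    # and the two full-color values on the second (top-right / bottom-left).
    # Averaging is one combination option; summing would also fit the text.
    color = (raw[0::2, 0::2] + raw[1::2, 1::2]) / 2.0
    panchromatic = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
    return color, panchromatic  # each of shape (H/2, W/2)

raw = np.arange(16, dtype=float).reshape(4, 4)
color, panchromatic = first_level_merge(raw)
```

Because each output pixel reads exactly two input values, the merge halves the resolution in each dimension while preserving both a color plane and a panchromatic plane.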
A first target image is generated based on each first single color pixel and each first panchromatic pixel, step 406.
The first target image is an image generated based on each first single color pixel and each first full color pixel.
In one embodiment, the electronic device combines each first single color pixel and each first panchromatic pixel to generate a first target image.
In another embodiment, the electronic device performs a filtering process on each first single-color pixel and each first full-color pixel, and combines each first single-color pixel and each first full-color pixel after the filtering process to generate the first target image. The electronic device performs filtering processing on each first single-color pixel and each first full-color pixel, so that noise in the first single-color pixel and noise in the first full-color pixel can be filtered, and a first target image can be generated more accurately.
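The patent does not fix the filter type. As a minimal sketch of the filtering step, a 3×3 box (mean) filter applied to one pixel plane might look like the following; the function and edge-handling choice are assumptions, purely for illustration:

```python
import numpy as np

def box_filter(plane):
    # Simple 3x3 box (mean) filter with edge replication -- one possible
    # noise filter for a plane of first single-color or full-color pixels.
    padded = np.pad(plane, 1, mode="edge")
    h, w = plane.shape
    out = np.zeros((h, w))
    for di in range(3):
        for dj in range(3):
            out += padded[di:di + h, dj:dj + w]
    return out / 9.0
```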
In another embodiment, the electronic device may also process the specified first single color pixel or the specified first panchromatic pixel to generate the first target image. The processing may include filtering processing, deleting processing, interpolation processing, pixel value adjustment processing, or the like.
In other embodiments, the electronic device may also generate the first target image in other manners, which are not limited herein.
According to the image generation method, the electronic device comprises the image sensor, the image sensor comprises the pixel point array, the pixel point array comprises the minimum pixel point repeating units, each minimum pixel point repeating unit comprises the pixel point sub-units, each pixel point sub-unit comprises the single-color pixel points and the full-color pixel points, the single-color pixel points and the full-color pixel points in the pixel point sub-units are alternately arranged in the row direction and the column direction, and then higher light incoming quantity can be received through the full-color pixel points. The electronic device exposes each pixel point in the pixel point array to obtain single-color photosensitive data and full-color photosensitive data, the single-color photosensitive data and the full-color photosensitive data are respectively combined to obtain a first single-color pixel and a first full-color pixel, and the first full-color pixel has higher light inlet quantity, so that the higher light inlet quantity can be obtained based on the first color pixel and the first full-color pixel, the overall brightness of the image is improved, the image quality signal-to-noise ratio of the image is improved, and the first target image is generated more accurately.
In the pixel merging process, the electronic device merges two single-color photosensitive data obtained in a first diagonal direction in the pixel point array to obtain a first single-color pixel, merges two full-color photosensitive data obtained in a second diagonal direction in the pixel point array to obtain a first full-color pixel, each pixel is obtained by merging the two photosensitive data, meanwhile, a target image can be output more quickly, the sensitivity of image output is improved, and the image information and the output speed of the target image are considered.
FIG. 5 is a schematic diagram of a first-level merging mode in one embodiment. In FIG. 5, 502 is a first diagonal direction in the pixel point array 23 and 504 is a second diagonal direction in the pixel point array 23. The electronic device combines the two single-color photosensitive data obtained in the first diagonal direction 502 to obtain a first single-color pixel, and combines the two full-color photosensitive data obtained in the second diagonal direction 504 to obtain a first full-color pixel, the two diagonal directions being perpendicular to each other; a first target image 506 is thereby generated based on each first single-color pixel and each first panchromatic pixel.
In one embodiment, combining two single-color photosensitive data obtained in a first diagonal direction in a pixel point array to obtain a first single-color pixel, and combining two full-color photosensitive data obtained in a second diagonal direction in the pixel point array to obtain a first full-color pixel, includes: adding or averaging two single-color photosensitive data obtained in a first diagonal direction in the pixel point array to obtain a first single-color pixel; and adding or averaging the two full-color photosensitive data obtained in the second diagonal direction in the pixel point array to obtain a first full-color pixel.
For example, the electronic device obtains two single-color photosensitive data A1 and A2 in a first diagonal direction in the pixel point array, and adds them to obtain a first single-color pixel of A1 + A2.
For another example, the electronic device obtains two panchromatic photosensitive data A3 and A4 in a second diagonal direction in the pixel point array, and averages them to obtain a first panchromatic pixel of (A3 + A4)/2.
In another embodiment, the electronic device may further perform weighted average on two single-color photosensitive data obtained in the first diagonal direction in the pixel point array to obtain a first single-color pixel; and carrying out weighted average on the two panchromatic photosensitive data obtained in the second diagonal direction in the pixel point array to obtain a first panchromatic pixel.
For example, the electronic device obtains two single-color photosensitive data A5 and A6 in a first diagonal direction in the pixel point array, where the weight factor corresponding to A5 is a and the weight factor corresponding to A6 is b; the weighted average of A5 and A6 yields a first single-color pixel of (A5×a + A6×b)/2.
In another embodiment, the electronic device may further obtain two single-color photosensitive data obtained in a first diagonal direction in the pixel point array, and select the target single-color photosensitive data from the two single-color photosensitive data as the first single-color pixel; and acquiring two pieces of full-color photosensitive data obtained in a second diagonal direction in the pixel point array, and selecting target full-color photosensitive data from the two pieces of full-color photosensitive data as a first full-color pixel. Optionally, the target single-color photosensitive data obtaining method at least comprises random selection, manual selection, selection of a larger value or selection of a smaller value, and the like; the acquisition mode of the target full-color photosensitive data at least comprises random selection, manual selection, selection of a larger value or selection of a smaller value and the like.
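The combination options above (summing, averaging, weighted averaging, and selection) can be summarized for a single diagonal pair as follows. This is an illustrative sketch; in particular, normalizing the weighted form by a + b is an assumption, since the text does not fix the normalization:

```python
def merge_sum(d1, d2):
    return d1 + d2            # add the two photosensitive data

def merge_mean(d1, d2):
    return (d1 + d2) / 2.0    # average the two photosensitive data

def merge_weighted(d1, d2, a, b):
    # Weighted combination with weight factors a and b; dividing by
    # (a + b) is an assumed normalization, not stated in the text.
    return (d1 * a + d2 * b) / (a + b)

def merge_select_larger(d1, d2):
    return max(d1, d2)        # selection-based merging (larger value)
```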
In one embodiment, after generating the first target image, the method further includes: sequentially determining the pixel required at the current position of a Bayer array image to be generated; generating a first full-color channel map and a plurality of first single-color channel maps from the first target image; determining a target single-color channel map among the first single-color channel maps based on the pixel required at the current position; and extracting the first single-color pixel at the corresponding position of the target single-color channel map as the pixel at the current position, until pixels at all positions of the Bayer array image to be generated have been produced, yielding the Bayer array image. The pixels in the first full-color channel map are all first full-color pixels, and the pixels in each first single-color channel map are all first single-color pixels of the same kind.
For example, the first full-color channel map is a W (White) channel map, in which all pixels are white pixels; a first single-color channel map may be an R (Red) channel map, in which all pixels are red pixels.
In one embodiment, the electronic device may extract pixels of the same kind from the first target image to generate a first full-color channel map and a plurality of first single-color channel maps. In another embodiment, the electronic device may split the first target image to obtain the first full-color channel map and the plurality of first single-color channel maps. In another embodiment, the electronic device may interpolate the first full-color channel map and each first single-color channel map from the pixels of the first target image. In other embodiments, the electronic device may generate the first full-color channel map and the plurality of first single-color channel maps in other manners, which are not limited herein.
For example, if the first target image is an RGBW image, the electronic device may generate an R-channel map, a G-channel map, a B-channel map, and a W-channel map based on the first target image. Wherein the R channel map includes R pixels, the G channel map includes G pixels, the B channel map includes B pixels, and the W channel map includes W pixels.
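Extracting the per-channel maps from an RGBW mosaic can be sketched as below; a hedged illustration assuming the mosaic is stored as one plane together with a parallel label array giving each position's channel (the helper name and the tiny 2×2 layout are invented for the example, not the patent's 8×8 repeating unit):

```python
import numpy as np

def split_channels(image, color_map, channels=("R", "G", "B", "W")):
    """Split a one-plane mosaic into per-channel maps: positions that
    belong to a channel keep their value, all others become NaN."""
    maps = {}
    for ch in channels:
        plane = np.full(image.shape, np.nan)
        mask = color_map == ch
        plane[mask] = image[mask]
        maps[ch] = plane
    return maps

# Illustrative 2x2 patch:
labels = np.array([["W", "R"], ["G", "B"]])
patch = np.array([[10.0, 20.0], [30.0, 40.0]])
channel_maps = split_channels(patch, labels)  # channel_maps["R"][0, 1] == 20.0
```

Interpolating the NaN holes of each sparse map would then yield dense channel maps.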
The bayer array is a technology that simulates the sensitivity of human eyes to colors by adopting a 1-red, 2-green, 1-blue arrangement to convert grayscale information into color information, and is one of the main technologies enabling a CCD (Charge-Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor to capture color images.
The target single color channel map is a single color channel map that coincides with the channel map to which the pixel required for the current position belongs. For example, if the pixel required for the current position (2, 5) in the bayer array image to be generated is a G pixel, the target single-color channel map is a G channel map, and the G pixel is extracted from the (2, 5) position of the G channel map as the pixel of the current position (2, 5) in the bayer array image to be generated.
As another example, if the pixel required for the current position (100,212) in the bayer array image to be generated is an R pixel, the target single-color channel map is an R channel map, and the R pixel is extracted from (100,212) of the R channel map as the pixel of the current position (100,212) in the bayer array image to be generated.
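The position-by-position assembly described above can be sketched as follows; a minimal Python illustration assuming dense channel maps (every position already holds a value, e.g. after interpolation) and a standard RGGB Bayer pattern, both of which are assumptions rather than details fixed by the patent:

```python
import numpy as np

# Required channel at each (row % 2, col % 2) for an RGGB Bayer layout.
BAYER_RGGB = [["R", "G"], ["G", "B"]]

def assemble_bayer(channel_maps, height, width, pattern=BAYER_RGGB):
    """For each position of the Bayer image to be generated, determine
    the required pixel, pick the matching target single-color channel
    map, and copy the pixel from the corresponding position."""
    out = np.empty((height, width))
    for r in range(height):
        for c in range(width):
            ch = pattern[r % 2][c % 2]          # pixel required by current position
            out[r, c] = channel_maps[ch][r, c]  # extract from target channel map
    return out
```

With dense R/G/B maps of shape (height, width), `assemble_bayer` returns the full Bayer mosaic in one pass.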
In this embodiment, the pixel required by the current position is sequentially determined from the Bayer array image to be generated; the target single-color channel map, which is one of the first single-color channel maps, is determined based on the pixel required by the current position; and the first single-color pixel is extracted from the corresponding position of the target single-color channel map as the pixel of the current position in the Bayer array image to be generated, until the pixels of all positions are generated. The Bayer array image can thus be generated accurately, and can subsequently be input to a processor for Bayer array processing, making the image sensor that includes full-color pixel points compatible with that processor.
In another embodiment, after generating the first target image, the method further includes: generating a first full-color channel map and a plurality of first single-color channel maps according to the first target image, and combining the first single-color channel maps to generate a Bayer array image; the pixels in the first full-color channel map are all first full-color pixels, and the pixels in each first single-color channel map are all first single-color pixels of the same kind.
For example, if the first target image is an RGBW image, the electronic device may generate an R channel map, a G channel map, a B channel map, and a W channel map based on the first target image, and then combine the R channel map, the G channel map, and the B channel map to generate a bayer array image, that is, an RGB image.
In one embodiment, the photosensitive data obtained by exposing each minimum pixel point repeating unit are combined to obtain a first pixel region in the first target image; the first pixel area is formed by 4 rows and 8 columns of 32 pixels, and the arrangement mode is as follows:
w2 a2 w2 a2 w2 b2 w2 b2
w2 a2 w2 a2 w2 b2 w2 b2
w2 b2 w2 b2 w2 c2 w2 c2
w2 b2 w2 b2 w2 c2 w2 c2
where w2 denotes a first full-color pixel, and a2, b2, and c2 each denote a first single-color pixel.
In one embodiment, after obtaining the single color photosensitive data corresponding to the single color pixel point and the full color photosensitive data corresponding to the full color pixel point, the method further includes: responding to a second-level merging instruction, merging all single-color photosensitive data obtained by the pixel point sub-units to obtain a second single-color pixel, and merging all full-color photosensitive data obtained by the pixel point sub-units to obtain a second full-color pixel; a second target image is generated based on each second single color pixel and each second panchromatic pixel.
The second-level merging instruction is an instruction corresponding to merging all single-color photosensitive data obtained by the pixel point sub-unit and merging all full-color photosensitive data obtained by the pixel point sub-unit.
The second single-color pixel is a pixel obtained by combining the single-color photosensitive data obtained by the pixel point subunit. The second full-color pixel is a pixel obtained by combining full-color photosensitive data obtained by the pixel point subunit. The same pixel sub-unit comprises a plurality of single-color pixels and a plurality of full-color pixels, so that the same pixel sub-unit can obtain a second single-color pixel and a second full-color pixel.
The second target image is an image generated based on each second single color pixel and each second full color pixel.
In one embodiment, the electronic device combines each second single color pixel and each second panchromatic pixel to generate a second target image.
In another embodiment, the electronic device performs a filtering process on each second single color pixel and each second panchromatic pixel, and combines each second single color pixel and each second panchromatic pixel after the filtering process to generate the second target image. The electronic device performs filtering processing on each second single-color pixel and each second full-color pixel, so that noise in each second single-color pixel and each second full-color pixel can be filtered, and a second target image can be generated more accurately.
In another embodiment, the electronic device may also process the designated second single color pixel or the designated second panchromatic pixel to generate a second target image. The processing may include filtering processing, deleting processing, interpolation processing, pixel value adjustment processing, or the like.
In other embodiments, the electronic device may also generate the second target image in other ways, which are not limited herein.
In this embodiment, the electronic device exposes each pixel point in the pixel point array to obtain single-color photosensitive data and full-color photosensitive data, and combines them to obtain second single-color pixels and second full-color pixels respectively. Because the second full-color pixels capture a larger amount of light, the overall brightness of the image is improved, so that the second target image can be generated more accurately based on the second single-color pixels and the second full-color pixels, thereby improving the signal-to-noise ratio of the image.
In addition, during pixel merging, the single-color photosensitive data obtained by each pixel point subunit are merged into a second single-color pixel, and the full-color photosensitive data obtained by each pixel point subunit are merged into a second full-color pixel. Since each second single-color pixel or second full-color pixel is obtained by merging the photosensitive data within a single pixel point subunit, the target image can be output more quickly, and the sensitivity of image output is improved.
In one embodiment, the image sensor further comprises a color filter array, the color filter array comprises a minimum color filter repeating unit, each minimum color filter repeating unit comprises a plurality of color filter subunits, each color filter subunit comprises a plurality of single-color filters and a plurality of full-color filters, each color filter in the color filter array corresponds to each pixel point in the pixel point array one by one, and light filtered by the color filter is projected to the corresponding pixel point to obtain photosensitive data; the minimum color filter repeating unit is 8 rows and 8 columns of 64 color filters, and the arrangement mode is as follows:
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
wherein w1 represents a full-color filter, and a1, b1 and c1 each represent a single-color filter;
combining the photosensitive data obtained by exposing each minimum pixel point repeating unit to obtain a second pixel region in a second target image; the second pixel area is 2 rows and 4 columns of 8 pixels, and the arrangement mode is as follows:
w3 a3 w3 b3
w3 b3 w3 c3
where w3 denotes a second full-color pixel, and a3, b3, and c3 each denote a second single-color pixel.
FIG. 6 is a schematic diagram of a two-stage merging scheme in one embodiment. The electronic device responds to a two-level merging instruction, wherein 602 is one pixel point subunit, and for the pixel point subunit 602, 8 single-color photosensitive data obtained by the pixel point subunit 602 are merged to obtain a second single-color pixel, and 8 full-color photosensitive data obtained by the pixel point subunit 602 are merged to obtain a second full-color pixel; the other pixel sub-units are also combined in this way, and a second target image 604 is generated based on each second single color pixel and each second panchromatic pixel.
In one embodiment, the merging of the single-color photosensitive data obtained by the pixel sub-unit to obtain a second single-color pixel, and merging the full-color photosensitive data obtained by the pixel sub-unit to obtain a second full-color pixel, includes: adding or averaging the single-color photosensitive data obtained by the pixel point subunit to obtain a second single-color pixel; and adding or averaging all the full-color photosensitive data obtained by the pixel point subunit to obtain a second full-color pixel.
For example, the electronic device obtains 8 single-color photosensitive data from the pixel sub-units in the pixel array, and adds the 8 single-color photosensitive data to obtain the second single-color pixel.
For another example, the electronics obtains 8 panchromatic photosensitive data from the pixel sub-units in the pixel array and adds the 8 panchromatic photosensitive data to obtain the second panchromatic pixel.
In another embodiment, the electronic device may further perform weighted average on each single-color photosensitive data obtained by the pixel sub-unit to obtain a second single-color pixel; and carrying out weighted average on all the full-color photosensitive data obtained by the pixel point subunit to obtain a second full-color pixel.
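The second-level merge of one pixel point subunit can be sketched as follows; a hedged Python illustration in which the subunit's samples are given as flat arrays and the function name is invented:

```python
import numpy as np

def bin_subunit(single_color_data, full_color_data, mode="sum"):
    """Merge all single-color samples of one pixel point subunit into a
    second single-color pixel, and all panchromatic samples into a
    second full-color pixel, by adding or averaging."""
    combine = np.sum if mode == "sum" else np.mean
    return combine(single_color_data), combine(full_color_data)
```

With the 8 single-color and 8 full-color samples of a subunit, this yields exactly one pixel of each kind, as in the examples above.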
In one embodiment, after generating the second target image, the method further includes: generating a second full-color channel map and a plurality of second single-color channel maps according to the second target image, and sequentially determining the pixel required by the current position from a Bayer array image to be generated; determining a target single-color channel map from the second single-color channel maps based on the pixel required by the current position, and extracting a second single-color pixel from the corresponding position of the target single-color channel map as the pixel of the current position in the Bayer array image to be generated, until the pixels of all positions in the Bayer array image to be generated are generated, so as to obtain the Bayer array image. The pixels in the second full-color channel map are all second full-color pixels, and the pixels in each second single-color channel map are all second single-color pixels of the same kind.
For example, the second full-color channel map is a W (White) channel map, in which all pixels are white pixels; a second single-color channel map may be an R (Red) channel map, in which all pixels are red pixels.
In one embodiment, the electronic device may extract pixels of the same kind from the second target image to generate a second full-color channel map and a plurality of second single-color channel maps. In another embodiment, the electronic device may split the second target image to obtain the second full-color channel map and the plurality of second single-color channel maps. In another embodiment, the electronic device may interpolate the second full-color channel map and each second single-color channel map from the pixels of the second target image. In other embodiments, the electronic device may generate the second full-color channel map and the plurality of second single-color channel maps in other manners, which are not limited herein.
For example, the second target image is an RGBW image, and the electronic device may generate an R-channel map, a G-channel map, a B-channel map, and a W-channel map based on the second target image. Wherein the R channel map includes R pixels, the G channel map includes G pixels, the B channel map includes B pixels, and the W channel map includes W pixels.
The target single color channel map is a single color channel map that coincides with the channel map to which the pixel required for the current position belongs.
For example, if the pixel required for the current position (3, 5) in the bayer array image to be generated is a G pixel, the target single-color channel map is a G channel map, and the G pixel is extracted from the (3, 5) position of the G channel map as the pixel of the current position (3, 5) in the bayer array image to be generated.
For another example, if the pixel required for the current position (100, 112) in the bayer array image to be generated is an R pixel, the target single-color channel map is an R channel map, and the R pixel is extracted from the (100, 112) position of the R channel map as the pixel of the current position (100, 112) in the bayer array image to be generated.
In this embodiment, the pixel required by the current position is sequentially determined from the Bayer array image to be generated; the target single-color channel map is determined from the second single-color channel maps based on the pixel required by the current position; and the second single-color pixel is extracted from the corresponding position of the target single-color channel map as the pixel of the current position in the Bayer array image to be generated, until the pixels of all positions are generated, so that the Bayer array image can be generated accurately.
In another embodiment, after generating the second target image, the method further includes: generating a second full-color channel map and a plurality of second single-color channel maps according to the second target image, and combining the second single-color channel maps to generate a Bayer array image; the pixels in the second full-color channel map are all second full-color pixels, and the pixels in each second single-color channel map are all second single-color pixels of the same kind.
For example, if the second target image is an RGBW image, the electronic device may generate an R channel map, a G channel map, a B channel map, and a W channel map based on the second target image, and then combine the R channel map, the G channel map, and the B channel map to generate a bayer array image, that is, an RGB image.
In another embodiment, there is also provided another image generation method including: exposing each pixel point in the pixel point array to obtain single-color photosensitive data corresponding to single-color pixel points and full-color photosensitive data corresponding to full-color pixel points; taking each single-color photosensitive data as a third single-color pixel, and taking each full-color photosensitive data as a third full-color pixel; a third target image is generated based on each third single color pixel and each third panchromatic pixel.
Each single-color photosensitive data is regarded as a third single-color pixel, each full-color photosensitive data is regarded as a third full-color pixel, and then a third target image, namely, an image of a full-output mode (full size), is generated based on the third single-color pixels and the third full-color pixels.
FIG. 7 is a schematic diagram of a full output mode in one embodiment. The image sensor controls the pixel array to read out the photosensitive data in a full-output mode (full size) in a row-by-row or column-by-column order, generating a third target image.
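The three readout modes imply different output resolutions: the 8×8 minimal repeating unit yields a 4×8 region after one-level merging, a 2×4 region after two-level merging, and is kept 1:1 in full-output mode. A small sketch of the implied sizes (the function name and mode labels are illustrative):

```python
def output_size(sensor_rows, sensor_cols, mode):
    """Output resolution per readout mode, from the region sizes given
    in the text: full size keeps every sample, one-level merging maps
    8x8 to 4x8 (rows halved), two-level merging maps 8x8 to 2x4."""
    row_div, col_div = {"full": (1, 1), "one_level": (2, 1), "two_level": (4, 2)}[mode]
    return sensor_rows // row_div, sensor_cols // col_div
```

This is why two-level merging trades resolution for faster output and higher per-pixel light, while full-output mode preserves the native resolution.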
In one embodiment, after generating the first target image, further comprising: identifying an attribute of the image sensor; acquiring a corresponding driving module identifier and an adjusting parameter based on the attribute of the image sensor; and calling a driving module corresponding to the driving module identifier to transmit the adjustment parameters to the processor, and adjusting the first target image by the processor by adopting the adjustment parameters to obtain a final output image.
It can be understood that the attribute of the image sensor may specifically indicate a front camera or a rear camera, a main camera or a secondary camera, and so on. Image sensors with different attributes correspond to different driving modules and adjustment parameters. The electronic device may invoke the CSID module to identify the attribute of the image sensor.
The driving module is a module that enables the image sensor and the processor to communicate with each other. An adjustment parameter is a parameter for adjusting an image. The adjustment parameters include at least a black level (OB) parameter, a lens shading correction (LSC) parameter, a dead pixel compensation (BPC, bad pixel correction) parameter, a demosaic (DM) parameter, a color correction (CC) parameter, a global tone mapping (GTM) parameter, and a color conversion parameter.
The processor may be a central processing unit (CPU), an image signal processor (ISP), or another processor for processing images.
Specifically, a driving module identifier and an adjustment parameter corresponding to the attribute of the image sensor are stored in a memory of the electronic equipment, and after the electronic equipment identifies the attribute of the image sensor, the corresponding driving module identifier and adjustment parameter can be obtained from the memory; and calling a driving module corresponding to the driving module identifier, sending the adjustment parameters to the processor through the driving module, and adjusting the first target image through the processor by adopting the adjustment parameters to obtain a final output image. The adjusting, by the processor, the first target image by using the adjustment parameter may specifically include sequentially performing, by the processor, black level, lens shading correction, dead point compensation, demosaicing, color correction, global tone mapping, and color conversion on the first target image by using the adjustment parameter.
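The lookup-then-adjust flow can be sketched as below; a hedged illustration in which the registry contents, driver identifiers, attribute keys, and stage callables are all hypothetical stand-ins for what the electronic device stores in memory:

```python
# Hypothetical registry: sensor attribute -> (driver module id, adjustment params).
DRIVER_REGISTRY = {
    ("rear", "main"): ("drv_rgbw_rear_main", {"ob": 64, "lsc": "table_a"}),
    ("front", "main"): ("drv_rgbw_front_main", {"ob": 60, "lsc": "table_b"}),
}

# Fixed stage order given in the text.
PIPELINE_ORDER = [
    "black_level", "lens_shading_correction", "dead_pixel_compensation",
    "demosaic", "color_correction", "global_tone_mapping", "color_conversion",
]

def adjust_image(image, sensor_attr, stages):
    """Look up the driver id and adjustment parameters for the sensor,
    then run every processing stage in the fixed order."""
    driver_id, params = DRIVER_REGISTRY[sensor_attr]
    for name in PIPELINE_ORDER:
        image = stages[name](image, params)
    return image, driver_id
```

In practice each entry of `stages` would be an ISP stage configured with its slice of `params`; the sketch only fixes the dispatch and ordering.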
Further, after the first target image is adjusted by the processor through the adjustment parameters, an adjusted first target image is obtained, and the adjusted first target image is subjected to color space conversion, so that an image in a specified color space is obtained.
The specified color space may be YUV, HSI (hue, saturation, intensity), or the like. Y in YUV represents luminance (Luma), i.e., the grayscale value, while U and V represent chrominance (Chroma), describing the color and saturation of a pixel.
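The RGB-to-YUV conversion mentioned here can be illustrated with the common BT.601 analog coefficients; the patent does not fix a particular matrix, so these coefficients are an assumption:

```python
def rgb_to_yuv(r, g, b):
    """BT.601 analog conversion: Y is luminance (the grayscale value),
    U and V are chrominance, describing color and saturation."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v
```

For a neutral gray (r = g = b) the chrominance components U and V are zero, matching the description of Y as the grayscale value.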
In this embodiment, when the attribute of the image sensor is identified, the corresponding driving module identifier and the adjustment parameter can be obtained based on the attribute of the image sensor, and then the driving module corresponding to the driving module identifier is called to transmit the adjustment parameter to the processor, and the processor adjusts the first target image by adopting the adjustment parameter, so that the final output image can be accurately obtained.
In other embodiments, the first target image may be replaced by the second target image or the third target image.
In one embodiment, as shown in fig. 8, the electronic device generates a first target image through the image sensor, inputs the first target image into the CSID module, and identifies the attribute of the image sensor. It then acquires the driving module identifier and adjustment parameters corresponding to the attribute of the image sensor, calls the driving module to transmit the adjustment parameters to an image signal processor, sequentially performs black level, lens shading correction, dead pixel compensation and demosaicing on the first target image through the image signal processor using the adjustment parameters to obtain an intermediate image, sequentially performs color correction, global tone mapping and color conversion on the intermediate image to obtain a processed image, and performs color space conversion on the processed image to convert it into a YUV image. The image sensor may be an RGBW sensor, and the first target image an RGBW image.
The processing algorithms for black level, lens shading correction, dead pixel compensation and demosaicing can be customized for the RGBW first target image, and are compatible with processing the first target image of the one-level merging mode, the second target image of the two-level merging mode, and the third target image of the full-output mode.
In one embodiment, the method further includes, before the step of calling the driving module corresponding to the driving module identifier to transmit the adjustment parameter to the processor and adjusting the first target image by the processor by using the adjustment parameter to obtain the final output image: converting the first target image into a bayer array image; calling a driving module corresponding to the driving module identifier, and adjusting the first target image by adopting the adjustment parameters to obtain a final output image, wherein the method comprises the following steps: and calling a driving module corresponding to the driving module identifier to transmit the adjustment parameters to the processor, and adjusting the Bayer array image by the processor by adopting the adjustment parameters to obtain a final output image.
Converting the first target image into a Bayer array image specifically includes: sequentially determining the pixel required by the current position from the Bayer array image to be generated; generating a first full-color channel map and a plurality of first single-color channel maps according to the first target image; determining a target single-color channel map from the first single-color channel maps based on the pixel required by the current position; and extracting a first single-color pixel from the corresponding position of the target single-color channel map as the pixel of the current position in the Bayer array image to be generated, until the pixels of all positions in the Bayer array image to be generated are generated, so as to obtain the Bayer array image. The pixels in the first full-color channel map are all first full-color pixels, and the pixels in each first single-color channel map are all first single-color pixels of the same kind.
In another embodiment, the electronic device may further directly combine the first single-color channel maps to generate the bayer array image.
In other embodiments, the first target image may be replaced by the second target image or the third target image.
The electronic device inputs at least one of the first target image, the second target image and the third target image into a conversion module, which converts it into a bayer array image. The conversion module contains three algorithms: one for converting the first target image into a bayer array image, one for the second target image, and one for the third target image. In one embodiment, all three algorithms are integrated in the conversion module. In another embodiment, the conversion module includes three independent sub-modules, each storing one of the three algorithms. Alternatively, the three algorithms may be implemented in software or as hardware modules: the software scheme has the advantage of high flexibility, while the hardware module has a high processing speed, enabling real-time video preview.
If the color space of the bayer array image is RGB, black level, lens shading correction, dead pixel compensation, demosaicing, color correction, global tone mapping and color conversion are sequentially performed on the bayer array image of RGB, so that the color space of the final output image is RGB. The electronic device performs color space conversion on the final output image, and can convert the processed image of RGB to obtain an image with a specified color space.
In one embodiment, as shown in fig. 9, the electronic device generates a first target image through the image sensor, inputs the first target image into a conversion module, and converts the first target image into a bayer array image through the conversion module. The image sensor may be an RGBW sensor, and the first target image an RGBW image. The electronic device inputs the bayer array image into the CSID module and identifies the attribute of the image sensor. It then acquires the driving module identifier and adjustment parameters corresponding to the attribute of the image sensor, calls the driving module to transmit the adjustment parameters to an image signal processor, sequentially performs black level, lens shading correction, dead pixel compensation and demosaicing on the bayer array image through the image signal processor using the adjustment parameters to obtain an intermediate image, sequentially performs color correction, global tone mapping and color conversion on the intermediate image to obtain a processed image, and performs color space conversion on the processed image to convert it into a YUV image.
In one embodiment, after generating the first target image, further comprising: generating a first panchromatic channel map and a bayer array image from the first target image; sequentially performing black level, lens shading correction, dead point compensation and demosaicing on the first full-color channel map and the Bayer array image to obtain a processed first full-color channel map and a processed Bayer array image; fusing the processed first full-color channel diagram and the processed Bayer array image to obtain a fused image; and sequentially carrying out color correction, global tone mapping and color conversion on the fusion image to obtain a final output image.
Generating a first full-color channel map and a bayer array image from the first target image specifically includes splitting the first target image into the first full-color channel map and a quad bayer image, and then converting the quad bayer image into the bayer array image. The electronic device inputs the quad bayer image into a Remosaic module, which converts it into the bayer array image.
The fusion image is an image obtained by fusing the processed first full-color channel map and the processed bayer array image. In this embodiment, the electronic device uses a Fusion algorithm to fuse the processed first full-color channel map and the processed bayer array image to obtain the fusion image. The processed bayer array image is an RGB image, so the fusion image is also an RGB image; fusing in the full-color channel map, which captures a larger amount of light, improves the sharpness and signal-to-noise ratio of the RGB fusion image.
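The Fusion algorithm itself is not specified in the text; as a toy stand-in, a simple alpha blend of the panchromatic plane into each RGB channel shows the shape of the operation (the blend weight and function name are invented):

```python
import numpy as np

def fuse_w_rgb(rgb, w, alpha=0.5):
    """Blend the higher-light panchromatic (W) plane into every RGB
    channel; alpha controls how strongly W brightens the result."""
    return (1.0 - alpha) * rgb + alpha * w[..., None]
```

A real Fusion algorithm would typically operate on luminance and preserve chroma; this sketch only illustrates the data flow (W plane plus Bayer-derived RGB image in, fused RGB image out).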
Further, after the electronic device sequentially performs color correction, global tone mapping and color conversion on the fusion image, a processed fusion image is obtained, and the processed fusion image can also be subjected to color space conversion, so that an image with a specified color space is obtained.
The specified color space may be YUV, HSI (hue, saturation, intensity), or the like. Y in YUV represents luminance (Luma), i.e., the grayscale value, while U and V represent chrominance (Chroma), describing the color and saturation of a pixel.
It should be noted that, in other embodiments, the first target image may be replaced by the second target image.
In one embodiment, as shown in fig. 10, the electronic device generates a first target image through the image sensor, inputs the first target image into a DDR (Double Data Rate) module, splits the first target image into a first full-color channel map and a quad bayer image through the DDR module, and inputs the quad bayer image into a Remosaic module, which converts it into a bayer array image. The electronic device inputs the first full-color channel map and the bayer array image into an image signal processor, which performs black level, lens shading correction, dead pixel compensation and demosaicing on each of them. The processed first full-color channel map and the processed bayer array image are then input into the DDR module, which calls a Fusion module to fuse them using a Fusion algorithm, so as to obtain a fusion image. Color correction, global tone mapping and color conversion are sequentially performed on the fusion image to obtain a processed fusion image, and color space conversion is performed on the processed fusion image to convert it into a YUV image.
It should be understood that although the steps in the flowcharts of figs. 4 and 8 to 10 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in figs. 4 and 8 to 10 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 11 is a block diagram showing the structure of an image generating apparatus according to an embodiment. As shown in fig. 11, there is provided an image generating apparatus applied to an electronic device including an image sensor including a pixel array including minimum pixel repeating units each including a plurality of pixel sub-units each including a plurality of single-color pixels and a plurality of full-color pixels, the single-color pixels and the full-color pixels in the pixel sub-units being alternately arranged in both row and column directions; the image generation device includes: an exposure module 1102, a merge module 1104, and an image generation module 1106, wherein:
And the exposure module 1102 is configured to expose each pixel in the pixel array to obtain single-color photosensitive data corresponding to the single-color pixel and full-color photosensitive data corresponding to the full-color pixel.
The merging module 1104 is configured to respond to the first-stage merging instruction, merge two single-color photosensitive data obtained in a first diagonal direction in the pixel point array to obtain a first single-color pixel, and merge two full-color photosensitive data obtained in a second diagonal direction in the pixel point array to obtain a first full-color pixel; wherein the first diagonal direction and the second diagonal direction are perpendicular to each other.
An image generation module 1106 is configured to generate a first target image based on each first single color pixel and each first panchromatic pixel.
The image generating apparatus includes an image sensor with a pixel point array composed of minimum pixel point repeating units; each minimum repeating unit includes a plurality of pixel point sub-units, and each sub-unit contains a plurality of single-color pixel points and a plurality of full-color pixel points arranged alternately in both the row and column directions, so that a larger amount of incoming light can be received through the full-color pixel points. The electronic device exposes each pixel point in the array to obtain single-color photosensitive data and full-color photosensitive data, which are combined into first single-color pixels and first full-color pixels respectively. Because the first full-color pixels capture more light, a higher overall light intake can be obtained based on the first single-color pixels and the first full-color pixels, which raises the overall brightness and signal-to-noise ratio of the image, so that the first target image is generated more accurately.
During pixel merging, the electronic device combines the two single-color photosensitive data obtained in the first diagonal direction of the pixel point array into a first single-color pixel, and the two full-color photosensitive data obtained in the second diagonal direction into a first full-color pixel. Since each output pixel is merged from only two photosensitive data, the target image can also be output faster, improving the speed of image output while balancing the retained image information against that output speed.
In one embodiment, the combining module 1104 is further configured to add or average two single-color photosensitive data obtained in the first diagonal direction in the pixel point array to obtain a first single-color pixel; and adding or averaging the two full-color photosensitive data obtained in the second diagonal direction in the pixel point array to obtain a first full-color pixel.
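The diagonal merge performed by the merging module 1104 can be sketched with strided slicing. Which diagonal holds the single-color photosites (here: the main diagonal of each 2x2 cell, with panchromatic sites on the anti-diagonal) is an assumption of this sketch:

```python
import numpy as np

def first_level_binning(raw, mode="average"):
    """First-level (diagonal) binning sketch.

    raw is an H x W mosaic in which, inside every 2x2 cell, the two
    single-colour photosites sit on one diagonal and the two panchromatic
    photosites on the other. The colour-on-main-diagonal layout is an
    assumption. Returns (colour, panchromatic), each H/2 x W/2.
    """
    tl = raw[0::2, 0::2]   # top-left of each 2x2 cell
    br = raw[1::2, 1::2]   # bottom-right (same diagonal as tl)
    tr = raw[0::2, 1::2]   # anti-diagonal pair
    bl = raw[1::2, 0::2]
    if mode == "average":
        return (tl + br) / 2.0, (tr + bl) / 2.0
    return tl + br, tr + bl   # mode == "sum"
```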
In one embodiment, the image generating module 1106 is further configured to sequentially determine pixels required for the current position from the bayer array image to be generated; generating a first full-color channel diagram and a plurality of first single-color channel diagrams according to a first target image, determining a target single-color channel diagram from the first single-color channel diagrams based on pixels required by the current position, and extracting first single-color pixels from the corresponding positions of the target single-color channel diagram as pixels of the current position in a Bayer array image to be generated until pixels of all positions in the Bayer array image to be generated are generated, so as to obtain a Bayer array image; the pixels in the first full-color channel diagrams are all first full-color pixels, and the pixels in each first single-color channel diagram are all same-kind first single-color pixels.
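The position-by-position assembly of the Bayer array image described above might be sketched as follows; the channel names and the 2x2 repeating pattern are illustrative stand-ins for the actual layout:

```python
import numpy as np

def build_bayer(channel_maps, pattern):
    """Assemble a Bayer mosaic from per-colour channel maps.

    channel_maps: dict mapping colour name -> H x W array (the first
    single-colour channel maps derived from the first target image).
    pattern: 2x2 nested list of colour names, e.g. [['a','b'],['b','c']],
    an illustrative stand-in for the repeating Bayer layout. For every
    output site, the required colour is determined from the pattern and
    the pixel is copied from the same position of that colour's map.
    """
    h, w = next(iter(channel_maps.values())).shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            colour = pattern[y % 2][x % 2]
            out[y, x] = channel_maps[colour][y, x]
    return out
```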
In one embodiment, the combining module 1104 is further configured to, in response to a second-level combining instruction, combine, for each pixel subunit, the single-color photosensitive data obtained by the pixel subunit to obtain a second single-color pixel, and combine the full-color photosensitive data obtained by the pixel subunit to obtain a second full-color pixel; the image generation module 1106 is further configured to generate a second target image based on each second single color pixel and each second panchromatic pixel.
In one embodiment, the merging module 1104 is further configured to add or average the single-color photosensitive data obtained by the pixel sub-units to obtain a second single-color pixel; and adding or averaging all the full-color photosensitive data obtained by the pixel point subunit to obtain a second full-color pixel.
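The second-level merge over a pixel point subunit can be sketched as masked averaging or summing. The 4x4 subunit size follows from the 8x8 minimum repeating unit described below, and the checkerboard photosite mask used in the test is an assumption:

```python
import numpy as np

def second_level_binning(raw, is_color, mode="average"):
    """Second-level binning sketch: collapse each 4x4 pixel subunit.

    raw: H x W mosaic; is_color: boolean mask, True at single-colour
    photosites and False at panchromatic ones. The 4x4 subunit size is
    an assumption taken from the 8x8 minimum repeating unit in the
    text. Returns (colour, panchromatic), each H/4 x W/4.
    """
    h, w = raw.shape
    colour = np.empty((h // 4, w // 4))
    pan = np.empty((h // 4, w // 4))
    for i, y in enumerate(range(0, h, 4)):
        for j, x in enumerate(range(0, w, 4)):
            tile = raw[y:y + 4, x:x + 4]
            mask = is_color[y:y + 4, x:x + 4]
            if mode == "average":
                colour[i, j] = tile[mask].mean()
                pan[i, j] = tile[~mask].mean()
            else:   # mode == "sum"
                colour[i, j] = tile[mask].sum()
                pan[i, j] = tile[~mask].sum()
    return colour, pan
```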
In one embodiment, the image sensor further comprises a color filter array, the color filter array comprises a minimum color filter repeating unit, each minimum color filter repeating unit comprises a plurality of color filter subunits, each color filter subunit comprises a plurality of single-color filters and a plurality of full-color filters, each color filter in the color filter array corresponds to each pixel point in the pixel point array one by one, and light filtered by the color filter is projected to the corresponding pixel point to obtain photosensitive data; the minimum color filter repeating unit is 8 rows and 8 columns of 64 color filters, and the arrangement mode is as follows:
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
Wherein w1 represents a full-color filter, and a1, b1 and c1 each represent a single-color filter;
combining the photosensitive data obtained by exposing each minimum pixel point repeating unit to obtain a second pixel region in a second target image; the second pixel area is 2 rows and 4 columns of 8 pixels, and the arrangement mode is as follows:
w3 a3 w3 b3
w3 b3 w3 c3
where w3 denotes a second full-color pixel, and a3, b3, and c3 each denote a second single-color pixel.
In one embodiment, the image sensor further comprises a color filter array, the color filter array comprises a minimum color filter repeating unit, each minimum color filter repeating unit comprises a plurality of color filter subunits, each color filter subunit comprises a plurality of single-color filters and a plurality of full-color filters, each color filter in the color filter array corresponds to each pixel point in the pixel point array one by one, and light filtered by the color filter is projected to the corresponding pixel point to obtain photosensitive data; the minimum color filter repeating unit is 8 rows and 8 columns of 64 color filters, and the arrangement mode is as follows:
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
wherein w1 represents a full-color filter, and a1, b1 and c1 each represent a single-color filter.
In one embodiment, the photosensitive data obtained by exposing each minimum pixel point repeating unit are combined to obtain a first pixel region in the first target image; the first pixel area is 4 rows and 8 columns of 16 pixels, and the arrangement mode is as follows:
w2 a2 w2 a2 w2 b2 w2 b2
w2 a2 w2 a2 w2 b2 w2 b2
w2 b2 w2 b2 w2 c2 w2 c2
w2 b2 w2 b2 w2 c2 w2 c2
Where w2 denotes a first full-color pixel, and a2, b2, and c2 each denote a first single-color pixel.
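The mapping from the 8x8 minimum color filter unit to the quoted 4-row, 8-column first pixel region can be verified with a short script; the convention of writing the fused panchromatic pixel before the fused color pixel is read off the arrangement above:

```python
# The 8x8 minimum filter unit from the text, as colour labels; 'w' is
# panchromatic. First-level binning fuses the two same-role photosites
# of each 2x2 cell, and the two fused values are written side by side,
# giving the 4-row x 8-column first pixel region quoted above.
UNIT = [row.split() for row in [
    "a w a w b w b w",
    "w a w a w b w b",
    "a w a w b w b w",
    "w a w a w b w b",
    "b w b w c w c w",
    "w b w b w c w c",
    "b w b w c w c w",
    "w b w b w c w c",
]]

def first_level_layout(unit):
    out = []
    for y in range(0, 8, 2):
        row = []
        for x in range(0, 8, 2):
            cell = {unit[y][x], unit[y][x + 1]}  # colours in this 2x2 cell
            colour = (cell - {"w"}).pop()        # the non-panchromatic one
            row += ["w", colour]                 # panchromatic written first
        out.append(row)
    return out
```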
In one embodiment, the apparatus further comprises an adjustment module for identifying an attribute of the image sensor; acquiring a corresponding driving module identifier and an adjusting parameter based on the attribute of the image sensor; and calling a driving module corresponding to the driving module identifier to transmit the adjustment parameters to the processor, and adjusting the first target image by the processor by adopting the adjustment parameters to obtain a final output image.
In one embodiment, the adjustment module is further configured to convert the first target image into a bayer array image; and calling a driving module corresponding to the driving module identifier to transmit the adjustment parameters to the processor, and adjusting the Bayer array image by the processor by adopting the adjustment parameters to obtain a final output image.
In one embodiment, the adjustment module is further configured to generate a first panchromatic channel map and a Bayer array image from the first target image; sequentially perform black level correction, lens shading correction, dead pixel compensation and demosaicing on each of the first panchromatic channel map and the Bayer array image to obtain a processed first panchromatic channel map and a processed Bayer array image; fuse the processed first panchromatic channel map and the processed Bayer array image to obtain a fused image; and sequentially perform color correction, global tone mapping and color conversion on the fused image to obtain a final output image.
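The tail of this pipeline, color correction followed by global tone mapping, can be sketched as below. The identity color-correction matrix and the gamma-style tone curve are placeholder assumptions; a production ISP tunes both per sensor:

```python
import numpy as np

def post_process(fused, ccm=None, gamma=2.2):
    """Post-processing sketch for the fused image.

    Applies colour correction via a 3x3 colour-correction matrix, then
    a simple global tone mapping (gamma curve). The identity CCM and
    the gamma value are placeholder assumptions, not tuned parameters.
    fused: H x W x 3 linear RGB in 0..1.
    """
    if ccm is None:
        ccm = np.eye(3)
    corrected = np.clip(fused @ ccm.T, 0.0, 1.0)   # colour correction
    return corrected ** (1.0 / gamma)              # global tone mapping
```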
The division of the various modules in the image generation device described above is for illustration only, and in other embodiments, the image generation device may be divided into different modules as needed to perform all or part of the functions of the image generation device described above.
For specific limitations of the image generating apparatus, reference may be made to the limitations of the image generating method above, which are not repeated here. Each module in the image generating apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. Each module may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
Fig. 12 is a schematic diagram of the internal structure of an electronic device in one embodiment. The electronic device may be any terminal device such as a mobile phone, tablet computer, notebook computer, desktop computer, PDA (Personal Digital Assistant), POS (Point of Sale) terminal, in-vehicle computer, or wearable device. The electronic device includes a processor and a memory connected by a system bus. The processor may include one or more processing units and may be a CPU (Central Processing Unit), a DSP (Digital Signal Processor), or the like. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, which can be executed by the processor to implement the image generation method provided in the embodiments. The internal memory provides a cached runtime environment for the operating system and the computer program in the non-volatile storage medium.
The implementation of each module in the image generating apparatus provided in the embodiment of the present application may be in the form of a computer program. The computer program may run on a terminal or a server. Program modules of the computer program may be stored in the memory of the electronic device. Which when executed by a processor, performs the steps of the method described in the embodiments of the application.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of an image generation method.
Embodiments of the present application also provide a computer program product containing instructions which, when run on a computer, cause the computer to perform an image generation method.
Any reference to memory, storage, a database, or another medium used in the present application may include non-volatile and/or volatile memory. The non-volatile memory may include ROM (Read-Only Memory), PROM (Programmable ROM), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), or flash memory. The volatile memory may include RAM (Random Access Memory), which acts as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as SRAM (Static RAM), DRAM (Dynamic RAM), SDRAM (Synchronous DRAM), DDR SDRAM (Double Data Rate Synchronous DRAM), ESDRAM (Enhanced Synchronous DRAM), SLDRAM (Sync Link DRAM), RDRAM (Rambus DRAM), and DRDRAM (Direct Rambus DRAM).
The foregoing embodiments illustrate only a few implementations of the present application, and although they are described in detail, they should not be construed as limiting the scope of the application. It should be noted that several variations and improvements can be made by those skilled in the art without departing from the concept of the application, all of which fall within the scope of protection of the application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (14)

1. An image generation method, which is applied to an electronic device including an image sensor, wherein the image sensor includes a pixel point array including minimum pixel point repeating units, each of the minimum pixel point repeating units includes a plurality of pixel point sub-units, each of the pixel point sub-units includes at least eight single-color pixel points and at least eight full-color pixel points, and the single-color pixel points and the full-color pixel points in the pixel point sub-units are alternately arranged in a row direction and a column direction; the method comprises the following steps:
exposing each pixel point in the pixel point array to obtain single-color photosensitive data corresponding to single-color pixel points and full-color photosensitive data corresponding to full-color pixel points;
in response to a first-level merging instruction, merging two single-color photosensitive data obtained in a first diagonal direction in the pixel point array to obtain a first single-color pixel, and merging two full-color photosensitive data obtained in a second diagonal direction in the pixel point array to obtain a first full-color pixel, wherein the first diagonal direction and the second diagonal direction are perpendicular to each other; and generating a first target image based on each of the first single-color pixels and each of the first full-color pixels;
in response to a second-level merging instruction, merging, for each pixel point sub-unit, the single-color photosensitive data obtained by the pixel point sub-unit to obtain a second single-color pixel, and merging the full-color photosensitive data obtained by the pixel point sub-unit to obtain a second full-color pixel; and generating a second target image based on each of the second single-color pixels and each of the second full-color pixels.
2. The method according to claim 1, wherein the merging the two single-color photosensitive data obtained in the first diagonal direction in the pixel point array to obtain a first single-color pixel, and merging the two full-color photosensitive data obtained in the second diagonal direction in the pixel point array to obtain a first full-color pixel, includes:
Adding or averaging two single-color photosensitive data obtained in a first diagonal direction in the pixel point array to obtain a first single-color pixel;
and adding or averaging the two full-color photosensitive data obtained in the second diagonal direction in the pixel point array to obtain a first full-color pixel.
3. The method of claim 1, wherein after the first target image is generated, the method further comprises:
sequentially determining pixels required by the current position from Bayer array images to be generated;
generating a first full-color channel map and a plurality of first single-color channel maps according to the first target image, determining a target single-color channel map from each of the first single-color channel maps based on pixels required by the current position, and extracting first single-color pixels from the corresponding positions of the target single-color channel map as pixels of the current position in a Bayer array image to be generated until pixels of all positions in the Bayer array image to be generated are generated, so as to obtain a Bayer array image; the pixels in the first full-color channel diagram are all first full-color pixels, and the pixels in each first single-color channel diagram are all same-type first single-color pixels.
4. A method according to any one of claims 1 to 3, wherein the image sensor further comprises a color filter array, the color filter array comprises a minimum color filter repeating unit, each minimum color filter repeating unit comprises a plurality of color filter subunits, each color filter subunit comprises at least eight single-color filters and at least eight full-color filters, each color filter in the color filter array corresponds to each pixel point in the pixel point array one by one, and light filtered by the color filters is projected onto the corresponding pixel point to obtain photosensitive data.
5. The method of claim 4 wherein the minimum color filter repeat unit is 8 rows and 8 columns of 64 color filters arranged in a manner:
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
wherein w1 represents a full-color filter, and a1, b1 and c1 each represent a single-color filter.
6. The method of claim 5, wherein the photosensitive data obtained by exposing each of the minimum pixel point repeating units is combined to obtain a first pixel region in the first target image;
the first pixel area is formed by 4 rows, 8 columns and 16 pixels, and the arrangement mode is as follows:
w2 a2 w2 a2 w2 b2 w2 b2
w2 a2 w2 a2 w2 b2 w2 b2
w2 b2 w2 b2 w2 c2 w2 c2
w2 b2 w2 b2 w2 c2 w2 c2
where w2 denotes a first full-color pixel, and a2, b2, and c2 each denote a first single-color pixel.
7. The method of claim 1, wherein combining the single color photosensitive data obtained by the pixel sub-units to obtain a second single color pixel, and combining the full color photosensitive data obtained by the pixel sub-units to obtain a second full color pixel, comprises:
adding or averaging the single-color photosensitive data obtained by the pixel point subunit to obtain a second single-color pixel;
and adding or averaging all the full-color photosensitive data obtained by the pixel point subunit to obtain a second full-color pixel.
8. The method of claim 4 wherein the minimum color filter repeat unit is 8 rows and 8 columns of 64 color filters arranged in a manner:
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
wherein w1 represents a full-color filter, and a1, b1 and c1 each represent a single-color filter;
combining the photosensitive data obtained by exposing the minimum pixel point repeating unit to obtain a second pixel region in the second target image; the second pixel area is 2 rows, 4 columns and 8 pixels, and the arrangement mode is as follows:
w3 a3 w3 b3
w3 b3 w3 c3
where w3 denotes a second full-color pixel, and a3, b3, and c3 each denote a second single-color pixel.
9. The method of claim 1, wherein after the first target image is generated, the method further comprises:
Identifying an attribute of the image sensor;
acquiring a corresponding driving module identifier and an adjusting parameter based on the attribute of the image sensor;
and calling a driving module corresponding to the driving module identifier to transmit the adjustment parameters to a processor, and adjusting the first target image by the processor by adopting the adjustment parameters to obtain a final output image.
10. The method of claim 9, wherein the calling the driving module corresponding to the driving module identifier transmits the adjustment parameter to a processor, and the processor adjusts the first target image by using the adjustment parameter, and before obtaining the final output image, the method further comprises:
converting the first target image into a bayer array image;
the step of calling the driving module corresponding to the driving module identifier, and adjusting the first target image by adopting the adjustment parameters to obtain a final output image, comprising the following steps:
and calling a driving module corresponding to the driving module identifier to transmit the adjustment parameters to a processor, and adjusting the Bayer array image by the processor by adopting the adjustment parameters to obtain a final output image.
11. The method of claim 1, wherein after the first target image is generated, the method further comprises:
generating a first panchromatic channel map and a bayer array image from the first target image;
sequentially performing black level correction, lens shading correction, dead pixel compensation and demosaicing on each of the first full-color channel map and the Bayer array image to obtain a processed first full-color channel map and a processed Bayer array image;
fusing the processed first full-color channel diagram and the processed Bayer array image to obtain a fused image;
and sequentially carrying out color correction, global tone mapping and color conversion on the fusion image to obtain a final output image.
12. An image generating apparatus, characterized in that it is applied to an electronic device including an image sensor including a pixel array including minimum pixel repeating units each including a plurality of pixel sub-units each including at least eight single-color pixels and at least eight full-color pixels, the single-color pixels and the full-color pixels in the pixel sub-units being alternately arranged in both row and column directions; the device comprises:
The exposure module is used for exposing each pixel point in the pixel point array to obtain single-color photosensitive data corresponding to the single-color pixel point and full-color photosensitive data corresponding to the full-color pixel point;
the merging module is configured to, in response to a first-level merging instruction, merge the two single-color photosensitive data obtained in the first diagonal direction in the pixel point array to obtain a first single-color pixel, and merge the two full-color photosensitive data obtained in the second diagonal direction in the pixel point array to obtain a first full-color pixel; wherein the first diagonal direction and the second diagonal direction are perpendicular to each other;
an image generation module for generating a first target image based on each of the first single color pixels and each of the first full color pixels;
the merging module is further configured to, in response to a second-level merging instruction, merge, for each pixel point subunit, the single-color photosensitive data obtained by the pixel point subunit to obtain a second single-color pixel, and merge the full-color photosensitive data obtained by the pixel point subunit to obtain a second full-color pixel;
the image generation module is further configured to generate a second target image based on each of the second single color pixels and each of the second panchromatic pixels.
13. An electronic device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to perform the steps of the image generation method of any of claims 1 to 11.
14. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method according to any one of claims 1 to 11.
CN202110937243.8A 2021-08-16 2021-08-16 Image generation method, device, electronic equipment and computer readable storage medium Active CN113676675B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110937243.8A CN113676675B (en) 2021-08-16 2021-08-16 Image generation method, device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110937243.8A CN113676675B (en) 2021-08-16 2021-08-16 Image generation method, device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113676675A CN113676675A (en) 2021-11-19
CN113676675B true CN113676675B (en) 2023-08-15

Family

ID=78542975

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110937243.8A Active CN113676675B (en) 2021-08-16 2021-08-16 Image generation method, device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113676675B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114040084A (en) * 2021-12-01 2022-02-11 Oppo广东移动通信有限公司 Image sensor, camera module, electronic equipment, image generation method and device
CN114693580B (en) * 2022-05-31 2022-10-18 荣耀终端有限公司 Image processing method and related device
CN115442573B (en) * 2022-08-23 2024-05-07 深圳市汇顶科技股份有限公司 Image processing method and device and electronic equipment
CN115696063A (en) * 2022-09-13 2023-02-03 荣耀终端有限公司 Photographing method and electronic equipment
CN115866422A (en) * 2022-11-24 2023-03-28 威海华菱光电股份有限公司 Pixel data determination method and device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101233762A (en) * 2005-07-28 2008-07-30 伊斯曼柯达公司 Image sensor with improved lightsensitivity
CN105578080A (en) * 2015-12-18 2016-05-11 广东欧珀移动通信有限公司 Imaging method, image sensor, imaging device and electronic device
CN111586323A (en) * 2020-05-07 2020-08-25 Oppo广东移动通信有限公司 Image sensor, control method, camera assembly and mobile terminal
CN112118378A (en) * 2020-10-09 2020-12-22 Oppo广东移动通信有限公司 Image acquisition method and device, terminal and computer readable storage medium
CN112261391A (en) * 2020-10-26 2021-01-22 Oppo广东移动通信有限公司 Image processing method, camera assembly and mobile terminal


Also Published As

Publication number Publication date
CN113676675A (en) 2021-11-19

Similar Documents

Publication Publication Date Title
CN113676675B (en) Image generation method, device, electronic equipment and computer readable storage medium
WO2021208593A1 (en) High dynamic range image processing system and method, electronic device, and storage medium
CN112261391B (en) Image processing method, camera assembly and mobile terminal
WO2021196554A1 (en) Image sensor, processing system and method, electronic device, and storage medium
CN213279832U (en) Image sensor, camera and terminal
US10136107B2 (en) Imaging systems with visible light sensitive pixels and infrared light sensitive pixels
JP6134879B2 (en) Imaging system with transparent filter pixels
WO2021212763A1 (en) High-dynamic-range image processing system and method, electronic device and readable storage medium
CN111741277B (en) Image processing method and image processing device
KR102287944B1 (en) Apparatus for outputting image and method thereof
CN111479071B (en) High dynamic range image processing system and method, electronic device, and readable storage medium
CN113676636B (en) Method and device for generating high dynamic range image, electronic equipment and storage medium
CN112118378A (en) Image acquisition method and device, terminal and computer readable storage medium
WO2021223364A1 (en) High-dynamic-range image processing system and method, electronic device, and readable storage medium
CN114125242A (en) Image sensor, camera module, electronic equipment, image generation method and device
CN113676635B (en) Method and device for generating high dynamic range image, electronic equipment and storage medium
CN114040084A (en) Image sensor, camera module, electronic equipment, image generation method and device
CN114338988A (en) Image generation method and device, electronic equipment and computer-readable storage medium
CN111970460B (en) High dynamic range image processing system and method, electronic device, and readable storage medium
WO2022073364A1 (en) Image obtaining method and apparatus, terminal, and computer readable storage medium
CN111970461B (en) High dynamic range image processing system and method, electronic device, and readable storage medium
JP7298020B2 (en) Image capture method, camera assembly and mobile terminal
CN114554046A (en) Image sensor, camera module, electronic equipment, image generation method and device
JP2013219452A (en) Color signal processing circuit, color signal processing method, color reproduction evaluation method, imaging apparatus, electronic apparatus and testing apparatus
TWI536765B (en) Imaging systems with clear filter pixels

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant