WO2018098978A1 - Control method, control device, electronic device and computer-readable storage medium - Google Patents

Control method, control device, electronic device and computer-readable storage medium

Info

Publication number
WO2018098978A1
WO2018098978A1 (PCT/CN2017/081724, CN2017081724W)
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
image
unit
color
interpolation algorithm
Prior art date
Application number
PCT/CN2017/081724
Other languages
English (en)
French (fr)
Inventor
韦怡
Original Assignee
广东欧珀移动通信有限公司
Priority date
Filing date
Publication date
Application filed by 广东欧珀移动通信有限公司
Publication of WO2018098978A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N 25/44 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by partially reading an SSIS array
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/387 Composing, repositioning or otherwise geometrically modifying originals
    • H04N 1/3871 Composing, repositioning or otherwise geometrically modifying originals the composed originals being of different kinds, e.g. low- and high-resolution originals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/62 Control of parameters via user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4007 Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4015 Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/50 Constructional details
    • H04N 23/55 Optical parts specially adapted for electronic image sensors; Mounting thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/84 Camera processing pipelines; Components thereof for processing colour signals
    • H04N 23/843 Demosaicing, e.g. interpolating colour pixel values
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/10 Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N 25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N 25/13 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N 25/134 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N 25/46 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by combining or binning pixels
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N 25/62 Detection or reduction of noise due to excess charges produced by the exposure, e.g. smear, blooming, ghost image, crosstalk or leakage between pixels
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/64 Circuits for processing colour signals
    • H04N 9/646 Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 2209/00 Details of colour television systems
    • H04N 2209/04 Picture signal generators
    • H04N 2209/041 Picture signal generators using solid-state devices
    • H04N 2209/042 Picture signal generators using solid-state devices having a single pick-up sensor
    • H04N 2209/045 Picture signal generators using solid-state devices having a single pick-up sensor using mosaic colour filter
    • H04N 2209/046 Colour interpolation to calculate the missing colour values

Definitions

  • the present invention relates to image processing techniques, and more particularly to control methods, control devices, electronic devices, and computer readable storage media.
  • An existing image sensor includes a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array; each filter unit covers a corresponding one of the photosensitive pixel units, and each of the photosensitive pixel units includes a plurality of photosensitive pixels.
  • The image sensor may be controlled to expose and output a merged image. The merged image includes a merged pixel array, in which the plurality of photosensitive pixels of the same photosensitive pixel unit are combined and output as one merged pixel. In this way, the signal-to-noise ratio of the merged image is improved; however, the resolution of the merged image is lowered.
  • The image sensor may also be controlled to output a high-pixel-count patch image. The patch image includes an original pixel array, and each photosensitive pixel corresponds to one original pixel.
  • However, since the original pixels within a unit share one filter color, the resolution of the patch image cannot be improved directly. It is therefore necessary to convert the high-pixel patch image into a high-pixel pseudo original image by interpolation; the pseudo original image includes pseudo original pixels arranged in a Bayer array.
  • The pseudo original image can then be converted into a true color image by a control method and saved. Interpolation improves the sharpness of the true color image, but it is resource-intensive and time-consuming, resulting in longer shooting times and a poor user experience. Moreover, in practice users tend to care only about the sharpness of the main part of the true color image.
  • Embodiments of the present invention provide a control method, a control device, an electronic device, and a computer readable storage medium.
  • A control method of an embodiment of the present invention is for controlling an electronic device. The electronic device comprises an imaging device and a touch screen; the imaging device comprises an image sensor; and the image sensor comprises a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array, each filter unit covering a corresponding one of the photosensitive pixel units, and each photosensitive pixel unit comprising a plurality of photosensitive pixels. The control method comprises the following steps:
  • the step of converting the patch image into the pseudo original image comprises the following steps:
  • when the color of the current pixel is the same as that of its associated pixel, the pixel value of the associated pixel is used as the pixel value of the current pixel;
  • A control device of an embodiment of the present invention is for controlling an electronic device. The electronic device comprises an imaging device and a touch screen; the imaging device comprises an image sensor; and the image sensor comprises a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array, each filter unit covering a corresponding one of the photosensitive pixel units, and each photosensitive pixel unit comprising a plurality of photosensitive pixels. The control device comprises an output module, a selection module, a first conversion module, and a merge module.
  • The selection module is configured to determine a predetermined area on the merged image according to a user input. The first conversion module is configured to convert the patch image into a pseudo original image using a first interpolation algorithm; the pseudo original image includes an array of pseudo original pixels, the pseudo original pixels include a current pixel, the patch image includes an associated pixel corresponding to the current pixel, and the patch image includes a predetermined area. The first conversion module includes a first determining module, a second determining module, a first calculating module, a second calculating module, and a third calculating module. The first determining module is configured to determine whether the associated pixel is located in the predetermined area; the second determining module is configured to determine, when the associated pixel is located in the predetermined area, whether the color of the current pixel is the same as the color of the associated pixel; and the first calculating module is configured, when the two colors are the same, to use the pixel value of the associated pixel as the pixel value of the current pixel.
  • An electronic device includes an imaging device, a touch screen, and the above-described control device.
  • An electronic device includes a housing, a processor, a memory, a circuit board, and a power supply circuit.
  • the circuit board is disposed inside a space enclosed by the casing, the processor and the memory are disposed on the circuit board; and the power circuit is configured to supply power to each circuit or device of the electronic device;
  • the memory is for storing executable program code; the processor runs a program corresponding to the executable program code by reading executable program code stored in the memory for executing the control method described above.
  • A computer-readable storage medium in accordance with an embodiment of the present invention has instructions stored therein.
  • When the processor of the electronic device executes the instructions, the electronic device performs the control method described above.
  • The control method, control device, and electronic device of embodiments of the present invention process the image within the predetermined area of the patch image using the first interpolation algorithm, improving the resolution and sharpness of that area, and process the merged image using the second interpolation algorithm, whose complexity is less than that of the first interpolation algorithm. This preserves the signal-to-noise ratio of the merged image while improving the resolution and sharpness of the main part of the image, and reduces the data to be processed and the processing time, thereby improving the user experience.
  • FIG. 1 is a schematic flow chart of a control method according to an embodiment of the present invention.
  • FIG. 2 is another schematic flowchart of a control method according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of functional modules of a control device according to an embodiment of the present invention.
  • FIG. 4 is a schematic block diagram of an image sensor according to an embodiment of the present invention.
  • FIG. 5 is a circuit diagram of an image sensor according to an embodiment of the present invention.
  • FIG. 6 is a schematic view of a filter unit according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of an image sensor according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a merged image state according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram showing a state of a patch image according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram showing a state of a control method according to an embodiment of the present invention.
  • FIG. 11 is a schematic flow chart of a control method according to some embodiments of the present invention.
  • FIG. 12 is a schematic diagram of functional modules of a second computing module according to some embodiments of the present invention.
  • FIG. 13 is a schematic flow chart of a control method according to some embodiments of the present invention.
  • FIG. 14 is a schematic diagram of functional blocks of a control device according to some embodiments of the present invention.
  • FIG. 15 is a schematic diagram of an image merging state of a control method according to some embodiments of the present invention.
  • FIG. 16 is a schematic flow chart of a control method according to some embodiments of the present invention.
  • FIG. 17 is a functional block diagram of a selection module according to some embodiments of the present invention.
  • FIG. 19 is a schematic diagram of functional modules of a second processing module according to some embodiments of the present invention.
  • FIG. 20 is a schematic diagram showing the state of a control method according to some embodiments of the present invention.
  • FIG. 21 is a schematic flow chart of a control method according to some embodiments of the present invention.
  • FIG. 22 is a schematic diagram of functional modules of a second processing module according to some embodiments of the present invention.
  • FIG. 23 is a schematic flow chart of a control method according to some embodiments of the present invention.
  • FIG. 24 is a schematic diagram of functional modules of an extension unit according to some embodiments of the present invention.
  • FIG. 25 is a schematic diagram showing the state of a control method according to some embodiments of the present invention.
  • FIG. 26 is a schematic diagram of functional modules of an electronic device according to some embodiments of the present invention.
  • FIG. 27 is a schematic diagram of functional modules of an electronic device according to some embodiments of the present invention.
  • a control method of an embodiment of the present invention is used to control an electronic device 100 .
  • the electronic device 100 includes an imaging device 20 and a touch screen 30 .
  • The imaging device 20 includes an image sensor 21. The image sensor 21 includes a photosensitive pixel unit array 212 and a filter unit array 211 disposed on the photosensitive pixel unit array 212.
  • Each filter unit 211a covers a corresponding one of the photosensitive pixel units 212a, and each photosensitive pixel unit 212a includes a plurality of photosensitive pixels.
  • the control method includes the following steps:
  • control the image sensor 21 to output a merged image and a patch image of the same scene, where the merged image includes a merged pixel array in which the plurality of photosensitive pixels of the same photosensitive pixel unit are combined and output as one merged pixel, and the patch image includes image pixel units arranged in a predetermined array, each image pixel unit including a plurality of original pixels, with each photosensitive pixel corresponding to one original pixel;
  • calculate the pixel value of the current pixel from the pixel values of the associated pixel unit using a first interpolation algorithm, where the image pixel units include an associated pixel unit whose color is the same as that of the current pixel and which is adjacent to the current pixel; and
  • S139: convert the merged image into a restored image corresponding to the pseudo original image using a second interpolation algorithm, where the complexity of the second interpolation algorithm is less than that of the first interpolation algorithm;
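The steps above leave the choice of the second, lower-complexity interpolation open. As an illustrative sketch (the concrete algorithm is an assumption, not stated in this excerpt), a simple nearest-neighbour upscale of the merged image back to patch resolution would qualify:

```python
def second_interp(merged, scale=2):
    """Low-complexity stand-in for the second interpolation algorithm:
    nearest-neighbour upscaling of the merged image back to the patch
    image's resolution. The document only requires this algorithm to be
    cheaper than the first; the concrete choice here is illustrative."""
    return [[merged[r // scale][c // scale]
             for c in range(len(merged[0]) * scale)]
            for r in range(len(merged) * scale)]

restored = second_interp([[1, 2],
                          [3, 4]])
print(restored)  # each merged pixel expands into a 2x2 block
```

Each merged pixel is simply repeated over a scale x scale block, so the cost is one array copy with no arithmetic per pixel, far cheaper than the gradient-weighted first interpolation.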
  • control method of the embodiment of the present invention may be implemented by the control device 10.
  • the control device 10 is for controlling the electronic device 100.
  • the electronic device includes an imaging device 20 and a touch screen 30.
  • The imaging device 20 includes an image sensor 21. The image sensor 21 includes a photosensitive pixel unit array 212 and a filter unit array 211 disposed on the photosensitive pixel unit array 212; each filter unit 211a covers a corresponding photosensitive pixel unit 212a, and each photosensitive pixel unit 212a includes a plurality of photosensitive pixels 2121. The control device 10 includes an output module 11, a selection module 12, a first conversion module 13, and a merge module 14.
  • The output module 11 is configured to control the image sensor 21 to output the merged image and the patch image of the same scene; the selection module 12 is configured to determine a predetermined area on the merged image according to a user input; and the first conversion module 13 is configured to convert the patch image into a pseudo original image using the first interpolation algorithm. The pseudo original image includes pseudo original pixels arranged in an array; the pseudo original pixels include a current pixel, the patch image includes an associated pixel corresponding to the current pixel, and the patch image includes a predetermined area. The first conversion module 13 includes a first determining module 131, a second determining module 133, a first calculating module 135, a second calculating module 137, and a third calculating module 139. The first determining module 131 is configured to determine whether the associated pixel is located in the predetermined area; the second determining module 133 is configured to determine, when the associated pixel is located in the predetermined area, whether the color of the current pixel is the same as the color of the associated pixel; and the first calculating module 135 is configured, when the two colors are the same, to use the pixel value of the associated pixel as the pixel value of the current pixel.
  • the control method of the embodiment of the present invention processes the image in the predetermined region of the patch image by using the first interpolation algorithm, and processes the merged image by using the second interpolation algorithm.
  • the predetermined area can be input and selected by the user.
  • The complexity of an interpolation algorithm includes time complexity and space complexity; both the time complexity and the space complexity of the second interpolation algorithm are smaller than those of the first interpolation algorithm.
  • The image sensor 21 outputs the patch image and the merged image separately, and the more complex first interpolation algorithm is applied only to the image within the user-specified area of the patch image. This effectively reduces the data to be processed and the time required, while still improving the sharpness of the main part that the user cares about, that is, the predetermined area, thereby improving the user experience.
  • the image sensor 21 of the embodiment of the present invention includes a photosensitive pixel unit array 212 and a filter unit array 211 disposed on the photosensitive pixel unit array 212.
  • the photosensitive pixel unit array 212 includes a plurality of photosensitive pixel units 212a, each of which includes a plurality of adjacent photosensitive pixels 2121.
  • Each of the photosensitive pixels 2121 includes a photosensitive device 21211 and a transfer tube 21212, wherein the photosensitive device 21211 can be a photodiode, and the transfer tube 21212 can be a MOS transistor.
  • the filter unit array 211 includes a plurality of filter units 211a, each of which covers a corresponding one of the photosensitive pixel units 212a.
  • The filter unit array 211 is arranged as a Bayer array; that is, the four adjacent filter units 211a are respectively one red filter unit, one blue filter unit, and two green filter units.
  • Each photosensitive pixel unit 212a corresponds to a filter unit 211a of a single color: if one photosensitive pixel unit 212a includes a total of n adjacent photosensitive devices 21211, then one filter unit 211a covers the n photosensitive devices 21211 of that photosensitive pixel unit 212a. The filter unit 211a may be of integral construction, or may be assembled from n independent sub-filters connected together.
  • Each photosensitive pixel unit 212a includes four adjacent photosensitive pixels 2121, and every two adjacent photosensitive pixels 2121 together constitute one photosensitive pixel sub-unit 2120.
  • Each photosensitive pixel sub-unit 2120 further includes a source follower 21213 and an analog-to-digital converter 21214.
  • The photosensitive pixel unit 212a further includes an adder 2122. One electrode of each transfer tube 21212 of one photosensitive pixel sub-unit 2120 is connected to the cathode electrode of the corresponding photosensitive device 21211; the other electrodes of the transfer tubes 21212 are commonly connected to the gate electrode of the shared source follower 21213; and the source electrode of the source follower 21213 is connected to an analog-to-digital converter 21214.
  • the source follower 21213 may be a MOS transistor.
  • the two photosensitive pixel subunits 2120 are connected to the adder 2122 through respective source followers 21213 and analog to digital converters 21214.
  • In the image sensor 21 of the embodiment of the present invention, the four adjacent photosensitive devices 21211 of one photosensitive pixel unit 212a share one filter unit 211a of the same color, and each photosensitive device 21211 is connected to a transfer tube 21212; every two adjacent photosensitive devices 21211 share one source follower 21213 and one analog-to-digital converter 21214, and the four adjacent photosensitive devices 21211 share one adder 2122.
  • adjacent four photosensitive devices 21211 are arranged in a 2*2 array.
  • The two photosensitive devices 21211 in one photosensitive pixel sub-unit 2120 may be located in the same column. During imaging, the photosensitive pixels may be combined to output a merged image.
  • the photosensitive device 21211 is configured to convert illumination into electric charge, and the generated electric charge is proportional to the illumination intensity, and the transmission tube 21212 is configured to control the on or off of the circuit according to the control signal.
  • the source follower 21213 is configured to convert the charge signal generated by the light-sensing device 21211 into a voltage signal.
  • Analog to digital converter 21214 is used to convert the voltage signal to a digital signal.
  • the adder 2122 is for summing the two digital signals for common output for processing by the image processing module connected to the image sensor 21.
  • The image sensor 21 of the embodiment of the present invention can merge 16M of photosensitive pixels into 4M, that is, output a merged image; the merged photosensitive pixel is then equivalent to four times the original size, which improves the sensitivity of the photosensitive pixels.
  • Since the noise in the image sensor 21 is mostly random, noise may be present in one or two of the photosensitive pixels before they are combined; after the four photosensitive pixels are combined into one large photosensitive pixel, the influence of that noise on the large pixel is reduced, that is, the noise is attenuated and the signal-to-noise ratio is improved.
  • However, since the pixel count is reduced, the resolution of the merged image is also reduced.
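The 4-into-1 merging just described can be sketched numerically. The following minimal pure-Python model (with illustrative values, not the sensor's actual readout code) sums each 2x2 photosensitive pixel unit into one merged pixel, as the shared source followers, ADCs, and adder effectively do:

```python
def bin_2x2(raw):
    """Sum each 2x2 photosensitive pixel unit into one merged pixel,
    mimicking the adder 2122 combining the unit's four outputs."""
    return [[raw[r][c] + raw[r][c + 1] + raw[r + 1][c] + raw[r + 1][c + 1]
             for c in range(0, len(raw[0]), 2)]
            for r in range(0, len(raw), 2)]

raw = [[1, 2, 3, 4],         # a 4x4 = 16-pixel readout...
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
print(bin_2x2(raw))          # ...becomes a 2x2 = 4-pixel merged image
```

Averaging four independent noisy samples halves the noise standard deviation (a factor of sqrt(4)), which is the signal-to-noise gain the passage describes; the cost is a 4x drop in pixel count, hence the lower resolution of the merged image.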
  • the patch image can be output through image processing.
  • the photosensitive device 21211 is used to convert illumination into electric charge, and the generated electric charge is proportional to the intensity of the illumination.
  • the transmission tube 21212 is configured to control the conduction or disconnection of the circuit according to the control signal.
  • the source follower 21213 is configured to convert the charge signal generated by the light-sensing device 21211 into a voltage signal.
  • Analog to digital converter 21214 is used to convert the voltage signal to a digital signal for processing by an image processing module coupled to image sensor 21.
  • The image sensor 21 of the embodiment of the present invention can also keep the 16M photosensitive pixel output, that is, output a patch image. The patch image includes image pixel units, each of whose original pixels are arranged in a 2*2 array; the size of an original pixel is the same as that of a photosensitive pixel. However, since the filter unit 211a covering four adjacent photosensitive devices 21211 is of a single color, even though the four photosensitive devices 21211 are exposed separately, the four adjacent original pixels output by each image pixel unit have the same color, so the resolution of the image cannot be improved directly.
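The "atypical Bayer" layout of the patch image can be made concrete with a small sketch. Here each 2x2 image pixel unit shares one filter colour and the units themselves tile in a Bayer pattern; the GRBG unit-level order is an assumption chosen for illustration:

```python
def patch_color(row, col):
    """Colour of the original pixel at (row, col) in the colour-block
    image: 2x2 units of a single colour, with the units themselves
    tiled in a Bayer pattern (GRBG at unit level, an assumption)."""
    unit_row, unit_col = row // 2, col // 2   # index of the 2x2 unit
    return [['G', 'R'], ['B', 'G']][unit_row % 2][unit_col % 2]

for r in range(4):
    print(''.join(patch_color(r, c) for c in range(8)))
# GGRRGGRR
# GGRRGGRR
# BBGGBBGG
# BBGGBBGG
```

The printed grid shows why the arrangement is "atypical": a standard Bayer mosaic alternates colours every pixel, whereas here every colour repeats over a 2x2 block, which is what the first interpolation must undo.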
  • the control method of the embodiment of the present invention is configured to process the output patch image to obtain a pseudo original image.
  • The pseudo original image can then be received by an image processing module and processed into a true color image for output.
  • In the patch image, each photosensitive pixel is output separately. Since the four adjacent photosensitive pixels have the same color, the four adjacent original pixels of one image pixel unit have the same color, forming an atypical Bayer array.
  • The image processing module cannot directly process an atypical Bayer array. That is, for the image sensor 21 to use a unified image processing mode, compatible with true color image output in both the merge mode and the color-block mode, the patch image needs to be converted into a pseudo original image; in other words, the image pixel units of the atypical Bayer array are converted into a pixel arrangement of a typical Bayer array.
  • the original image includes imitation original pixels arranged in a Bayer array.
  • The pseudo original pixels include a current pixel, and the patch image includes an associated pixel corresponding to the current pixel.
  • the control method of the embodiment of the present invention outputs a patch image and a merged image, respectively.
  • a predetermined area of the patch image is first converted into a Bayer image array, and image processing is performed using a first interpolation algorithm.
  • the current pixels are R3'3' and R5'5', and the corresponding associated pixels are R33 and R55, respectively.
  • The pixel values mentioned above and below should be broadly understood as color attribute values of a pixel, such as color values.
  • the associated pixel unit includes a plurality of, for example, four, original pixels in the image pixel unit that are the same color as the current pixel and are adjacent to the current pixel.
  • The associated pixel corresponding to R5'5' is B55; the associated pixel unit is located in image pixel units adjacent to the one containing B55 and has the same color as R5'5'.
  • the image pixel units in which the associated pixel unit is located are image pixel units in which R44, R74, R47, and R77 are located, and are not other red image pixel units that are spatially farther from the image pixel unit in which B55 is located.
  • The red original pixels closest to B55 are R44, R74, R47, and R77; that is, the associated pixel unit of R5'5' is composed of R44, R74, R47, and R77. R5'5' has the same color as R44, R74, R47, and R77 and is adjacent to them.
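The lookup of the associated pixel unit can be sketched as follows. The 1-indexed coordinates and the unit-level Bayer layout (placed so that a blue unit covers position (5, 5), matching the B55 example) are assumptions for illustration:

```python
def unit_color(r, c):
    """Colour of the original pixel at 1-indexed (r, c): 2x2 units in a
    Bayer tiling, chosen here so the unit holding (5, 5) is blue,
    matching the B55 example (the exact layout is an assumption)."""
    ur, uc = (r - 1) // 2, (c - 1) // 2
    return [['B', 'G'], ['G', 'R']][ur % 2][uc % 2]

def associated_pixel_unit(r, c, colour):
    """From each same-colour 2x2 unit neighbouring the unit that holds
    (r, c), take the original pixel nearest to (r, c)."""
    ur, uc = (r - 1) // 2, (c - 1) // 2
    chosen = []
    for dur in (-1, 0, 1):
        for duc in (-1, 0, 1):
            if (dur, duc) == (0, 0) or ur + dur < 0 or uc + duc < 0:
                continue
            # The four pixel coordinates of the neighbouring unit.
            pix = [(2 * (ur + dur) + i, 2 * (uc + duc) + j)
                   for i in (1, 2) for j in (1, 2)]
            if unit_color(*pix[0]) == colour:
                chosen.append(min(pix,
                                  key=lambda p: abs(p[0] - r) + abs(p[1] - c)))
    return sorted(chosen)

print(associated_pixel_unit(5, 5, 'R'))  # [(4, 4), (4, 7), (7, 4), (7, 7)]
```

With this layout the red units sit diagonally around the blue unit containing (5, 5), and the closest red pixel in each of them reproduces the set R44, R47, R74, R77 named in the text.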
  • Depending on the situation, the original pixels are converted into pseudo original pixels in different ways, thereby converting the patch image into the pseudo original image. A filter with this special Bayer-array structure is adopted when the image is captured, which improves the image signal-to-noise ratio; and during image processing, the patch image is interpolated using the first interpolation algorithm, which improves the resolution and sharpness of the image.
  • In some embodiments, step S137 includes the following steps:
  • S1371: calculate the amount of gradient in each direction of the associated pixel;
  • S1372: calculate the weight in each direction of the associated pixel; and
  • S1373: calculate the pixel value of the current pixel according to the amount of gradient and the weight.
  • the second calculation module 137 includes a first calculation unit 1371, a second calculation unit 1372, and a third calculation unit 1373.
  • The first calculating unit 1371 is configured to calculate the amount of gradient in each direction of the associated pixel; the second calculating unit 1372 is configured to calculate the weight in each direction of the associated pixel; and the third calculating unit 1373 is configured to calculate the pixel value of the current pixel according to the amount of gradient and the weight.
  • step S1371 can be implemented by the first calculating unit 1371
  • step S1372 can be implemented by the second calculating unit 1372
  • step S1373 can be implemented by the third calculating unit 1373.
  • The first interpolation algorithm references the energy gradient of the image in different directions, and calculates the current pixel by linear interpolation from the associated pixel unit, which has the same color as the current pixel and is adjacent to it, according to gradient-derived weights in the different directions: in a direction where the energy changes little, the reference proportion is large, so the weight in the interpolation calculation is large; in a direction where the energy changes greatly, the reference proportion is small, so the weight is small.
  • R5'5' is interpolated from R44, R74, R47 and R77, and there are no original pixels of the same color in the horizontal and vertical directions, so the components of the color in the horizontal and vertical directions are first calculated from the associated pixel unit.
  • The components in the horizontal direction are R45 and R75, and the components in the vertical direction are R54 and R57; they can be calculated from R44, R74, R47, and R77 as follows:
  • Specifically, R45 = 2/3*R44 + 1/3*R47, R75 = 2/3*R74 + 1/3*R77, R54 = 2/3*R44 + 1/3*R74, and R57 = 2/3*R47 + 1/3*R77.
  • Then the gradients and weights in the horizontal and vertical directions are calculated respectively; that is, the gradient of the color in each direction determines the reference weight of that direction in the interpolation: in the direction with the smaller gradient the weight is larger, and in the direction with the larger gradient the weight is smaller.
  • Specifically, the horizontal gradient is X1 = |R45 - R75|, the vertical gradient is X2 = |R54 - R57|, W1 = X1/(X1+X2), and W2 = X2/(X1+X2).
  • Thus, R5'5' = (2/3*R45 + 1/3*R75)*W2 + (2/3*R54 + 1/3*R57)*W1. It can be understood that if X1 is greater than X2 then W1 is greater than W2, so the weight applied to the horizontal direction in the calculation is W2 and the weight applied to the vertical direction is W1, and vice versa.
  • the pixel value of the current pixel can be calculated according to the first interpolation algorithm.
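As a concrete sketch of the calculation above, the following Python function computes R5'5' from its four associated red original pixels. The function name, scalar inputs, and the equal-weight fallback for a flat region are illustrative assumptions, not taken from the patent:

```python
def interpolate_r55(r44, r47, r74, r77):
    """Gradient-weighted linear interpolation of the current pixel R5'5'
    from its associated pixel unit R44, R47, R74, R77."""
    # Components of the color in the horizontal and vertical directions
    r45 = 2/3 * r44 + 1/3 * r47
    r75 = 2/3 * r74 + 1/3 * r77
    r54 = 2/3 * r44 + 1/3 * r74
    r57 = 2/3 * r47 + 1/3 * r77
    # Gradients: X1 in the horizontal direction, X2 in the vertical direction
    x1 = abs(r45 - r75)
    x2 = abs(r54 - r57)
    if x1 + x2 == 0:
        w1 = w2 = 0.5  # flat region: weigh both directions equally (assumption)
    else:
        w1 = x1 / (x1 + x2)
        w2 = x2 / (x1 + x2)
    # The direction with the smaller gradient receives the larger weight
    return (2/3 * r45 + 1/3 * r75) * w2 + (2/3 * r54 + 1/3 * r57) * w1
```

Note how the horizontal average is multiplied by W2 and the vertical average by W1, exactly as in the formula above: a large horizontal gradient X1 makes W1 large and so suppresses the horizontal contribution.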
  • Following the above processing of the associated pixels, the original pixels can be converted into pseudo-original pixels arranged in a typical Bayer array; that is, every four adjacent pseudo-original pixels in a 2*2 array include one red pseudo-original pixel, two green pseudo-original pixels, and one blue pseudo-original pixel.
  • It should be noted that the first interpolation algorithm includes, but is not limited to, the manner disclosed in this embodiment in which only pixel values of the same color in the vertical and horizontal directions are considered in the calculation; for example, pixel values of other colors may also be referenced.
  • In some embodiments, the following step is included before step S137:
  • S136a: Perform white balance compensation on the color-block image;
  • and the following step is included after step S137:
  • S138a: Perform white balance compensation restoration on the pseudo-original image.
  • In some embodiments, the first conversion module 13 includes a white balance compensation module 136a and a white balance compensation restoration module 138a.
  • The white balance compensation module 136a is configured to perform white balance compensation on the color-block image, and the white balance compensation restoration module 138a is configured to perform white balance compensation restoration on the pseudo-original image.
  • step S136a can be implemented by the white balance compensation module 136a
  • step S138a can be implemented by the white balance compensation restoration module 138a.
  • Specifically, in the process of converting the color-block image into the pseudo-original image, the interpolation of red and blue pseudo-original pixels often references not only the color of original pixels of the same-color channel but also the color weight of original pixels of the green channel; therefore, white balance compensation is required before interpolation, to exclude the influence of white balance from the interpolation calculation.
  • In order not to destroy the white balance of the color-block image, white balance compensation restoration needs to be performed on the pseudo-original image after interpolation, restoring according to the red, green, and blue gain values used in the compensation. In this way, the influence of white balance is excluded during interpolation, and the pseudo-original image obtained after interpolation keeps the white balance of the color-block image.
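A minimal sketch of this compensate-then-restore pattern. The per-channel gain values below are hypothetical placeholders, not taken from the patent:

```python
# Hypothetical white-balance gains for the red, green, and blue channels
GAINS = {"R": 1.8, "G": 1.0, "B": 1.5}

def wb_compensate(value, channel, gains=GAINS):
    """Divide out the channel gain before interpolation, so that mixing
    red/blue values with green references is not skewed by white balance."""
    return value / gains[channel]

def wb_restore(value, channel, gains=GAINS):
    """Re-apply the same gain after interpolation, restoring the
    original white balance of the color-block image."""
    return value * gains[channel]
```

Restoring with the same gains that were divided out guarantees the round trip is lossless up to floating-point error.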
  • In some embodiments, the following step is included before step S137:
  • S136b: Perform dead-pixel compensation on the color-block image.
  • In some embodiments, the first conversion module 13 includes a dead-pixel compensation module 136b.
  • That is, step S136b can be implemented by the dead-pixel compensation module 136b.
  • It can be understood that, limited by the manufacturing process, the image sensor 21 may have dead pixels. A dead pixel usually shows the same color regardless of changes in sensitivity, and its presence degrades image quality; therefore, to ensure that the interpolation is accurate and unaffected by dead pixels, dead-pixel compensation is required before interpolation.
  • Specifically, during dead-pixel compensation the original pixels may be detected, and when an original pixel is detected to be a dead pixel, compensation may be performed according to the pixel values of the other original pixels in the image pixel unit in which it is located. In this way, the influence of dead pixels on the interpolation is eliminated, improving image quality.
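A sketch of the compensation rule described above, assuming the replacement value is simply the mean of the remaining pixels of the same 2*2 image pixel unit (the patent does not fix the exact formula):

```python
def compensate_dead_pixel(unit, dead_index):
    """Replace the dead original pixel at `dead_index` with the mean of the
    other original pixels in the same (same-color) image pixel unit."""
    others = [v for i, v in enumerate(unit) if i != dead_index]
    return sum(others) / len(others)
```

Because the four pixels of a unit share one filter color, averaging the surviving same-color neighbors is a natural estimate of the missing value.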
  • In some embodiments, the following step is included before step S137:
  • S136c: Perform crosstalk compensation on the color-block image.
  • In some embodiments, the first conversion module 13 includes a crosstalk compensation module 136c.
  • That is, step S136c can be implemented by the crosstalk compensation module 136c.
  • Specifically, the four photosensitive pixels in one photosensitive pixel unit are covered by a filter of the same color, and there may be differences in sensitivity among the photosensitive pixels, so that fixed-pattern noise appears in the solid-color regions of the true-color image converted from the pseudo-original image, degrading image quality; therefore, crosstalk compensation needs to be performed on the color-block image.
  • In some embodiments, the following step is included after step S137:
  • S138b: Perform lens shading correction, demosaicing, noise reduction, and edge sharpening on the pseudo-original image.
  • the first conversion module 13 further includes a first processing module 138b.
  • step S138b can be implemented by the first processing module 138b.
  • It can be understood that after the color-block image is converted into the pseudo-original image, the pseudo-original pixels are arranged in a typical Bayer array and can be processed by the first processing module 138b, the processing including lens shading correction, demosaicing, noise reduction, and edge sharpening; after this processing, the true-color image can be obtained and output to the user.
  • For the image outside the predetermined area of a frame of the merged image, since the image sensor 21 combines the 16M photosensitive pixels into 4M for direct output, the merged image is first stretched and enlarged by the second interpolation algorithm to be converted into a restored image of the same size as the color-block image, to facilitate the subsequent composition of the color-block image and the merged image.
  • After the image in the predetermined area of the color-block image is processed into the pseudo-original image using the first interpolation algorithm and the merged image is processed into the restored image using the second interpolation algorithm, the two images are combined to obtain the merged pseudo-original image. The image within the predetermined area of the merged pseudo-original image has a higher resolution.
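Since the second interpolation algorithm only needs to stretch the 4M merged image to the 16M size of the color-block image, a low-complexity placeholder such as 2x pixel replication (nearest-neighbor) illustrates the idea; this particular choice is an assumption for the sketch, not mandated by the patent:

```python
def upscale_2x(img):
    """Stretch the merged image to the color-block image's size by
    duplicating every merged pixel into a 2*2 block."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                   # duplicate the row
    return out
```

This runs in a single pass with no arithmetic per pixel, which is why its complexity is far below the gradient-weighted first interpolation algorithm.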
  • step S12 includes the following steps:
  • S121: Convert the merged image into a preview image by using a third interpolation algorithm, where the third interpolation algorithm includes the second interpolation algorithm;
  • S122: Control the touch screen to display the preview image; and
  • S123: Process the user input on the touch screen to determine the predetermined area.
  • the selected module 12 includes a second conversion module 121 , a display module 122 , and a second processing module 123 .
  • the second conversion module 121 is configured to convert the merged image into a preview image by using a third interpolation algorithm
  • the third interpolation algorithm includes a second interpolation algorithm
  • the display module 122 is configured to control the touch screen to display the preview image, and the second processing module 123 is configured to process the user input on the touch screen to determine the predetermined area.
  • step S121 can be implemented by the second conversion module 121
  • step S122 can be implemented by the display module 122
  • step S123 can be implemented by the second processing module 123.
  • It can be understood that when the user selects the predetermined area, the selection needs to be made in a preview image. The second conversion module 121 converts the merged image into a preview image, which is displayed by the display module 122.
  • the process of converting the merged image into a preview image is performed by using a third interpolation algorithm.
  • the third interpolation algorithm includes a second interpolation algorithm and a bilinear interpolation method.
  • In the conversion process, the second interpolation algorithm is first used to convert the merged image into a restored image in a Bayer array, and then the bilinear interpolation method is used to convert the restored image into a true-color image. In this way, the user can preview the image, which facilitates selecting the predetermined area.
  • the algorithm for converting the restored image into a true color image is not limited to the bilinear interpolation method, and other interpolation algorithms may be used for calculation.
  • In some embodiments, step S123 includes the following steps:
  • S1231: Divide the preview image into extension units arranged in an array;
  • S1232: Process the user input to identify the touch position;
  • S1233: Determine the origin extension unit where the touch position is located, the extension units including the origin extension unit;
  • S1234: Calculate the contrast value of each extension unit expanding outward in sequence from the origin extension unit;
  • S1235: When the contrast value exceeds a predetermined threshold, determine the corresponding extension unit to be an edge extension unit, the extension units including edge extension units; and
  • S1236: Determine the area enclosed by the edge extension units to be the predetermined area.
  • the second processing module 123 includes a dividing unit 1231, a first identifying unit 1232, a pointing unit 1233, a fourth calculating unit 1234, a first processing unit 1235, and a second processing unit 1236.
  • the dividing unit 1231 is configured to divide the preview image into an extended unit of the array arrangement
  • the first identifying unit 1232 is used to process the user input to identify the touch position
  • a fixed point unit 1233 is used to determine an origin extension unit where the touch location is located
  • the extension unit includes an origin extension unit
  • the fourth calculation unit 1234 is configured to calculate the contrast value of each extension unit expanding outward in sequence from the origin extension unit.
  • the first processing unit 1235 is configured to determine, when the contrast value exceeds a predetermined threshold, that the corresponding extension unit is an edge extension unit, the extension units including edge extension units; and the second processing unit 1236 is configured to determine that the area enclosed by the edge extension units is the predetermined area.
  • step S1231 can be implemented by the dividing unit 1231
  • step S1232 can be implemented by the first identifying unit 1232
  • step S1233 can be implemented by the fixed point unit 1233
  • step S1234 can be implemented by the fourth calculating unit 1234
  • step S1235 can be implemented by the first processing unit 1235, and step S1236 can be implemented by the second processing unit 1236.
  • Specifically, the black dot is the user's touch position; expansion proceeds outward from the origin extension unit at that touch position, and each box in the figure is an extension unit.
  • The contrast value of each extension unit is then compared with the preset threshold, and an extension unit whose contrast value is greater than the preset threshold is an edge extension unit.
  • The contrast values of the extension units at the edge of the face shown in FIG. 19 are all greater than the preset threshold; that is, the extension units at the edge of the face are edge extension units, shown as gray boxes in the figure.
  • In this way, the area enclosed by the plurality of edge extension units is taken as the predetermined area. The predetermined area is the processing area designated by the user, that is, the main subject the user cares about. The image in this area is processed with the first interpolation algorithm, improving the resolution of the image of that subject and enhancing the user experience.
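The ring-by-ring expansion can be sketched as follows. How the per-unit contrast value is computed is left abstract here (the patent does not specify it), so the function takes a precomputed 2-D grid of contrast values; the function and parameter names are illustrative:

```python
def edge_extension_units(contrast, origin, threshold):
    """Expand outward ring by ring from the origin extension unit and
    collect every extension unit whose contrast value exceeds the threshold."""
    rows, cols = len(contrast), len(contrast[0])
    r0, c0 = origin
    edges = []
    for radius in range(1, max(rows, cols)):
        # All units at Chebyshev distance `radius` from the origin unit
        ring = [(r, c) for r in range(rows) for c in range(cols)
                if max(abs(r - r0), abs(c - c0)) == radius]
        edges += [(r, c) for (r, c) in ring if contrast[r][c] > threshold]
    return edges
```

The units returned trace the high-contrast boundary (e.g. the edge of a face); the region they enclose becomes the predetermined area.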
  • In some embodiments, step S123 includes the following steps:
  • S127: Process the user input to identify the touch position; and
  • S128: Expand an area of a predetermined shape outward, centered on the touch position, as the predetermined area.
  • the second processing module 123 includes a second identification unit 1237 and an expansion unit 1238.
  • the second identification unit 1237 is for processing a user input to identify a touched position
  • the expansion unit 1238 is for expanding an area of a predetermined shape outwardly from the touch position as a predetermined area.
  • step S1237 can be implemented by the second identifying unit 1237
  • step S1238 can be implemented by the expanding unit 1238.
  • step S1238 includes the following steps:
  • S12381 Process user input to determine a predetermined shape.
  • the expansion unit 1238 includes a third processing unit 12381 for processing user input to determine a predetermined shape.
  • step S12381 can be implemented by the third processing unit 12381.
  • Specifically, the black dot in the figure is the user's touch position. Centered on that touch position, a circular area is expanded outward to generate the predetermined area.
  • the predetermined area expanded in a circle as shown in the figure includes the entire face portion, and the first interpolation algorithm can be applied to the face portion to improve the resolution of the face portion.
  • It should be noted that in other embodiments the expanded predetermined shape may also be a rectangle, a square, or another shape, and the user may resize and drag the expanded predetermined shape according to actual needs.
  • In addition, the ways in which the user designates the predetermined area further include the user directly drawing an arbitrary shape on the touch screen as the predetermined area, or the user selecting several points on the touch screen, the area enclosed by connecting those points serving as the predetermined area for image processing with the first interpolation algorithm. In this way, the user can independently select the subject of interest to improve its clarity, further improving the user experience.
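For the circular variant, the predetermined area is simply the set of positions within a radius of the touch point; a minimal sketch (names and the clipping behavior are illustrative assumptions):

```python
def circular_region(center, radius, width, height):
    """Return the set of pixel positions inside a circle of `radius`
    centered at the touch position, clipped to the image bounds."""
    cx, cy = center
    return {(x, y) for x in range(width) for y in range(height)
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2}
```

Resizing or dragging the shape only changes `radius` and `center`; rectangular shapes would replace the distance test with coordinate bounds.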
  • an electronic device 100 includes a control device 10, a touch screen 30, and an imaging device 20.
  • electronic device 100 includes a cell phone and a tablet.
  • Both the mobile phone and the tablet computer have a camera, that is, an imaging device 20.
  • the control method of the embodiment of the present invention can be used to obtain a high-resolution picture.
  • the electronic device 100 also includes other electronic devices having a photographing function.
  • The control method of the embodiment of the present invention is one of the designated processing modes in which the electronic device 100 performs image processing. That is to say, when the user takes a photograph with the electronic device 100, the user needs to choose among the various designated processing modes included in the electronic device 100. When the user selects the designated processing mode of the embodiment of the present invention, the user can select the predetermined area autonomously, and the electronic device 100 performs image processing using the control method of the embodiment of the present invention.
  • imaging device 20 includes a front camera and a rear camera.
  • many electronic devices 100 include a front camera and a rear camera.
  • the front camera and the rear camera can implement image processing by using the control method of the embodiment of the present invention to improve the user experience.
  • an electronic device 100 includes a processor 40, a memory 50, a circuit board 60, a power supply circuit 70, and a housing 80.
  • the circuit board 60 is disposed inside the space enclosed by the housing 80; the processor 40 and the memory 50 are disposed on the circuit board; the power supply circuit 70 is used to supply power to the circuits and devices of the electronic device 100; the memory 50 is used to store executable program code; and the processor 40 runs the program corresponding to the executable program code by reading the executable program code stored in the memory 50, to implement the control method of any of the above embodiments of the present invention.
  • processor 40 can be used to perform the following steps:
  • control the image sensor 21 to output a merged image and a color-block image of the same scene, the merged image including a merged pixel array in which the plurality of photosensitive pixels of the same photosensitive pixel unit are merged and output as one merged pixel, and the color-block image including image pixel units arranged in a predetermined array, each image pixel unit including a plurality of original pixels, and each photosensitive pixel corresponding to one original pixel;
  • the pixel value of the current pixel is calculated according to the pixel value of the associated pixel unit by using a first interpolation algorithm, where the image pixel unit includes an associated pixel unit, and the color of the associated pixel unit is related to the current pixel. Same and adjacent to the current pixel; and
  • S139 Convert the merged image into a restored image corresponding to the original image by using a second interpolation algorithm, where the complexity of the second interpolation algorithm is smaller than the first interpolation algorithm;
  • A computer-readable storage medium in accordance with an embodiment of the present invention has instructions stored therein. When the processor 40 of the electronic device 100 executes the instructions, the electronic device 100 performs the control method of any of the embodiments of the present invention described above.
  • the electronic device 100 can perform the following steps:
  • control the image sensor 21 to output a merged image and a color-block image of the same scene, the merged image including a merged pixel array in which the plurality of photosensitive pixels of the same photosensitive pixel unit are merged and output as one merged pixel, and the color-block image including image pixel units arranged in a predetermined array, each image pixel unit including a plurality of original pixels, and each photosensitive pixel corresponding to one original pixel;
  • the pixel value of the current pixel is calculated according to the pixel value of the associated pixel unit by using a first interpolation algorithm, where the image pixel unit includes an associated pixel unit, and the color of the associated pixel unit is related to the current pixel. Same and adjacent to the current pixel; and
  • S139 Convert the merged image into a restored image corresponding to the original image by using a second interpolation algorithm, where the complexity of the second interpolation algorithm is smaller than the first interpolation algorithm;
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CDROM).
  • The computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
  • portions of the invention may be implemented in hardware, software, firmware or a combination thereof.
  • It should be understood that multiple steps or methods may be implemented by software or firmware that is stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, the implementation may use any suitable technique known in the art.
  • each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • The above integrated modules may be implemented in the form of hardware or in the form of software functional modules.
  • If an integrated module is implemented in the form of a software functional module and sold or used as a standalone product, it may also be stored in a computer-readable storage medium.
  • the above mentioned storage medium may be a read only memory, a magnetic disk or an optical disk or the like.


Abstract

The present invention discloses a control method, a control device, an electronic device, and a computer-readable storage medium. First, an image sensor is controlled to output a merged image and a color-block image of the same scene. Next, a predetermined area is determined according to user input; then, the image within the predetermined area of the color-block image is processed with a first interpolation algorithm, and the merged image is processed with a second interpolation algorithm; finally, the two interpolated image frames are combined. According to the predetermined area selected by the user, the control method, control device, and electronic device of embodiments of the present invention apply the first interpolation algorithm to the image within the predetermined area of the color-block image, thereby obtaining a high-quality image while avoiding the large amount of work involved in processing the whole image frame, improving efficiency.

Description

Control method, control device, electronic device, and computer-readable storage medium
Priority Information
This application claims priority to and the benefit of Chinese Patent Application No. 201611078974.7, filed with the State Intellectual Property Office of China on November 29, 2016, the entire content of which is incorporated herein by reference.
Technical Field
The present invention relates to image processing technology, and in particular to a control method, a control device, an electronic device, and a computer-readable storage medium.
Background
An existing image sensor includes a pixel unit array and a filter unit array disposed on the pixel unit array; each filter unit covers one corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels. In operation, the image sensor can be controlled to expose and output a merged image, the merged image including a merged pixel array in which the plurality of photosensitive pixels of the same pixel unit are merged and output as one merged pixel. This improves the signal-to-noise ratio of the merged image, but its resolution is reduced. The image sensor can also be controlled to expose and output a high-pixel color-block image, the color-block image including an original pixel array in which each photosensitive pixel corresponds to one original pixel. However, since the plurality of original pixels corresponding to the same filter unit have the same color, the resolution of the color-block image likewise cannot be improved. Therefore, the high-pixel color-block image needs to be converted by interpolation into a high-pixel pseudo-original image, and the pseudo-original image may include pseudo-original pixels arranged in a Bayer array. The pseudo-original image can be converted into a true-color image by the control method and saved. Interpolation improves the clarity of the true-color image, but it is resource-intensive and time-consuming, lengthening the shooting time and degrading the user experience. Moreover, in practical applications, the user often only cares about the clarity of the main subject of the true-color image.
Summary of the Invention
Embodiments of the present invention provide a control method, a control device, an electronic device, and a computer-readable storage medium.
The control method of embodiments of the present invention is used to control an electronic device. The electronic device includes an imaging device and a touch screen; the imaging device includes an image sensor; the image sensor includes a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array; each filter unit covers one corresponding photosensitive pixel unit; and each photosensitive pixel unit includes a plurality of photosensitive pixels. The control method includes the following steps:
controlling the image sensor to output a merged image and a color-block image of the same scene, the merged image including a merged pixel array, the plurality of photosensitive pixels of the same photosensitive pixel unit being merged and output as one merged pixel, the color-block image including image pixel units arranged in a predetermined array, each image pixel unit including a plurality of original pixels, and each photosensitive pixel corresponding to one original pixel;
determining a predetermined area on the merged image according to user input;
converting the color-block image into a pseudo-original image by using a first interpolation algorithm, the pseudo-original image including pseudo-original pixels arranged in an array, the pseudo-original pixels including a current pixel, the original pixels including an associated pixel corresponding to the current pixel, and the color-block image including the predetermined area, wherein the step of converting the color-block image into the pseudo-original image includes the following steps:
determining whether the associated pixel is located within the predetermined area;
when the associated pixel is located within the predetermined area, determining whether the color of the current pixel is the same as that of the associated pixel;
when the color of the current pixel is the same as that of the associated pixel, taking the pixel value of the associated pixel as the pixel value of the current pixel; and
when the color of the current pixel is different from that of the associated pixel, calculating the pixel value of the current pixel from the pixel values of an associated pixel unit by using the first interpolation algorithm, the image pixel units including the associated pixel unit, and the color of the associated pixel unit being the same as that of the current pixel and adjacent to the current pixel; and
converting the merged image into a restored image corresponding to the pseudo-original image by using a second interpolation algorithm, the complexity of the second interpolation algorithm being lower than that of the first interpolation algorithm; and
combining the pseudo-original image and the restored image to obtain a merged pseudo-original image.
The control device of embodiments of the present invention is used to control an electronic device. The electronic device includes an imaging device and a touch screen; the imaging device includes an image sensor; the image sensor includes a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array; each filter unit covers one corresponding photosensitive pixel unit; and each photosensitive pixel unit includes a plurality of photosensitive pixels. The control device includes an output module, a selection module, a first conversion module, and a merging module. The selection module is configured to determine a predetermined area on the merged image according to user input. The first conversion module is configured to convert the color-block image into a pseudo-original image by using a first interpolation algorithm, the pseudo-original image including pseudo-original pixels arranged in an array, the pseudo-original pixels including a current pixel, the original pixels including an associated pixel corresponding to the current pixel, and the color-block image including the predetermined area. The first conversion module includes a first determining module, a second determining module, a first calculation module, a second calculation module, and a third calculation module. The first determining module is configured to determine whether the associated pixel is located within the predetermined area; the second determining module is configured to determine, when the associated pixel is located within the predetermined area, whether the color of the current pixel is the same as that of the associated pixel; the first calculation module is configured to take the pixel value of the associated pixel as the pixel value of the current pixel when the color of the current pixel is the same as that of the associated pixel; the second calculation module is configured to calculate, when the color of the current pixel is different from that of the associated pixel, the pixel value of the current pixel from the pixel values of an associated pixel unit by using the first interpolation algorithm, the image pixel units including the associated pixel unit, and the color of the associated pixel unit being the same as that of the current pixel and adjacent to the current pixel; the third calculation module is configured to convert the merged image into a restored image corresponding to the pseudo-original image by using a second interpolation algorithm, the complexity of the second interpolation algorithm being lower than that of the first interpolation algorithm; and the merging module is configured to combine the pseudo-original image and the restored image to obtain the merged pseudo-original image.
The electronic device of embodiments of the present invention includes an imaging device, a touch screen, and the above control device.
The electronic device of embodiments of the present invention includes a housing, a processor, a memory, a circuit board, and a power supply circuit. The circuit board is disposed inside the space enclosed by the housing; the processor and the memory are disposed on the circuit board; the power supply circuit is used to supply power to the circuits and devices of the electronic device; the memory is used to store executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, to perform the above control method.
The computer-readable storage medium of embodiments of the present invention has instructions stored therein. When a processor of an electronic device executes the instructions, the electronic device performs the above control method.
With the control method, control device, and electronic device of embodiments of the present invention, the image within the predetermined area of the color-block image is processed with the first interpolation algorithm to improve the resolution and clarity of the image in that area, and the merged image is processed with a second interpolation algorithm whose complexity is lower than that of the first interpolation algorithm; while improving the signal-to-noise ratio, resolution, and clarity of the main part of the image, this reduces the data to be processed and the processing time, improving the user experience.
Additional aspects and advantages of embodiments of the present invention will be given in part in the following description, will become apparent in part from the following description, or will be learned through practice of embodiments of the present invention.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of a control method according to an embodiment of the present invention;
FIG. 2 is another schematic flowchart of a control method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the functional modules of a control device according to an embodiment of the present invention;
FIG. 4 is a schematic block diagram of an image sensor according to an embodiment of the present invention;
FIG. 5 is a schematic circuit diagram of an image sensor according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a filter unit according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an image sensor according to an embodiment of the present invention;
FIG. 8 is a schematic state diagram of a merged image according to an embodiment of the present invention;
FIG. 9 is a schematic state diagram of a color-block image according to an embodiment of the present invention;
FIG. 10 is a schematic state diagram of a control method according to an embodiment of the present invention;
FIG. 11 is a schematic flowchart of a control method according to some embodiments of the present invention;
FIG. 12 is a schematic diagram of the functional modules of a second calculation module according to some embodiments of the present invention;
FIG. 13 is a schematic flowchart of a control method according to some embodiments of the present invention;
FIG. 14 is a schematic diagram of the functional modules of a control device according to some embodiments of the present invention;
FIG. 15 is a schematic state diagram of image merging in a control method according to some embodiments of the present invention;
FIG. 16 is a schematic flowchart of a control method according to some embodiments of the present invention;
FIG. 17 is a schematic diagram of the functional modules of a selection module according to some embodiments of the present invention;
FIG. 18 is a schematic flowchart of a control method according to some embodiments of the present invention;
FIG. 19 is a schematic diagram of the functional modules of a second processing module according to some embodiments of the present invention;
FIG. 20 is a schematic state diagram of a control method according to some embodiments of the present invention;
FIG. 21 is a schematic flowchart of a control method according to some embodiments of the present invention;
FIG. 22 is a schematic diagram of the functional modules of a second processing module according to some embodiments of the present invention;
FIG. 23 is a schematic flowchart of a control method according to some embodiments of the present invention;
FIG. 24 is a schematic diagram of the functional modules of an expansion unit according to some embodiments of the present invention;
FIG. 25 is a schematic state diagram of a control method according to some embodiments of the present invention;
FIG. 26 is a schematic diagram of the functional modules of an electronic device according to some embodiments of the present invention;
FIG. 27 is a schematic diagram of the functional modules of an electronic device according to some embodiments of the present invention.
具体实施方式
下面详细描述本发明的实施方式,实施方式的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施方式是示例性的,仅用于解释本发明,而不能理解为对本发明的限制。
请一并参阅图1、图2和图26,本发明实施方式的控制方法,用于控制电子装置100,电子装置100包括成像装置20和触摸屏30,成像装置20包括图像传感器21,图像传感器21包括感光像素单元阵列212和设置在感光像素单元阵列212上的滤光片单元阵列211,每个滤光片单元阵列211覆盖对应一个感光像素单元212a,每个感光像素单元212a包括多个感光像素,控制方法包括以下步骤:
S11:控制图像传感器21输出同一场景的合并图像和色块图像,合并图像包括合并像素阵列,同一感光像素单元的多个感光像素合并输出作为一个合并像素,色块图像包括预定阵列排布的图像像素单元,图像像素单元包括多个原始像素,每个感光像素对应一个原始像素;
S12:根据用户输入在合并图像上确定预定区域;
S13:利用第一插值算法将色块图像转换成仿原图像,仿原图像包括阵列排布的仿原像素,仿原像素包括当前像素,原始像素包括与当前像素对应的关联像素,色块图像包括预定区域,将色块图像转换成仿原图像的步骤包括以下步骤:
S131:判断关联像素是否位于预定区域内;
S133:在关联像素位于预定区域内时判断当前像素的颜色与关联像素的颜色是否相同;
S135:在当前像素的颜色与关联像素的颜色相同时,将关联像素的像素值作为当前像素的像素值;和
S137:在当前像素的颜色与关联像素的颜色不同时,根据关联像素单元的像素值通过第一插值算法计算当前像素的像素值,图像像素单元包括关联像素单元,关联像素单元的颜色与当前像素相同且与当前像素相邻;和
S139:将合并图像通过第二插值算法转换成与仿原图像对应的还原图像,第二插值算法的复杂度小于第一插值算法;和
S14:合并仿原图像及还原图像以得到合并仿原图像。
请参阅图3,本发明实施方式的控制方法可以由控制装置10实现。
控制装置10用于控制电子装置100,电子装置包括成像装置20和触摸屏30,成像装置20包括图像传感器21,图像传感器21包括感光像素单元阵列212和设置在感光像素单元阵列212上的滤光片单元阵列211,每个滤光片单元阵列212覆盖对应一个感光像素单元212a,每个感光像素单元212a包括多个感光像素2121,控制装置10包括输出模块11、选定模块12、第一转换模块13、合并模块14。选定模块11用于根据用户输入在合并图像上确定预定区域;选定模块12用于根据用户输入在合并图像上确定预定区域;第一转换模块13用于利用利用第一插值算法将色块图像转换成仿原图像,仿原图像包括阵列排布的仿原像素,仿原像素包括当前像素,原始像素包括与当前像素对应的关联像素,色块图像包括预定区域,第一转换模块13包括第一判断模块131、第二判断模块133、第一计算模块135、第二计算模块137、第三计算模块139,第一判断模块131用于判断关联像素是否位于预定区域内;第二判断模块133用于在关联像素位于预定区域内时判断当前像素的颜色与关联像素的颜色是否相同;第一计算模块135用于在当前像素的颜色与关联像素的颜色相同时,将关联像素的像素值作为当前像素的像素值;第二计算模块137用于在当前像素的颜色与关联像素的颜色不同时,根据关联像素单元的像素值通过第一插值算法计算当前像素的像素值,图像像素单元包括关联像素单元,关联像素单元的颜色与当前像素相同且与当前像素相邻;第三计 算模块139用于将合并图像通过第二插值算法转换成与仿原图像对应的还原图像,第二插值算法的复杂度小于第一插值算法;合并模块14用于合并仿原图像及还原图像以得到合并仿原图像。
也即是说,步骤S11可以由输出模块11实现;步骤S12可以由选定模块12实现;步骤S13可以由第一转换模块13实现;步骤S131可以由第一判断模块131实现;步骤S133可以由第二判断模块133实现;步骤S135可以由第一计算模块135实现;步骤S137可以由第二计算模块137实现;步骤S139可以由第三计算模块139实现;步骤S14可以由合并模块14实现。
可以理解,本发明实施方式的控制方法采用第一插值算法处理色块图像的预定区域内的图像,采用第二插值算法处理合并图像。其中,预定区域可由用户进行输入和选择。第二插值算法的复杂度包括时间复杂度和空间复杂度,相较于第一插值算法,第二插值算法的时间复杂度和空间复杂度均小于第一插值算法。如此,在实际拍摄中,图像传感器21分别输出色块图像和合并图像,并且仅对色块图像的用户指定区域内的图像采用复杂度较大的第一插值算法进行图像处理,可以有效减少图像处理的数据和所需时间,还能提高用户关注的主体部分即预定区域内的图像的解析度,提升了用户体验。
请一并参阅图4至7,本发明实施方式的图像传感器21包括感光像素单元阵列212和设置在感光像素单元阵列212上的滤光片单元阵列211。
进一步地,感光像素单元阵列212包括多个感光像素单元212a,每个感光像素单元212a包括多个相邻的感光像素2121。每个感光像素2121包括一个感光器件21211和一个传输管21212,其中,感光器件21211可以是光电二极管,传输管21212可以是MOS晶体管。
滤光片单元阵列211包括多个滤光片单元211a,每个滤光片单元211a覆盖对应一个感光像素单元212a。
具体地,在某些示例中,滤光片单元阵列211包括拜耳阵列,也即是说,相邻的四个滤光片单元211a分别为一个红色滤光片单元、一个蓝色滤光片单元和两个绿色滤光片单元。
每一个感光像素单元212a对应同一颜色的滤光片211a,若一个感光像素单元212a中一共包括n个相邻的感光器件21211,那么一个滤光片单元211a覆盖一个感光像素单元212a中的n个感光器件21211,该滤光片单元211a可以是一体构造,也可以由n个独立的子滤光片组装连接在一起。
在某些实施方式中,每个感光像素单元212a包括四个相邻的感光像素2121,相邻两个感光像素2121共同构成一个感光像素子单元2120,感光像素子单元2120还包括一个源极跟随器21213和一个模数转换器21214。感光像素单元212a还包括一个加法器2122。其中,一个感光像素子单元2120中的每个传输管21212的一端电极被连接到对应感光器件21211的阴极电极,每个传输管21212的另一端被共同连接至源极跟随器21213的闸极电极,并通过源极跟随器21213源极电极连接至一个模数转换器21214。其中,源极跟随器21213可以是MOS晶体管。两个感光像素子单元2120通过各自的源极跟随器21213及模数转换器21214连接至加法器2122。
也即是说,本发明实施方式的图像传感器21的一个感光像素单元212a中相邻的四个感光器件21211共用一个同颜色的滤光片单元211a,每个感光器件21211对应连接一个传输管21212,相邻两个感光器件21211共用一个源极跟随器21213和一个模数转换器21214,相邻四个感光器件21211共用一个加法器2122。
进一步地,相邻四个感光器件21211呈2*2阵列排布。其中,一个感光像素子单元2120中的两个感光器件21211可以处于同一列。
在成像时,当同一滤光片单元211a下覆盖的两个感光像素子单元2120或者说四个感光器件21211同时曝光时,可以对像素进行合并进而可输出合并图像。
具体地,感光器件21211用于将光照转换为电荷,且产生的电荷与光照强度成比例关系,传输管21212用于根据控制信号来控制电路的导通或断开。当电路导通时,源极跟随器21213用于将感光器件21211经光照产生的电荷信号转换为电压信号。模数转换器21214用于电压信号转换为数字信号。加法器2122用于将两路数字信号相加共同输出,以供与图像传感器21相连的图像处理模块处理。
请参阅图8,以16M的图像传感器21举例来说,本发明实施方式的图像传感器21可以将16M的感光像素合并成4M,或者说,输出合并图像,合并后,感光像素的大小相当于变成了原来大小的4倍,从而提升了感光像素的感光度。此外,由于图像传感器21中的噪声大部分都是随机噪声,对于合并之前的感光像素来说,有可能其中一个或两个像素中存在噪点,而在将四个感光像素合并成一个大的感光像素后,减小了噪点对该大像素的影响,也即是减弱了噪声,提高了信噪比。
但在感光像素大小变大的同时,由于像素值降低,合并图像的解析度也将降低。
在成像时,当同一滤光片单元211a覆盖的四个感光器件21211依次曝光时,经过图像处理可以输出色块图像。
具体地,感光器件21211用于将光照转换为电荷,且产生的电荷与光照的强度成比 例关系,传输管21212用于根据控制信号来控制电路的导通或断开。当电路导通时,源极跟随器21213用于将感光器件21211经光照产生的电荷信号转换为电压信号。模数转换器21214用于将电压信号转换为数字信号以供与图像传感器21相连的图像处理模块处理。
请参阅图9,以16M的图像传感器21举例来说,本发明实施方式的图像传感器21还可以保持16M的感光像素输出,或者说输出色块图像,色块图像包括图像像素单元,图像像素单元包括2*2阵列排布的原始像素,该原始像素的大小与感光像素大小相同,然而由于覆盖相邻四个感光器件21211的滤光片单元211a为同一颜色,也即是说,虽然四个感光器件21211分别曝光,但其覆盖的滤光片单元211a颜色相同,因此,输出的每个图像像素单元的相邻四个原始像素颜色相同,仍然无法提高图像的解析度。
本发明实施方式的控制方法,用于对输出的色块图像进行处理,以得到仿原图像。
可以理解,合并图像在输出时,四个相邻的同色的感光像素以合并像素输出,如此,合并图像中的四个相邻的合并像素仍可看作是典型的拜耳阵列,可以直接被图像处理模块接收进行处理以输出真彩图像。而色块图像在输出时每个感光像素分别输出,由于相邻四个感光像素颜色相同,因此,一个图像像素单元的四个相邻原始像素的颜色相同,是非典型的拜耳阵列。而图像处理模块无法对非典型拜耳阵列直接进行处理,也即是说,在图像传感器21采用统一图像处理模式时,为兼容两种模式的真彩图像输出即合并模式下的真彩图像输出及色块模式下的真彩图像输出,需将色块图像转换为仿原图像,或者说将非典型拜耳阵列的图像像素单元转换为典型的拜耳阵列的像素排布。
仿原图像包括呈拜耳阵列排布的仿原像素。仿原像素包括当前像素,原始像素包括与当前像素对应的关联像素。
本发明实施方式的控制方法分别输出色块图像和合并图像。
对于一帧色块图像的预定区域内的图像,先将该色块图像的预定区域转换成拜耳图像阵列,再利用第一插值算法进行图像处理。具体地,请参阅图10,以图10为例,当前像素为R3’3’和R5’5’,对应的关联像素分别为R33和R55。
在获取当前像素R3’3’时,由于R3’3’与对应的关联像素R33的颜色相同,因此在转换时直接将R33的像素值作为R3’3’的像素值。
在获取当前像素R5’5’时,由于R5’5’与对应的关联像素B55的颜色不相同,显然不能直接将B55的像素值作为R5’5’的像素值,需要根据R5’5’的关联像素单元通过插值的方式计算得到。
需要说明的是,以上及下文中的像素值应当广义理解为该像素的颜色属性数值,例如色彩值。
关联像素单元包括多个,例如4个,颜色与当前像素相同且与当前像素相邻的图像像素单元中的原始像素。
需要说明的是,此处相邻应做广义理解,以图10为例,R5’5’对应的关联像素为B55,与B55所在的图像像素单元相邻的且与R5’5’颜色相同的关联像素单元所在的图像像素单元分别为R44、R74、R47、R77所在的图像像素单元,而并非在空间上距离B55所在的图像像素单元更远的其他的红色图像像素单元。其中,与B55在空间上距离最近的红色原始像素分别为R44、R74、R47和R77,也即是说,R5’5’的关联像素单元由R44、R74、R47和R77组成,R5’5’与R44、R74、R47和R77的颜色相同且相邻。
如此,针对不同情况的当前像素,采用不同方式的将原始像素转换为仿原像素,从而将色块图像转换为仿原图像,由于拍摄图像时,采用了特殊的拜耳阵列结构的滤光片,提高了图像信噪比,并且在图像处理过程中,通过第一插值算法对色块图像进行插值处理,提高了图像的分辨率及解析度。
请参阅图11,在某些实施方式中,步骤S137包括以下步骤:
S1371:计算关联像素各个方向上的渐变量;
S1372:计算关联像素各个方向上的权重;和
S1373:根据渐变量及权重计算当前像素的像素值。
请参阅图12,在某些实施方式中,第二计算模块137包括第一计算单元1371、第二计算单元1372和第三计算单元1373。第一计算单元1371用于计算关联像素各个方向上的渐变量;第二计算单元1372用于计算关联像素各个方向上的权重;第三计算单元1373用于根据渐变量及权重计算当前像素的像素值。
也即是说,步骤S1371可以由第一计算单元1371实现,步骤S1372可以由第二计算单元1372实现,步骤S1373可以由第三计算单元1373实现。
具体地,第一插值算法是参考图像在不同方向上的能量渐变,将与当前像素对应的颜色相同且相邻的关联像素单元依据在不同方向上的渐变权重大小,通过线性插值的方式计算得到当前像素的像素值。其中,在能量变化量较小的方向上,参考比重较大,因此,在插值计算时的权重较大。
在某些示例中,为方便计算,仅考虑水平和垂直方向。
R5’5’由R44、R74、R47和R77插值得到,而在水平和垂直方向上并不存在颜色相同的原始像素,因此需首根据关联像素单元计算在水平和垂直方向上该颜色的分量。 其中,水平方向上的分量为R45和R75、垂直方向的分量为R54和R57可以分别通过R44、R74、R47和R77计算得到。
具体地,R45=R44*2/3+R47*1/3,R75=2/3*R74+1/3*R77,R54=2/3*R44+1/3*R74,R57=2/3*R47+1/3*R77。
然后,分别计算在水平和垂直方向的渐变量及权重,也即是说,根据该颜色在不同方向的渐变量,以确定在插值时不同方向的参考权重,在渐变量小的方向,权重较大,而在渐变量较大的方向,权重较小。其中,在水平方向的渐变量X1=|R45-R75|,在垂直方向上的渐变量X2=|R54-R57|,W1=X1/(X1+X2),W2=X2/(X1+X2)。
如此,根据上述可计算得到,R5’5’=(2/3*R45+1/3*R75)*W2+(2/3*R54+1/3*R57)*W1。可以理解,若X1大于X2,则W1大于W2,因此计算时水平方向的权重为W2,而垂直方向的权重为W1,反之亦反。
如此,可根据第一插值算法计算得到当前像素的像素值。依据上述对关联像素的处理方式,可将原始像素转换为呈典型拜耳阵列排布的仿原像素,也即是说,相邻的四个2*2阵列的仿原像素包括一个红色仿原像素,两个绿色仿原像素和一个蓝色仿原像素。
需要说明的是,第一插值算法包括但不限于本实施例中公开的在计算时仅考虑垂直和水平两个方向相同颜色的像素值的方式,例如还可以参考其他颜色的像素值。
请参阅图13,在某些实施方式中,在步骤S137前包括步骤:
S136a,对色块图像做白平衡补偿;
步骤S137后包括步骤:
S138a:对仿原图像做白平衡补偿还原。
请参阅图14,在某些实施方式中,第一转换模块13包括白平衡补偿模块136a和白平衡补偿还原模块138a。白平衡补偿模块136a用于对色块图像做白平衡补偿,白平衡补偿还原模块138a用于对仿原图像做白平衡补偿还原。
也即是说,步骤S136a可以由白平衡补偿模块136a实现,步骤S138a可以由白平衡补偿还原模块138a实现。
具体地,在一些示例中,在将色块图像转换为仿原图像的过程中,在插值时,红色和蓝色仿原像素往往不仅参考与其颜色相同的通道的原始像素的颜色,还会参考绿色通道的原始像素的颜色权重,因此,在插值前需要进行白平衡补偿,以在插值计算中排除白平衡的影响。为了不破坏色块图像的白平衡,因此,在插值之后需要将仿原图像进行白平衡补偿还原,还原时根据在补偿中红色、绿色及蓝色的增益值进行还原。
如此,可排除在插值过程中白平衡的影响,并且能够使得插值后得到的仿原图像保 持色块图像的白平衡。
请再参阅图13,在某些实施方式中,步骤S137前包括步骤:
S136b:对色块图像做坏点补偿。
请再参阅图14,在某些实施方式中,第一转换模块13包括坏点补偿模块136b。
也即是说,步骤S136b可以由坏点补偿模块136b实现。
可以理解,受限于制造工艺,图像传感器21可能会存在坏点,坏点通常不随感光度变化而始终呈现同一颜色,坏点的存在将影响图像质量,因此,为保证插值的准确,不受坏点的影响,需要在插值前进行坏点补偿。
具体地,坏点补偿过程中,可以对原始像素进行检测,当检测到某一原始像素为坏点时,可根据其所在的图像像素单元的其他原始像的像素值进行坏点补偿。
如此,可排除坏点对插值处理的影响,提高图像质量。
请再参阅图13,在某些实施方式中,步骤S137前包括步骤:
S136c:对色块图像做串扰补偿。
请再参阅图14,在某些实施方式中,第一转换模块13包括串扰补偿模块136c。
也即是说,步骤S136c可以由串扰模块136c实现。
具体的,一个感光像素单元中的四个感光像素覆盖同一颜色的滤光片,而感光像素之间可能存在感光度的差异,以至于以仿原图像转换输出的真彩图像中的纯色区域会出现固定型谱噪声,影响图像的质量。因此,需要对色块图像进行串扰补偿。
请再参阅图13,在某些实施方式中,步骤S137后包括步骤:
S138b:对仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
请再参阅图14,在某些实施方式中,第一转换模块13还包括第一处理模块138b。
也即是说,步骤S138b可以由第一处理模块138b实现。
可以理解,将色块图像转换为仿原图像后,仿原像素排布为典型的拜耳阵列,可采用第一处理模块138b进行处理,处理过程中包括镜片阴影校正、去马赛克、降噪和边缘锐化处理,如此,处理后即可得到真彩图像输出给用户。
对于一帧合并图像的预定区域外的图像,请参阅图8,以图8为例,由于合并图像为非典型的拜耳阵列,图像传感器21将16M的感光像素合并成4M直接输出,因此,为便于后续的色块图像与合并图像的合成,需要先利用第二插值算法将合并图像拉伸放大以转换成大小与色块图像相同的还原图像。请参阅图15,对色块图像的预定区域内图像利用第一插值算法处理成为仿原图像,对合并图像利用第二插值算法处理成为还原图像后,将两幅图像进行合成以得到合并仿原图像。合并仿原图像中预定区域内 的图像具有较高的解析度。
请参阅图16,在某些实施方式中,步骤S12包括以下步骤:
S121:利用第三插值算法将合并图像转换成预览图像,第三插值算法包括第二插值算法;
S122:控制触摸屏显示预览图像;和
S123:处理触摸屏上的用户输入确定预定区域。
请参阅图17,在某些实施方式中,选定模块12包括第二转换模块121、显示模块122及第二处理模块123。第二转换模块121用于利用第三插值算法将合并图像转换成预览图像,第三插值算法包括第二插值算法,显示模块122用于控制触摸屏显示预览图像,第二处理模块123用于处理触摸屏上的用户输入确定预定区域。
也即是说,步骤S121可以由第二转换模块121实现,步骤S122可以由显示模块122实现,步骤S123可以由第二处理模块123实现。
可以理解,用户选择预定区域时,需要在预览图像中进行选取。第二转换模块121将合并图像转换成预览图像并由显示模块122显示。其中,合并图像转换成预览图像的过程采用第三插值算法进行计算。第三插值算法包括第二插值算法和双线性插值法。转换过程中,首先利用第二插值算法将合并图像转换成拜耳阵列的还原图像,再利用双线性插值法将仿原图像转换成真彩图像。如此,用户可进行预览,便于用户选取预定区域。
需要说明的是,还原图像转换成真彩图像的算法不限定于双线性插值法,也可采用其他插值算法进行计算。
请参阅图18,在某些实施方式中,步骤S123包括以下步骤:
S1231:将预览图像划分为阵列排布的扩展单元;
S1232:处理用户输入以识别触摸位置;
S1233:确定触摸位置所在的原点扩展单元,扩展单元包括原点扩展单元;
S1234:计算以原点扩展单元为中心向外依次扩展的每个扩展单元的反差值;
S1235:当反差值超过预定阈值时确定对应的扩展单元为边缘扩展单元,扩展单元包括边缘扩展单元;和
S1236:确定边缘扩展单元围成的区域为预定区域。
请参阅图19,在某些实施方式中,第二处理模块123包括划分单元1231、第一识别单元1232、定点单元1233、第四计算单元1234、第一处理单元1235、第二处理单元1236。划分单元1231用于将预览图像划分为阵列排布的扩展单元,第一识别单元 1232用于识别用户输入以识别触摸位置,定点单元1233用于确定触摸位置所在的原点扩展单元,扩展单元包括原点扩展单元,第四计算单元1234用于计算以原点扩展单元为中心依次向外扩展的每个扩展单元的反差值,第一处理单元1235用于当反差值超过预定阈值时确定对应的扩展单元为边缘扩展单元,扩展单元包括边缘扩展单元,第二处理单元1236用于确定边缘扩展单元围成的区域为预定区域。
也即是说,步骤S1231可以由划分单元1231实现,步骤S1232可以由第一识别单元实现,步骤S1233可以由定点单元1233实现,步骤S1234可以由第四计算单元1234实现,步骤S1235可以由第一处理单元1235实现,步骤S1236可以由第二处理单元1236实现。
具体地,请参阅图20,以图20为例,黑色圆点为用户的触摸位置,以该触摸位置为原点扩展单元向外扩展,图中每个方框即为一个扩展单元。随后比较每个扩展单元的反差值与预设阈值的大小,反差值大于预设阈值的扩展单元即为边缘扩展单元。图20中所示人脸的边缘部分所在的扩展单元的反差值均大于预设阈值,也即是说,图中人脸边缘部分所在的扩展单元即为边缘扩展单元,图中灰色方框为边缘扩展单元。如此,将多个边缘扩展单元围成的区域作为预定区域,预定区域是由用户指定的处理区域,即用户关注的主体部分,采用第一插值算法对该区域内的图像进行处理,提升用户关注的主体部分的图像的解析度,提升用户体验。
请参阅图21,在某些实施方式中,步骤S123包括以下步骤:
S1237:处理用户输入以识别触摸位置;和
S1238:以触摸位置为中心向外扩展预定形状的区域作为预定区域。
请参阅图22,在某些实施方式中,第二处理模块123包括第二识别单元1237和扩展单元1238。第二识别单元1237用于处理用户输入以识别触摸位置,扩展单元1238用于以触摸位置为中心向外扩展预定形状的区域作为预定区域。
也即是说,步骤S1237可以由第二识别单元1237实现,步骤S1238可以由扩展单元1238实现。
请参阅图23,在某些实施方式中,步骤S1238包括以下步骤:
S12381:处理用户输入以确定预定形状。
请参阅图24,在某些实施方式中,扩展单元1238包括第三处理单元12381,第三处理单元12381用于处理用户输入以确定预定形状。
也即是说,步骤S12381可以由第三处理单元12381实现。
具体地,请参阅图25,图中的黑色圆点为用户的触摸位置。以该触摸位置为中心, 向外扩展圆形的区域以生成预定区域。图中所示以圆形扩展的预定区域包括了整个人脸部分,可以对人脸部分采用第一插值算法以提高人脸部分的解析度。
需要说明的是,在其它具体实施例中,扩展的预定形状还可以是矩形、方形或其他形状,用户可以根据实际所需对扩展的预定形状进行大小的调整以及拖动。此外,用户指定预定区域的方式还包括用户直接在触摸屏上勾画出一个任意形状作为预定区域,或者用户在触摸屏上选取几个点,这几个点连线后围成的区域作为预定区域后进行第一插值算法的图像处理。如此,便于用户自主选择关注的主体部分以提高主体部分的清晰度,从而进一步提升用户体验。
请参阅图26,本发明实施方式的电子装置100包括控制装置10、触摸屏30和成像装置20。
在某些实施方式中,电子装置100包括手机和平板电脑。
手机和平板电脑均带有摄像头即成像装置20,用户使用手机或平板电脑进行拍摄时,可以采用本发明实施方式的控制方法,以得到高解析度的图片。
需要说明的是,电子装置100也包括其他具有拍摄功能的电子设备。本发明实施方式的控制方法是电子装置100进行图像处理的指定处理模式之一。也即是说,用户利用电子装置100进行拍摄时,需要对电子装置100中包含的各种指定处理模式进行选择,当用户选择本发明实施方式的指定处理模式时,用户可以自主选择预定区域,电子装置100采用本发明实施方式的控制方法进行图像处理。
在某些实施方式中,成像装置20包括前置相机和后置相机。
可以理解,许多电子装置100包括前置相机和后置相机,前置相机和后置相机均可采用本发明实施方式的控制方法实现图像处理,以提升用户体验。
请参阅图27,本发明实施方式的电子装置100包括处理器40、存储器50、电路板60、电源电路70和壳体80。其中,电路板60安置在壳体80围成的空间内部,处理器40和存储器50设置在电路板60上;电源电路70用于为电子装置100的各个电路或器件供电;存储器50用于存储可执行程序代码;处理器40通过读取存储器50中存储的可执行程序代码来运行与可执行程序代码对应的程序,以实现上述本发明任一实施方式的控制方法。
例如,处理器40可以用于执行以下步骤:
S11:控制图像传感器21输出同一场景的合并图像和色块图像,合并图像包括合并像素阵列,同一感光像素单元的多个感光像素合并输出作为一个合并像素,色块图像包括预定阵列排布的图像像素单元,图像像素单元包括多个原始像素,每个感光像素对应一个原始像素;
S12:根据用户输入在合并图像上确定预定区域;
S13:利用第一插值算法将色块图像转换成仿原图像,仿原图像包括阵列排布的仿原像素,仿原像素包括当前像素,原始像素包括与当前像素对应的关联像素,色块图像包括预定区域,将色块图像转换成仿原图像的步骤包括以下步骤:
S131:判断关联像素是否位于预定区域内;
S133:在关联像素位于预定区域内时判断当前像素的颜色与关联像素的颜色是否相同;
S135:在当前像素的颜色与关联像素的颜色相同时,将关联像素的像素值作为当前像素的像素值;和
S137:在当前像素的颜色与关联像素的颜色不同时,根据关联像素单元的像素值通过第一插值算法计算当前像素的像素值,图像像素单元包括关联像素单元,关联像素单元的颜色与当前像素相同且与当前像素相邻;和
S139:将合并图像通过第二插值算法转换成与仿原图像对应的还原图像,第二插值算法的复杂度小于第一插值算法;和
S14:合并仿原图像及还原图像以得到合并仿原图像。
本发明实施方式的计算机可读存储介质,具有存储于其中的指令。当电子装置100的处理器40执行指令时,电子装置100执行上述本发明任一实施方式的控制方法。
例如,电子装置100可以执行以下步骤:
S11:控制图像传感器21输出同一场景的合并图像和色块图像,合并图像包括合并像素阵列,同一感光像素单元的多个感光像素合并输出作为一个合并像素,色块图像包括预定阵列排布的图像像素单元,图像像素单元包括多个原始像素,每个感光像素对应一个原始像素;
S12:根据用户输入在合并图像上确定预定区域;
S13:利用第一插值算法将色块图像转换成仿原图像,仿原图像包括阵列排布的仿原像素,仿原像素包括当前像素,原始像素包括与当前像素对应的关联像素,色块图像包括预定区域,将色块图像转换成仿原图像的步骤包括以下步骤:
S131:判断关联像素是否位于预定区域内;
S133:在关联像素位于预定区域内时判断当前像素的颜色与关联像素的颜色是否相同;
S135:在当前像素的颜色与关联像素的颜色相同时,将关联像素的像素值作为当前像素的像素值;和
S137:在当前像素的颜色与关联像素的颜色不同时,根据关联像素单元的像素值通过第一插值算法计算当前像素的像素值,图像像素单元包括关联像素单元,关联像素单元的颜色与当前像素相同且与当前像素相邻;和
S139:将合并图像通过第二插值算法转换成与仿原图像对应的还原图像,第二插值算法的复杂度小于第一插值算法;和
S14:合并仿原图像及还原图像以得到合并仿原图像。
在本说明书的描述中,参考术语“一个实施方式”、“一些实施方式”、“示意性实施方式”、“示例”、“具体示例”、或“一些示例”等的描述意指结合实施方式或示例描述的具体特征、结构、材料或者特点包含于本发明的至少一个实施方式或示例中。在本说明书中,对上述术语的示意性表述不一定指的是相同的实施方式或示例。而且,描述的具体特征、结构、材料或者特点可以在任何的一个或多个实施方式或示例中以合适的方式结合。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于实现特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本发明的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本发明的实施例所属技术领域的技术人员所理解。
在流程图中表示或在此以其他方式描述的逻辑和/或步骤,例如,可以被认为是用于实现逻辑功能的可执行指令的定序列表,可以具体实现在任何计算机可读介质中,以供指令执行系统、装置或设备(如基于计算机的系统、包括处理器的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用,或结合这些指令执行系统、装置或设备而使用。就本说明书而言,"计算机可读介质"可以是任何可以包含、存储、通信、传播或传输程序以供指令执行系统、装置或设备或结合这些指令执行系统、装置或设备而使用的装置。计算机可读介质的更具体的示例(非穷尽性列表)包括以下:具有一个或多个布线的电连接部(电子装置),便携式计算机盘盒(磁装置),随机存取存储器(RAM),只读存储器(ROM),可擦除可编辑只读存储器(EPROM或闪速存储器),光纤装置,以及便携式光盘只读存储器(CDROM)。另外,计算机可读介质甚至可以是可在其上打印程序的纸或其他合适的介质,因为可以例如通过对纸或其他介质进行光学扫描,接着进行编辑、解译或必要时以其他合适方式进行处理来以电子方式获得程序,然后将其存储在计算机存储器中。
应当理解,本发明的各部分可以用硬件、软件、固件或它们的组合来实现。在上述实施方式中,多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来实现。例如,如果用硬件来实现,和在另一实施方式中一样,可用本领域公知的下列技术中的任一项或它们的组合来实现:具有用于对数据信号实现逻辑功能的逻辑门电路的离散逻辑电路,具有合适的组合逻辑门电路的专用集成电路,可编程门阵列(PGA),现场可编程门阵列(FPGA)等。
本技术领域的普通技术人员可以理解,实现上述实施方式方法携带的全部或部分步骤是可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,该程序在执行时,包括方法实施例的步骤之一或其组合。
此外,在本发明各个实施例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。
上述提到的存储介质可以是只读存储器,磁盘或光盘等。尽管上面已经示出和描述了本发明的实施例,可以理解的是,上述实施例是示例性的,不能理解为对本发明的限制,本领域的普通技术人员在本发明的范围内可以对上述实施例进行变化、修改、替换和变型。

Claims (29)

  1. 一种控制方法,用于控制电子装置,其特征在于,所述电子装置包括成像装置和触摸屏,所述成像装置包括图像传感器,所述图像传感器包括感光像素单元阵列和设置在所述感光像素单元阵列上的滤光片单元阵列,每个所述滤光片单元覆盖对应一个所述感光像素单元,每个所述感光像素单元包括多个感光像素,所述控制方法包括以下步骤:
    控制所述图像传感器输出同一场景的合并图像和色块图像,所述合并图像包括合并像素阵列,同一所述感光像素单元的多个感光像素合并输出作为一个所述合并像素,所述色块图像包括预定阵列排布的图像像素单元,所述图像像素单元包括多个原始像素,每个所述感光像素对应一个所述原始像素;
    根据用户输入在所述合并图像上确定预定区域;
    利用第一插值算法将所述色块图像转换成仿原图像,所述仿原图像包括阵列排布的仿原像素,所述仿原像素包括当前像素,所述原始像素包括与所述当前像素对应的关联像素,所述色块图像包括预定区域,所述将所述色块图像转换成所述仿原图像的步骤包括以下步骤:
    判断所述关联像素是否位于所述预定区域内;
    在所述关联像素位于所述预定区域内时判断所述当前像素的颜色与所述关联像素的颜色是否相同;
    在所述当前像素的颜色与所述关联像素的颜色相同时,将所述关联像素的像素值作为所述当前像素的像素值;和
    在所述当前像素的颜色与所述关联像素的颜色不同时,根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值,所述图像像素单元包括所述关联像素单元,所述关联像素单元的颜色与所述当前像素相同且与所述当前像素相邻;和
    将所述合并图像通过第二插值算法转换成与所述仿原图像对应的还原图像,所述第二插值算法的复杂度小于所述第一插值算法;和
    合并所述仿原图像及所述还原图像以得到所述合并仿原图像。
  2. 根据权利要求1所述的控制方法,其特征在于,所述预定阵列包括拜耳阵列。
  3. 根据权利要求1所述的控制方法,其特征在于,所述图像像素单元包括2*2阵列的所述原始像素。
  4. 根据权利要求1所述的控制方法,其特征在于,所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤包括以下步骤:
    计算所述关联像素各个方向上的渐变量;
    计算所述关联像素各个方向上的权重;和
    根据所述渐变量及所述权重计算所述当前像素的像素值。
  5. 根据权利要求1所述的控制方法,其特征在于,所述控制方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤前包括以下步骤:
    对所述色块图像做白平衡补偿;
    所述控制方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤后包括以下步骤:
    对所述仿原图像做白平衡补偿还原。
  6. 根据权利要求1所述的控制方法,其特征在于,所述控制方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤前包括以下步骤:
    对所述色块图像做坏点补偿。
  7. 根据权利要求1所述的控制方法,其特征在于,所述控制方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤前包括以下步骤:
    对所述色块图像做串扰补偿。
  8. 根据权利要求1所述的控制方法,其特征在于,所述控制方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤后包括如下步骤:
    对所述仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
  9. 根据权利要求1所述的控制方法,其特征在于,所述根据用户输入在所述合并图像上确定预定区域的步骤包括以下步骤:
    利用第三插值算法将所述合并图像转换成预览图像,所述第三插值算法包括所述第二插值算法;
    控制所述触摸屏显示所述预览图像;和
    处理所述触摸屏上的用户输入确定所述预定区域。
  10. 根据权利要求9所述的控制方法,其特征在于,所述处理所述触摸屏上的用户输入确定所述预定区域的步骤包括以下步骤:
    将所述预览图像划分为阵列排布的扩展单元;
    处理所述用户输入以识别触摸位置;
    确定所述触摸位置所在的原点扩展单元,所述扩展单元包括所述原点扩展单元;
    计算以所述原点扩展单元为中心向外依次扩展的每个所述扩展单元的反差值;
    当所述反差值超过预定阈值时确定对应的所述扩展单元为边缘扩展单元,所述扩展单元包括所述边缘扩展单元;和
    确定所述边缘扩展单元围成的区域为所述预定区域。
  11. 根据权利要求9所述的控制方法,其特征在于,所述处理所述触摸屏上的用户输入确定所述预定区域的步骤包括以下步骤:
    处理所述用户输入以识别触摸位置;和
    以所述触摸位置为中心向外扩展预定形状的区域作为所述预定区域。
  12. 根据权利要求11所述的控制方法,其特征在于,所述以所述触摸位置为中心向外扩展预定形状的区域作为所述预定区域的步骤包括以下步骤:
    处理所述用户输入以确定所述预定形状。
  13. 一种控制装置,用于控制电子装置,其特征在于,所述电子装置包括成像装置和触摸屏,所述成像装置包括图像传感器,所述图像传感器包括感光像素单元阵列和设置在所述感光像素单元阵列上的滤光片单元阵列,每个所述滤光片单元覆盖对应一个所述感光像素单元,每个所述感光像素单元包括多个感光像素,所述控制装置包括:
    输出模块,所述输出模块用于控制所述图像传感器输出同一场景的合并图像和色块图像,所述合并图像包括合并像素阵列,同一所述感光像素单元的多个感光像素合并输出作为一个所述合并像素,所述色块图像包括预定阵列排布的图像像素单元,所述图像像素单元包括多个原始像素,每个所述感光像素对应一个所述原始像素;
    选定模块,所述选定模块用于根据用户输入在所述合并图像上确定预定区域;
    第一转换模块,所述第一转换模块用于利用第一插值算法将所述色块图像转换成仿原图像,所述仿原图像包括阵列排布的仿原像素,所述仿原像素包括当前像素,所述原始像素包括与所述当前像素对应的关联像素,所述色块图像包括预定区域,所述第一转换模块包括:
    第一判断模块,所述第一判断模块用于判断所述关联像素是否位于所述预定区域内;
    第二判断模块,所述第二判断模块用于在所述关联像素位于所述预定区域内时判断所述当前像素的颜色与所述关联像素的颜色是否相同;
    第一计算模块,所述第一计算模块用于在所述当前像素的颜色与所述关联像素的颜色相同时,将所述关联像素的像素值作为所述当前像素的像素值;和
    第二计算模块,所述第二计算模块用于在所述当前像素的颜色与所述关联像素的颜色不同时,根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值,所述图像像素单元包括所述关联像素单元,所述关联像素单元的颜色与所述当前像素相同且与所述当前像素相邻;和
    第三计算模块,所述第三计算模块用于将所述合并图像通过第二插值算法转换成与所述仿原图像对应的还原图像,所述第二插值算法的复杂度小于所述第一插值算法;和
    合并模块,所述合并模块用于合并所述仿原图像及所述还原图像以得到所述合并仿原图像。
  14. 根据权利要求13所述的控制装置,其特征在于,所述预定阵列包括拜耳阵列。
  15. 根据权利要求13所述的控制装置,其特征在于,所述图像像素单元包括2*2阵列的所述原始像素。
  16. 根据权利要求13所述的控制装置,其特征在于,所述第二计算模块包括:
    第一计算单元,所述第一计算单元用于计算所述关联像素各个方向上的渐变量;
    第二计算单元,所述第二计算单元用于计算所述关联像素各个方向上的权重;和
    第三计算单元,所述第三计算单元用于根据所述渐变量及所述权重计算所述当前像素的像素值。
  17. 根据权利要求13所述的控制装置,其特征在于,所述第一转换模块包括:
    白平衡补偿模块,所述白平衡补偿模块用于对所述色块图像做白平衡补偿;
    白平衡补偿还原模块,所述白平衡补偿还原模块用于对所述仿原图像做白平衡补偿还原。
  18. 根据权利要求13所述的控制装置,其特征在于,所述第一转换模块包括:
    坏点补偿模块,所述坏点补偿模块用于对所述色块图像做坏点补偿。
  19. 根据权利要求13所述的控制装置,其特征在于,所述第一转换模块包括:
    串扰补偿模块,所述串扰补偿模块用于对所述色块图像做串扰补偿。
  20. 根据权利要求13所述的控制装置,其特征在于,所述第一转换模块包括:
    第一处理模块,所述第一处理模块用于对所述仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
  21. 根据权利要求13所述的控制装置,其特征在于,所述选定模块包括:
    第二转换模块,所述第二转换模块用于利用第三插值算法将所述合并图像转换成预览图像,所述第三插值算法包括所述第二插值算法;
    显示模块,所述显示模块用于控制所述触摸屏显示所述预览图像;和
    第二处理模块,所述第二处理模块用于处理所述触摸屏上的用户输入确定所述预定区域。
  22. 根据权利要求21所述的控制装置,其特征在于,所述第二处理模块包括:
    划分单元,所述划分单元用于将所述预览图像划分为阵列排布的扩展单元;
    第一识别单元,所述第一识别单元用于处理所述用户输入以识别触摸位置;
    定点单元,所述定点单元用于确定所述触摸位置所在的原点扩展单元,所述扩展单元包括所述原点扩展单元;
    第四计算单元,所述第四计算单元用于计算以所述原点扩展单元为中心向外依次扩展的每个所述扩展单元的反差值;
    第一处理单元,所述第一处理单元用于当所述反差值超过预定阈值时确定对应的所述扩展单元为边缘扩展单元,所述扩展单元包括所述边缘扩展单元;和
    第二处理单元,所述第二处理单元用于确定所述边缘扩展单元围成的区域为所述预定区域。
  23. 根据权利要求21所述的控制装置,其特征在于,所述第二处理模块包括:
    第二识别单元,所述第二识别单元用于处理所述用户输入以识别触摸位置;和
    扩展单元,所述扩展单元用于以所述触摸位置为中心向外扩展预定形状的区域作为所述预定区域。
  24. 根据权利要求23所述的控制装置,其特征在于,所述扩展单元包括:
    第三处理单元,所述第三处理单元用于处理所述用户输入以确定所述预定形状。
  25. 一种电子装置,其特征在于包括:
    成像装置;
    触摸屏;和
    权利要求13-24任意一项所述的控制装置。
  26. 根据权利要求25所述的电子装置,其特征在于,所述电子装置包括手机和平板电脑。
  27. 根据权利要求25所述的电子装置,其特征在于,所述成像装置包括前置相机和后置相机。
  28. 一种电子装置,包括壳体、处理器、存储器、电路板和电源电路,其特征在于,所述电路板安置在所述壳体围成的空间内部,所述处理器和所述存储器设置在所述电路板上;所述电源电路用于为所述电子装置的各个电路或器件供电;所述存储器用于存储可执行程序代码;所述处理器通过读取所述存储器中存储的可执行程序代码来运行与所述可执行程序代码对应的程序,以用于执行权利要求1至12中任意一项所述的控制方法。
  29. 一种计算机可读存储介质,具有存储于其中的指令,当电子装置的处理器执行所述指令时,所述电子装置执行权利要求1至12中任意一项所述的控制方法。
PCT/CN2017/081724 2016-11-29 2017-04-24 控制方法、控制装置、电子装置和计算机可读存储介质 WO2018098978A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611078974.7A CN106357967B (zh) 2016-11-29 2016-11-29 控制方法、控制装置和电子装置
CN201611078974.7 2016-11-29

Publications (1)

Publication Number Publication Date
WO2018098978A1 true WO2018098978A1 (zh) 2018-06-07

Family

ID=57862867

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/081724 WO2018098978A1 (zh) 2016-11-29 2017-04-24 控制方法、控制装置、电子装置和计算机可读存储介质

Country Status (5)

Country Link
US (2) US10277803B2 (zh)
EP (1) EP3327666B1 (zh)
CN (1) CN106357967B (zh)
ES (1) ES2731695T3 (zh)
WO (1) WO2018098978A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106713790B (zh) 2016-11-29 2019-05-10 Oppo广东移动通信有限公司 控制方法、控制装置及电子装置
CN106507068B (zh) 2016-11-29 2018-05-04 广东欧珀移动通信有限公司 图像处理方法及装置、控制方法及装置、成像及电子装置
CN106357967B (zh) * 2016-11-29 2018-01-19 广东欧珀移动通信有限公司 控制方法、控制装置和电子装置
CN106604001B (zh) 2016-11-29 2018-06-29 广东欧珀移动通信有限公司 图像处理方法、图像处理装置、成像装置及电子装置
CN107370917B (zh) * 2017-06-30 2020-01-10 Oppo广东移动通信有限公司 控制方法、电子装置和计算机可读存储介质
US11282168B2 (en) * 2017-10-23 2022-03-22 Sony Interactive Entertainment Inc. Image processing apparatus, image processing method, and program
CN108419022A (zh) * 2018-03-06 2018-08-17 广东欧珀移动通信有限公司 控制方法、控制装置、计算机可读存储介质和计算机设备
CN111785220B (zh) * 2019-04-03 2022-02-08 名硕电脑(苏州)有限公司 显示器校正方法与***
CN112785533B (zh) * 2019-11-07 2023-06-16 RealMe重庆移动通信有限公司 图像融合方法、图像融合装置、电子设备与存储介质
JP2023010159A (ja) * 2021-07-09 2023-01-20 株式会社ソシオネクスト 画像処理装置および画像処理方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100128039A1 (en) * 2008-11-26 2010-05-27 Kwang-Jun Cho Image data processing method, image sensor, and integrated circuit
CN102547080A (zh) * 2010-12-31 2012-07-04 联想(北京)有限公司 摄像模组以及包含该摄像模组的信息处理设备
CN104423946A (zh) * 2013-08-30 2015-03-18 联想(北京)有限公司 一种图像处理方法以及电子设备
CN105611258A (zh) * 2015-12-18 2016-05-25 广东欧珀移动通信有限公司 图像传感器的成像方法、成像装置和电子装置
CN106357967A (zh) * 2016-11-29 2017-01-25 广东欧珀移动通信有限公司 控制方法、控制装置和电子装置

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1273930C (zh) * 2001-06-06 2006-09-06 皇家菲利浦电子有限公司 转换器和转换方法以及图像处理设备
JP5012333B2 (ja) * 2007-08-30 2012-08-29 コニカミノルタアドバンストレイヤー株式会社 画像処理装置および画像処理方法ならびに撮像装置
JP5068158B2 (ja) * 2007-12-28 2012-11-07 イーストマン コダック カンパニー 撮像装置
US7745779B2 (en) 2008-02-08 2010-06-29 Aptina Imaging Corporation Color pixel arrays having common color filters for multiple adjacent pixels for use in CMOS imagers
US8130278B2 (en) * 2008-08-01 2012-03-06 Omnivision Technologies, Inc. Method for forming an improved image using images with different resolutions
CN101815157B (zh) 2009-02-24 2013-01-23 虹软(杭州)科技有限公司 图像及视频的放大方法与相关的图像处理装置
JP5644177B2 (ja) * 2010-05-07 2014-12-24 ソニー株式会社 固体撮像装置、および、その製造方法、電子機器
DE102011100350A1 (de) 2011-05-03 2012-11-08 Conti Temic Microelectronic Gmbh Bildsensor mit einstellbarer Auflösung
JP2013066146A (ja) 2011-08-31 2013-04-11 Sony Corp 画像処理装置、および画像処理方法、並びにプログラム
WO2014045689A1 (ja) * 2012-09-19 2014-03-27 富士フイルム株式会社 画像処理装置、撮像装置、プログラム及び画像処理方法
US9479695B2 (en) 2014-07-31 2016-10-25 Apple Inc. Generating a high dynamic range image using a temporal filter
JP6369233B2 (ja) * 2014-09-01 2018-08-08 ソニー株式会社 固体撮像素子及びその信号処理方法、並びに電子機器
CN104580910B (zh) 2015-01-09 2018-07-24 宇龙计算机通信科技(深圳)有限公司 基于前、后摄像头的图像合成方法及***
US20170019615A1 (en) * 2015-07-13 2017-01-19 Asustek Computer Inc. Image processing method, non-transitory computer-readable storage medium and electrical device thereof
CN105578078B (zh) * 2015-12-18 2018-01-19 广东欧珀移动通信有限公司 图像传感器、成像装置、移动终端及成像方法
CN106713790B (zh) * 2016-11-29 2019-05-10 Oppo广东移动通信有限公司 控制方法、控制装置及电子装置


Also Published As

Publication number Publication date
US20180255234A1 (en) 2018-09-06
US10469736B2 (en) 2019-11-05
US10277803B2 (en) 2019-04-30
EP3327666B1 (en) 2019-05-29
CN106357967B (zh) 2018-01-19
EP3327666A1 (en) 2018-05-30
CN106357967A (zh) 2017-01-25
ES2731695T3 (es) 2019-11-18
US20180152625A1 (en) 2018-05-31

Similar Documents

Publication Publication Date Title
WO2018098978A1 (zh) 控制方法、控制装置、电子装置和计算机可读存储介质
WO2018099009A1 (zh) 控制方法、控制装置、电子装置和计算机可读存储介质
US10531019B2 (en) Image processing method and apparatus, and electronic device
WO2018098981A1 (zh) 控制方法、控制装置、电子装置和计算机可读存储介质
WO2018098982A1 (zh) 图像处理方法、图像处理装置、成像装置及电子装置
US10110809B2 (en) Control method and apparatus, and electronic device
WO2018098983A1 (zh) 图像处理方法及装置、控制方法及装置、成像及电子装置
US8305487B2 (en) Method and apparatus for controlling multiple exposures
WO2017101451A1 (zh) 成像方法、成像装置及电子装置
US10339632B2 (en) Image processing method and apparatus, and electronic device
WO2018099007A1 (zh) 控制方法、控制装置及电子装置
US10249021B2 (en) Image processing method and apparatus, and electronic device
WO2018099005A1 (zh) 控制方法、控制装置及电子装置
US10165205B2 (en) Image processing method and apparatus, and electronic device
US10262395B2 (en) Image processing method and apparatus, and electronic device
WO2018099031A1 (zh) 控制方法和电子装置
WO2018098977A1 (zh) 图像处理方法、图像处理装置、成像装置、制造方法和电子装置
WO2018099006A1 (zh) 控制方法、控制装置及电子装置
WO2018196704A1 (zh) 双核对焦图像传感器及其对焦控制方法和成像装置
US9848116B2 (en) Solid-state image sensor, electronic device, and auto focusing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17877014

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17877014

Country of ref document: EP

Kind code of ref document: A1