WO2018098983A1 - Image processing method and device, control method and device, imaging device and electronic device - Google Patents

Image processing method and device, control method and device, imaging device and electronic device

Info

Publication number
WO2018098983A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
image
image processing
color
interpolation algorithm
Prior art date
Application number
PCT/CN2017/081920
Other languages
English (en)
French (fr)
Inventor
韦怡
Original Assignee
广东欧珀移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东欧珀移动通信有限公司
Publication of WO2018098983A1 publication Critical patent/WO2018098983A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4015Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/843Demosaicing, e.g. interpolating colour pixel values
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/88Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/134Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N25/46Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by combining or binning pixels
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/68Noise processing, e.g. detecting, correcting, reducing or removing noise applied to defects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/646Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/73Colour balance circuits, e.g. white balance circuits or colour temperature control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2209/00Details of colour television systems
    • H04N2209/04Picture signal generators
    • H04N2209/041Picture signal generators using solid-state devices
    • H04N2209/042Picture signal generators using solid-state devices having a single pick-up sensor
    • H04N2209/045Picture signal generators using solid-state devices having a single pick-up sensor using mosaic colour filter
    • H04N2209/046Colour interpolation to calculate the missing colour values
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N23/635Region indicators; Field of view indicators

Definitions

  • The present invention relates to image processing technologies, and in particular, to an image processing method and device, a control method and device, an imaging device, and an electronic device.
  • An existing image sensor includes a pixel unit array and a filter unit array disposed on the pixel unit array, each filter unit covering a corresponding one of the photosensitive pixel units, and each of the photosensitive pixel units includes a plurality of photosensitive pixels.
  • In operation, the image sensor may be controlled to expose and output a merged image. The merged image includes a merged pixel array, and the plurality of photosensitive pixels of the same pixel unit are merged and output as one merged pixel. In this way, the signal-to-noise ratio of the merged image can be improved; however, the resolution of the merged image is lowered.
  • Alternatively, the image sensor may be controlled to output a high-pixel-count patch image. The patch image includes an original pixel array, and each photosensitive pixel corresponds to one original pixel.
  • However, since the plurality of original pixels corresponding to the same filter unit have the same color, the resolution of the patch image cannot be improved. Therefore, the high-pixel-count patch image needs to be converted into a high-pixel-count pseudo-original image by interpolation, and the pseudo-original image may include pseudo-original pixels arranged in a Bayer array.
  • The pseudo-original image can be converted into a true-color image by image processing and saved. Interpolation can improve the sharpness of the true-color image, but it is resource-intensive and time-consuming, resulting in longer shooting times and a poor user experience. Moreover, in practical applications, users tend to focus only on the sharpness of the main part of the true-color image.
  • Embodiments of the present invention provide an image processing method, a control method, an image processing device, a control device, an imaging device, and an electronic device.
  • An image processing method according to embodiments of the present invention is for converting a patch image into a pseudo-original image. The patch image includes image pixel units arranged in a predetermined array, each image pixel unit includes a plurality of original pixels, the pseudo-original image includes pseudo-original pixels arranged in an array, the pseudo-original pixels include a current pixel, the original pixels include an associated pixel corresponding to the current pixel, and the patch image includes a fixed area. The image processing method includes the following steps:
  • determining whether the associated pixel is located within the fixed area;
  • when the associated pixel is located within the fixed area, determining whether the color of the current pixel is the same as the color of the associated pixel;
  • when the color of the current pixel is the same as the color of the associated pixel, using the pixel value of the associated pixel as the pixel value of the current pixel;
  • when the color of the current pixel is different from the color of the associated pixel, calculating the pixel value of the current pixel by a first interpolation algorithm according to the pixel values of an associated pixel unit, where the image pixel units include the associated pixel unit, and the associated pixel unit has the same color as the current pixel and is adjacent to the current pixel; and
  • when the associated pixel is located outside the fixed area, calculating the pixel value of the current pixel by a second interpolation algorithm, the complexity of which is lower than that of the first interpolation algorithm.
  • A control method according to embodiments of the present invention is for controlling an electronic device. The electronic device includes an imaging device, the imaging device includes an image sensor, the image sensor includes a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array, each filter unit covers a corresponding one of the photosensitive pixel units, and each photosensitive pixel unit includes a plurality of photosensitive pixels. The control method includes the following steps:
  • controlling the image sensor to output a patch image; and
  • converting the patch image into a pseudo-original image by the image processing method described above.
  • An image processing apparatus according to embodiments of the present invention is configured to convert a patch image into a pseudo-original image. The patch image includes image pixel units arranged in a predetermined array, each image pixel unit includes a plurality of original pixels, the pseudo-original image includes pseudo-original pixels arranged in an array, the pseudo-original pixels include a current pixel, the original pixels include an associated pixel corresponding to the current pixel, and the patch image includes a fixed area. The image processing apparatus includes a first determining module, a second determining module, a first calculating module, a second calculating module, and a third calculating module.
  • The first determining module is configured to determine whether the associated pixel is located within the fixed area. The second determining module is configured to determine, when the associated pixel is located within the fixed area, whether the color of the current pixel is the same as the color of the associated pixel. The first calculating module is configured to use the pixel value of the associated pixel as the pixel value of the current pixel when the color of the current pixel is the same as the color of the associated pixel.
  • The second calculating module is configured to calculate, when the color of the current pixel is different from the color of the associated pixel, the pixel value of the current pixel by a first interpolation algorithm according to the pixel values of an associated pixel unit, where the image pixel units include the associated pixel unit, and the associated pixel unit has the same color as the current pixel and is adjacent to the current pixel. The third calculating module is configured to calculate, when the associated pixel is located outside the fixed area, the pixel value of the current pixel by a second interpolation algorithm, the complexity of which is lower than that of the first interpolation algorithm.
  • A control device according to embodiments of the present invention is for controlling an electronic device. The electronic device includes an imaging device, the imaging device includes an image sensor, the image sensor includes a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array, each filter unit covers a corresponding one of the photosensitive pixel units, and each photosensitive pixel unit includes a plurality of photosensitive pixels. The control device includes an output module and an image processing device.
  • The output module is configured to control the image sensor to output a patch image, and the image processing device is configured to convert the patch image into a pseudo-original image by the image processing method described above.
  • An electronic device according to embodiments of the present invention includes the above imaging device, a touch screen, and the above control device.
  • An electronic device according to embodiments of the present invention includes a housing, a processor, a memory, a circuit board, and a power supply circuit.
  • The circuit board is disposed inside a space enclosed by the housing, and the processor and the memory are disposed on the circuit board; the power supply circuit is configured to supply power to each circuit or device of the electronic device;
  • the memory is configured to store executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to execute the image processing method and the control method described above.
  • With the image processing method, control method, image processing device, control device, imaging device, and electronic device described above, a first interpolation algorithm capable of improving image resolution and definition is applied to the image within the fixed area, while the image outside the fixed area is processed by a second interpolation algorithm whose complexity is lower than that of the first interpolation algorithm.
  • On the one hand, the signal-to-noise ratio, resolution, and definition of the main part of the image are improved and the user experience is enhanced; on the other hand, the amount of data and the time required for image processing are reduced.
  • FIG. 1 is a schematic flow chart of an image processing method according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of functional blocks of an image processing apparatus according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of functional modules of an image forming apparatus according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an image sensor module according to an embodiment of the present invention.
  • FIG. 5 is a circuit diagram of an image sensor according to an embodiment of the present invention.
  • Figure 6 is a schematic view of a filter unit of an embodiment of the invention.
  • FIG. 7 is a schematic structural diagram of an image sensor according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram showing a state of a merged image according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram showing a state of a patch image according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram showing a state of an image processing method according to an embodiment of the present invention.
  • FIG. 11 is a schematic flow chart of an image processing method according to some embodiments of the present invention.
  • FIG. 12 is a schematic diagram of functional blocks of an image processing apparatus according to some embodiments of the present invention.
  • FIG. 13 is a schematic flow chart of an image processing method according to some embodiments of the present invention.
  • FIG. 14 is a schematic diagram of functional blocks of an image processing apparatus according to some embodiments of the present invention.
  • FIG. 15 is a schematic diagram showing the state of an image processing method according to some embodiments of the present invention.
  • FIG. 16 is a schematic flow chart of a control method according to an embodiment of the present invention.
  • FIG. 17 is a schematic diagram of functional blocks of a control device according to an embodiment of the present invention.
  • FIG. 18 is a schematic flow chart of a control method according to some embodiments of the present invention.
  • FIG. 19 is a schematic diagram of functional blocks of a control device according to some embodiments of the present invention.
  • FIG. 20 is a schematic diagram showing the state of a control method according to some embodiments of the present invention.
  • FIG. 21 is a schematic diagram of functional modules of an electronic device according to some embodiments of the present invention.
  • FIG. 22 is a schematic diagram of functional modules of an electronic device according to some embodiments of the present invention.
  • Referring to FIG. 1, an image processing method according to embodiments of the present invention is configured to convert a patch image into a pseudo-original image. The patch image includes image pixel units arranged in a predetermined array, and each image pixel unit includes a plurality of original pixels.
  • The pseudo-original image includes pseudo-original pixels arranged in an array, the pseudo-original pixels include a current pixel, the original pixels include an associated pixel corresponding to the current pixel, and the patch image includes a fixed area. The image processing method includes the following steps: S11: determining whether the associated pixel is located within the fixed area; S13: when the associated pixel is located within the fixed area, determining whether the color of the current pixel is the same as the color of the associated pixel; S15: when the colors are the same, using the pixel value of the associated pixel as the pixel value of the current pixel; S17: when the colors are different, calculating the pixel value of the current pixel by a first interpolation algorithm according to the pixel values of the associated pixel unit; and S19: when the associated pixel is located outside the fixed area, calculating the pixel value of the current pixel by a second interpolation algorithm whose complexity is lower than that of the first interpolation algorithm.
  • an image processing method according to an embodiment of the present invention may be implemented by the image processing apparatus 10.
  • The image processing apparatus 10 includes a first determining module 11, a second determining module 13, a first calculating module 15, a second calculating module 17, and a third calculating module 19.
  • the first determining module 11 is configured to determine whether the associated pixel is located in the fixed area;
  • the second determining module 13 is configured to determine whether the color of the current pixel is the same as the color of the associated pixel when the associated pixel is located in the fixed area;
  • The second calculating module 17 is configured to calculate, when the color of the current pixel is different from the color of the associated pixel, the pixel value of the current pixel by the first interpolation algorithm according to the pixel values of the associated pixel unit, where the image pixel unit includes the associated pixel unit, and the associated pixel unit has the same color as the current pixel and is adjacent to the current pixel.
  • The third calculating module 19 is configured to calculate, when the associated pixel is located outside the fixed area, the pixel value of the current pixel by the second interpolation algorithm, and the complexity of the second interpolation algorithm is lower than that of the first interpolation algorithm.
  • the step S11 can be implemented by the first determining module 11
  • the step S13 can be implemented by the second determining module 13
  • the step S15 can be implemented by the first calculating module 15
  • the step S17 can be implemented by the second calculating module 17
  • the step S19 can be implemented by the third calculation module 19.
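  • To make the per-pixel control flow of steps S11 to S19 concrete, the following is a minimal Python sketch, not taken from the patent: the layout helpers assume 2x2 single-color units arranged in an RGGB pattern, the fixed-area test and the two interpolation algorithms are passed in as callbacks, and all names are illustrative.

```python
# Hypothetical sketch of steps S11-S19: choose how each pseudo-original
# pixel value is produced from the patch (color-block) image.

def unit_color(row, col):
    """Filter color of the 2x2 image pixel unit containing (row, col).
    Assumes the units themselves follow an RGGB Bayer layout."""
    return [["R", "G"], ["G", "B"]][(row // 2) % 2][(col // 2) % 2]

def bayer_color(row, col):
    """Color required at (row, col) in the target (typical) Bayer array."""
    return [["R", "G"], ["G", "B"]][row % 2][col % 2]

def convert_pixel(patch, row, col, in_fixed_area,
                  first_interpolation, second_interpolation):
    """Produce one pseudo-original pixel value from the patch image."""
    if not in_fixed_area(row, col):                    # S11 -> S19: outside the fixed area
        return second_interpolation(patch, row, col)
    if bayer_color(row, col) == unit_color(row, col):  # S13 -> S15: colors match
        return patch[row][col]                         # copy the associated pixel value
    return first_interpolation(patch, row, col)        # S17: gradient-weighted interpolation
```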
  • an imaging apparatus 100 includes an image processing apparatus 10 and an image sensor 20 for outputting a patch image.
  • It can be understood that the control method of the embodiments of the present invention processes the image within the fixed area and the image outside the fixed area using the first interpolation algorithm and the second interpolation algorithm, respectively.
  • The fixed area is a non-draggable area of fixed size presented on the touch screen of the imaging device during shooting preview; when shooting, the user changes the image capture position of the imaging device 100 so as to place the part of the subject that needs to be processed by the first interpolation algorithm inside this fixed area for image processing.
  • The first interpolation algorithm is used to process the image within the fixed area to improve the definition of that part of the image, and the second interpolation algorithm, whose complexity is lower than that of the first interpolation algorithm, is used to process the image outside the fixed area to reduce the time required for processing the whole frame, thereby improving the user experience while improving image quality.
  • the image sensor 20 of the embodiment of the present invention includes a photosensitive pixel unit array 212 and a filter unit array 211 disposed on the array of photosensitive pixel units 212.
  • the photosensitive pixel unit array 212 includes a plurality of photosensitive pixel units 212a, each of which includes a plurality of adjacent photosensitive pixels 2121.
  • Each of the photosensitive pixels 2121 includes a photosensitive device 21211 and a transfer tube 21212, wherein the photosensitive device 21211 can be a photodiode, and the transfer tube 21212 can be a MOS transistor.
  • the filter unit array 211 includes a plurality of filter units 211a, each of which covers a corresponding one of the photosensitive pixel units 212a.
  • In some examples, the filter unit array 211 includes a Bayer array; that is, the four adjacent filter units 211a are a red filter unit, a blue filter unit, and two green filter units, respectively.
  • Each photosensitive pixel unit 212a corresponds to a filter unit 211a of a single color. If one photosensitive pixel unit 212a includes n adjacent photosensitive devices 21211 in total, one filter unit 211a covers the n photosensitive devices 21211 of that photosensitive pixel unit 212a;
  • the filter unit 211a may be of an integral structure, or may be assembled from n independent sub-filters connected together.
  • In some embodiments, each photosensitive pixel unit 212a includes four adjacent photosensitive pixels 2121, and two adjacent photosensitive pixels 2121 together constitute one photosensitive pixel sub-unit 2120.
  • The photosensitive pixel sub-unit 2120 further includes a source follower 21213 and an analog-to-digital converter 21214.
  • The photosensitive pixel unit 212a further includes an adder 2122. One end electrode of each transfer tube 21212 in one photosensitive pixel sub-unit 2120 is connected to the cathode electrode of the corresponding photosensitive device 21211, the other end of each transfer tube 21212 is commonly connected to the gate electrode of the source follower 21213, and the source electrode of the source follower 21213 is connected to an analog-to-digital converter 21214.
  • the source follower 21213 may be a MOS transistor.
  • the two photosensitive pixel subunits 2120 are connected to the adder 2122 through respective source followers 21213 and analog to digital converters 21214.
  • the adjacent four photosensitive devices 21211 of one photosensitive pixel unit 212a of the image sensor 20 of the embodiment of the present invention share a filter unit 211a of the same color, and each photosensitive device 21211 is connected to a transmission tube 21212.
  • the two adjacent photosensitive devices 21211 share a source follower 21213 and an analog to digital converter 21214, and the adjacent four photosensitive devices 21211 share an adder 2122.
  • adjacent four photosensitive devices 21211 are arranged in a 2*2 array.
  • the two photosensitive devices 21211 in one photosensitive pixel subunit 2120 may be in the same column.
  • During imaging, when the two photosensitive pixel sub-units 2120, that is, the four photosensitive devices 21211, covered by the same filter unit 211a are exposed simultaneously, the pixels may be merged and a merged image may be output.
  • the photosensitive device 21211 is configured to convert illumination into electric charge, and the generated electric charge is proportional to the illumination intensity, and the transmission tube 21212 is configured to control the on or off of the circuit according to the control signal.
  • the source follower 21213 is configured to convert the charge signal generated by the light-sensing device 21211 into a voltage signal.
  • Analog to digital converter 21214 is used to convert the voltage signal to a digital signal.
  • the adder 2122 is for summing the two digital signals for common output for processing by the image processing module connected to the image sensor 20.
  • Referring to FIG. 8, taking a 16M image sensor 20 as an example, the image sensor 20 of the embodiments of the present invention can merge the 16M photosensitive pixels into 4M, that is, output a merged image.
  • After merging, the size of a photosensitive pixel is effectively 4 times its original size, which improves the sensitivity of the photosensitive pixels.
  • In addition, since most of the noise in the image sensor 20 is random noise, for the photosensitive pixels before merging, noise may be present in one or two of them; after the four photosensitive pixels are merged into one large photosensitive pixel, the influence of the noise on that large pixel is reduced, that is, the noise is attenuated and the signal-to-noise ratio is improved.
  • However, while the photosensitive pixel size increases, the pixel count decreases, so the resolution of the merged image also decreases.
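  • As a rough software illustration of the 4-in-1 merging described above (performed in hardware by the adder 2122), the sketch below sums every 2x2 block of same-filter photosensitive pixels into one merged pixel, which is what reduces a 16M readout to a 4M merged image; the NumPy representation and the plain summation are assumptions for illustration only.

```python
import numpy as np

def merge_image(raw):
    """Hypothetical software model of pixel merging: sum every 2x2 block of
    same-filter photosensitive pixels into a single merged pixel, halving
    each dimension (e.g. 16M pixels -> 4M merged pixels)."""
    h, w = raw.shape
    blocks = raw.reshape(h // 2, 2, w // 2, 2)
    return blocks.sum(axis=(1, 3))   # models the adder 2122 summing the unit's signals

# Example: a 4x4 raw readout becomes a 2x2 merged image.
raw = np.arange(16, dtype=np.float64).reshape(4, 4)
print(merge_image(raw))
```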
  • During imaging, when the four photosensitive devices 21211 covered by the same filter unit 211a are exposed in sequence, a patch image can be output after image processing.
  • the photosensitive device 21211 is for converting illumination into electric charge, and the generated electric charge is proportional to the intensity of the illumination, and the transmission tube 21212 is for controlling the on or off of the circuit according to the control signal.
  • the source follower 21213 is configured to convert the charge signal generated by the light-sensing device 21211 into a voltage signal.
  • Analog to digital converter 21214 is used to convert the voltage signal to a digital signal for processing by an image processing module coupled to image sensor 20.
  • Referring to FIG. 9, taking a 16M image sensor 20 as an example, the image sensor 20 of the embodiments of the present invention can also keep a 16M photosensitive pixel output, that is, output a patch image. The patch image includes image pixel units, and each image pixel unit includes original pixels arranged in a 2*2 array.
  • The size of an original pixel is the same as that of a photosensitive pixel; however, since the filter unit 211a covering the four adjacent photosensitive devices 21211 is of a single color, that is, although the four photosensitive devices 21211 are exposed separately, the filter units 211a covering them have the same color, the four adjacent original pixels output by each image pixel unit have the same color, and the resolution of the image still cannot be improved.
  • The image processing method of the embodiments of the present invention is used to process the output patch image to obtain a pseudo-original image.
  • It can be understood that when the merged image is output, the four adjacent photosensitive pixels of the same color are output as one merged pixel; the four adjacent merged pixels of the merged image can therefore still be regarded as a typical Bayer array, and can be received directly by the image processing module and processed to output a true-color image.
  • When the patch image is output, each photosensitive pixel is output separately. Since the four adjacent photosensitive pixels have the same color, the four adjacent original pixels of one image pixel unit have the same color, which is an atypical Bayer array.
  • The image processing module cannot process the atypical Bayer array directly; that is, when the image sensor 20 adopts a unified image processing mode, in order to be compatible with true-color image output in both modes, namely the true-color image output in the merged mode and the true-color image output in the patch mode, the patch image needs to be converted into a pseudo-original image, in other words, the image pixel units of the atypical Bayer array need to be converted into the pixel arrangement of a typical Bayer array.
  • The pseudo-original image includes pseudo-original pixels arranged in a Bayer array.
  • The pseudo-original pixels include a current pixel, and the original pixels include an associated pixel corresponding to the current pixel.
  • For the image within the fixed area of a frame of patch image, the fixed area of the patch image is first converted into a Bayer image array, and then image processing is performed using the first interpolation algorithm.
  • Specifically, referring to FIG. 10, the current pixels are R3'3' and R5'5', and the corresponding associated pixels are R33 and R55, respectively.
  • When obtaining the current pixel R3'3', since R3'3' has the same color as its associated pixel R33, the pixel value of R33 is used directly as the pixel value of R3'3' during conversion; when obtaining the current pixel R5'5', since the color of R5'5' differs from that of its associated pixel B55, the pixel value of B55 cannot be used directly, and the value needs to be calculated by interpolation from the associated pixel unit of R5'5'.
  • It should be noted that the pixel values mentioned above and below should be broadly understood as color attribute values of a pixel, for example, color values.
  • the associated pixel unit includes a plurality of, for example, four, original pixels in the image pixel unit that are the same color as the current pixel and are adjacent to the current pixel.
  • It should be noted that "adjacent" here should be understood broadly. Taking FIG. 10 as an example, the associated pixel corresponding to R5'5' is B55, and the image pixel units adjacent to the image pixel unit where B55 is located and containing the associated pixel unit of the same color as R5'5' are the image pixel units in which R44, R74, R47, and R77 are located, not other red image pixel units spatially farther from the image pixel unit in which B55 is located.
  • The red original pixels spatially closest to B55 are R44, R74, R47, and R77; that is, the associated pixel unit of R5'5' is composed of R44, R74, R47, and R77, and R5'5' has the same color as and is adjacent to R44, R74, R47, and R77.
  • In this way, for current pixels in different situations, the original pixels are converted into pseudo-original pixels in different manners, thereby converting the patch image into a pseudo-original image. Since a filter with a special Bayer array structure is used when the image is captured,
  • the image signal-to-noise ratio is improved, and during image processing, the patch image is interpolated by the first interpolation algorithm, which improves the resolution and definition of the image.
  • Referring to FIG. 11, in some embodiments, step S17 includes the following steps:
  • S171: calculating the gradient in each direction of the associated pixel;
  • S172: calculating the weight in each direction of the associated pixel; and
  • S173: calculating the pixel value of the current pixel according to the gradients and the weights.
  • the second calculation module 17 includes a first calculation unit 171 , a second calculation unit 172 , and a third calculation unit 173 .
  • The first calculating unit 171 is configured to calculate the gradient in each direction of the associated pixel,
  • the second calculating unit 172 is configured to calculate the weight in each direction of the associated pixel, and
  • the third calculating unit 173 is configured to calculate the pixel value of the current pixel according to the gradients and the weights.
  • step S171 can be implemented by the first calculation unit 171
  • step S172 can be implemented by the second calculation unit 172
  • step S173 can be implemented by the third calculation unit 173.
  • Specifically, the first interpolation algorithm refers to the energy gradient of the image in different directions, and calculates the pixel value of the current pixel by linear interpolation from the associated pixel unit, which has the same color as the current pixel and is adjacent to it, according to the gradient weights in the different directions.
  • In the direction in which the amount of energy change is smaller, the reference proportion is larger, and therefore the weight used in the interpolation calculation is larger.
  • In some examples, for convenience of calculation, only the horizontal and vertical directions are considered. R5'5' is obtained by interpolation from R44, R74, R47, and R77; since there are no original pixels of the same color in the horizontal and vertical directions, the components of this color in the horizontal and vertical directions need to be calculated from the associated pixel unit.
  • The components in the horizontal direction are R45 and R75, and the components in the vertical direction are R54 and R57; they can be calculated from R44, R74, R47, and R77, respectively.
  • Specifically, R45 = R44*2/3 + R47*1/3, R75 = 2/3*R74 + 1/3*R77, R54 = 2/3*R44 + 1/3*R74, and R57 = 2/3*R47 + 1/3*R77.
  • Then, the gradients and the weights in the horizontal and vertical directions are calculated respectively; that is, the gradients of this color in the different directions determine the reference weights of the different directions in the interpolation: in the direction with the smaller gradient the weight is larger, and in the direction with the larger gradient the weight is smaller.
  • The gradient in the horizontal direction is X1 = |R45 - R75|, the gradient in the vertical direction is X2 = |R54 - R57|, W1 = X1/(X1+X2), and W2 = X2/(X1+X2).
  • Thus, R5'5' = (2/3*R45 + 1/3*R75)*W2 + (2/3*R54 + 1/3*R57)*W1. It can be understood that if X1 is greater than X2, then W1 is greater than W2, so the weight of the horizontal direction in the calculation is W2 and the weight of the vertical direction is W1, and vice versa.
  • In this way, the pixel value of the current pixel can be calculated according to the first interpolation algorithm.
  • By processing the associated pixels in the above manner, the original pixels can be converted into pseudo-original pixels arranged in a typical Bayer array; that is, the four adjacent pseudo-original pixels of a 2*2 array include one red pseudo-original pixel, two green pseudo-original pixels, and one blue pseudo-original pixel.
  • It should be noted that the first interpolation algorithm is not limited to the manner in which only pixel values of the same color in the vertical and horizontal directions are considered in the calculation; for example, pixel values of other colors may also be referred to.
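  • The worked example above (the components R45, R75, R54, R57, the gradients X1 and X2, and the weights W1 and W2) translates directly into code. The sketch below reproduces that calculation for the single current pixel R5'5' from its associated pixel unit; the function name is illustrative, and the handling of the flat case where both gradients are zero is an added assumption not stated in the patent.

```python
def first_interpolation_r55(r44, r47, r74, r77):
    """Gradient-weighted estimate of R5'5' from its associated pixel unit
    (R44, R47, R74, R77), following the horizontal/vertical example above."""
    # Same-color components in the horizontal and vertical directions.
    r45 = r44 * 2 / 3 + r47 * 1 / 3
    r75 = r74 * 2 / 3 + r77 * 1 / 3
    r54 = r44 * 2 / 3 + r74 * 1 / 3
    r57 = r47 * 2 / 3 + r77 * 1 / 3

    # Gradients in the two directions; the direction with the smaller gradient
    # receives the larger weight in the final combination.
    x1 = abs(r45 - r75)          # horizontal gradient
    x2 = abs(r54 - r57)          # vertical gradient
    if x1 + x2 == 0:             # flat region (added assumption: equal weights)
        w1 = w2 = 0.5
    else:
        w1 = x1 / (x1 + x2)
        w2 = x2 / (x1 + x2)

    # R5'5' = (2/3*R45 + 1/3*R75)*W2 + (2/3*R54 + 1/3*R57)*W1
    return (r45 * 2 / 3 + r75 * 1 / 3) * w2 + (r54 * 2 / 3 + r57 * 1 / 3) * w1


print(first_interpolation_r55(100, 100, 100, 100))  # flat input -> 100.0
print(first_interpolation_r55(90, 110, 90, 110))    # zero horizontal gradient -> horizontal estimate dominates
```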
  • Referring to FIG. 13, in some embodiments, before step S17 the method includes the step:
  • S16a: performing white balance compensation on the patch image;
  • and after step S17 the method includes the step:
  • S18a: performing white balance compensation restoration on the pseudo-original image.
  • Referring to FIG. 14, in some embodiments, the image processing apparatus 10 includes a white balance compensation module 16a and a white balance compensation restoration module 18a.
  • The white balance compensation module 16a is configured to perform white balance compensation on the patch image,
  • and the white balance compensation restoration module 18a is configured to perform white balance compensation restoration on the pseudo-original image.
  • step S16a can be implemented by the white balance compensation module 16a
  • step S18a can be implemented by the white balance compensation restoration module 18a.
  • Specifically, in some examples, in the process of converting the patch image into the pseudo-original image, the interpolation of the red and blue pseudo-original pixels often refers not only to the colors of the original pixels of the channel of the same color but also to the color weights of the original pixels of the green channel. Therefore, white balance compensation needs to be performed before interpolation to exclude the influence of white balance from the interpolation calculation. In order not to destroy the white balance of the patch image, white balance compensation restoration needs to be performed after the interpolation, and the restoration is performed according to the gain values of red, green, and blue used in the compensation.
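  • A minimal sketch of the compensate-then-restore idea described above, assuming per-channel white balance gains are known: the gains are applied to the patch image before interpolation and the same gains are undone on the pseudo-original image afterwards, so the interpolation itself is not biased by the white balance. The gain values, the channel map, and the direction of the scaling (multiply first and divide later, or the reverse) are illustrative assumptions.

```python
def apply_gains(image, colors, gains):
    """White balance compensation: scale every pixel by the gain of its
    color channel. `colors` gives the channel ('R', 'G', 'B') per pixel."""
    return [[value * gains[colors[r][c]] for c, value in enumerate(row)]
            for r, row in enumerate(image)]

def restore_gains(image, colors, gains):
    """White balance compensation restoration: divide the same gains back out
    after interpolation so the white balance of the image is unchanged."""
    return [[value / gains[colors[r][c]] for c, value in enumerate(row)]
            for r, row in enumerate(image)]

# Usage (illustrative gains): compensate, interpolate, then restore.
gains = {"R": 1.8, "G": 1.0, "B": 1.5}
# patch = apply_gains(patch, patch_colors, gains)
# pseudo_original = run_first_interpolation(patch)       # hypothetical step
# pseudo_original = restore_gains(pseudo_original, bayer_colors, gains)
```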
  • Referring again to FIG. 13, in some embodiments, before step S17 the method includes the step:
  • S16b: performing dead pixel compensation on the patch image.
  • image processing device 10 includes a dead pixel compensation module 16b.
  • That is, step S16b can be implemented by the dead pixel compensation module 16b.
  • It can be understood that, limited by the manufacturing process, the image sensor 20 may have dead pixels.
  • A dead pixel usually presents the same color regardless of changes in sensitivity, and its presence will affect image quality. Therefore, to ensure that the interpolation is accurate and not affected by dead pixels, dead pixel compensation needs to be performed before interpolation.
  • Specifically, during dead pixel compensation, the original pixels may be detected, and
  • when an original pixel is detected to be a dead pixel, compensation may be performed according to the pixel values of the other original pixels of the image pixel unit in which it is located.
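  • As a small illustration of the compensation just described, the sketch below replaces an original pixel flagged as dead with the average of the other original pixels of its 2x2 image pixel unit; how a pixel is detected as dead is outside the scope of this sketch, and the helper name is an assumption.

```python
def compensate_dead_pixel(patch, row, col):
    """Replace the original pixel at (row, col), assumed to be a dead pixel,
    with the mean of the other three pixels of its 2x2 image pixel unit."""
    top, left = (row // 2) * 2, (col // 2) * 2
    neighbours = [patch[r][c]
                  for r in (top, top + 1)
                  for c in (left, left + 1)
                  if (r, c) != (row, col)]
    patch[row][col] = sum(neighbours) / len(neighbours)
```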
  • Referring again to FIG. 13, in some embodiments, before step S17 the method includes the step:
  • S16c: performing crosstalk compensation on the patch image.
  • Referring again to FIG. 14, in some embodiments, the image processing device 10 includes a crosstalk compensation module 16c.
  • That is, step S16c can be implemented by the crosstalk compensation module 16c.
  • Specifically, the four photosensitive pixels in one photosensitive pixel unit 212a are covered by filters of the same color, and there may be differences in sensitivity among the photosensitive pixels, so that fixed-pattern noise may appear in the solid-color areas of the true-color image converted from the pseudo-original image, affecting the quality of the image. Therefore, crosstalk compensation needs to be performed on the patch image.
  • Referring again to FIG. 13, in some embodiments, after step S17 the method includes the step:
  • S18b: performing lens shading correction, demosaicing, noise reduction, and edge sharpening on the pseudo-original image.
  • image processing apparatus 10 further includes a processing module 18b.
  • step S18b can be implemented by the processing module 18b.
  • It can be understood that, after the patch image is converted into the pseudo-original image, the pseudo-original pixels are arranged in a typical Bayer array and can be processed by the processing module 18b; the processing includes lens shading correction, demosaicing, noise reduction, and edge sharpening. In this way, after processing, a true-color image can be obtained and output to the user.
  • For the image outside the fixed area of a frame of image, the second interpolation algorithm is used for image processing.
  • The interpolation process of the second interpolation algorithm is as follows: the average of the pixel values of all original pixels in each image pixel unit outside the fixed area is taken; then it is determined whether the color of the current pixel is the same as that of the associated pixel, and when the colors are the same, the pixel value of the associated pixel is used as the pixel value of the current pixel;
  • when the colors are different, the pixel value of the nearest original pixel in an image pixel unit of the same color as the current pixel is taken as the pixel value of the current pixel.
  • Specifically, referring to FIG. 15, the pixel values of the original pixels in each image pixel unit are first averaged: Ravg = (R1+R2+R3+R4)/4, Gravg = (Gr1+Gr2+Gr3+Gr4)/4, Gbavg = (Gb1+Gb2+Gb3+Gb4)/4, and Bavg = (B1+B2+B3+B4)/4. At this point, the pixel values of R11, R12, R21, and R22 are all Ravg, the pixel values of Gr31, Gr32, Gr41, and Gr42 are all Gravg, the pixel values of Gb13, Gb14, Gb23, and Gb24 are all Gbavg, and the pixel values of B33, B34, B43, and B44 are all Bavg.
  • Taking the current pixel B22 as an example, the associated pixel corresponding to the current pixel B22 is R22. Since the color of the current pixel B22 is different from the color of the associated pixel R22, the pixel value of the current pixel B22 should take the pixel value corresponding to the nearest blue filter, that is, the value Bavg of any of B33, B34, B43, and B44.
  • Similarly, the other colors are also calculated using the second interpolation algorithm to obtain the pixel value of each pseudo-original pixel.
  • In this way, since the second interpolation algorithm is relatively simple, the amount of data to be processed is smaller than for the first interpolation algorithm, and the complexity of the second interpolation algorithm, which includes time complexity and space complexity, is therefore lower than that of the first interpolation algorithm.
  • The second interpolation algorithm can also improve the resolution of the pseudo-original image, but the restoration effect of the image is slightly worse than that of the first interpolation algorithm.
  • Therefore, the first interpolation algorithm is used to process the image within the fixed area, and the second interpolation algorithm is used to process the patch image outside the fixed area, which improves the resolution and the restoration effect of the main part that the user pays attention to, improves the user experience, and at the same time reduces the time required for image processing.
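  • The following is a minimal Python sketch of the second interpolation algorithm as described above: each image pixel unit outside the fixed area is flattened to its average value, and a current pixel whose required Bayer color differs from its associated pixel takes the average of the nearest unit of the required color. The unit-lookup callbacks and the use of Manhattan distance for "nearest" are assumptions used only to keep the example self-contained.

```python
def unit_average(patch, unit_row, unit_col):
    """Average of the four original pixels of one 2x2 image pixel unit."""
    r0, c0 = unit_row * 2, unit_col * 2
    return (patch[r0][c0] + patch[r0][c0 + 1] +
            patch[r0 + 1][c0] + patch[r0 + 1][c0 + 1]) / 4.0

def second_interpolation(patch, row, col, bayer_color, unit_color):
    """Pixel value of the current pixel (row, col) outside the fixed area.

    bayer_color(row, col)          -- color required at this position in the Bayer target
    unit_color(unit_row, unit_col) -- filter color of a 2x2 image pixel unit
    """
    unit_row, unit_col = row // 2, col // 2
    wanted = bayer_color(row, col)

    # Same color as the associated pixel: use the averaged value of its own unit.
    if wanted == unit_color(unit_row, unit_col):
        return unit_average(patch, unit_row, unit_col)

    # Different color: take the average of the nearest unit of the wanted color
    # ("nearest" measured here as Manhattan distance between units).
    best = None
    for ur in range(len(patch) // 2):
        for uc in range(len(patch[0]) // 2):
            if unit_color(ur, uc) == wanted:
                dist = abs(ur - unit_row) + abs(uc - unit_col)
                if best is None or dist < best[0]:
                    best = (dist, ur, uc)
    return unit_average(patch, best[1], best[2])
```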
  • Referring to FIG. 16 and FIG. 21, the control method of the embodiments of the present invention is used to control an electronic device 1000, and the electronic device 1000 includes an imaging device 100.
  • The imaging device includes an image sensor 20, the image sensor 20 includes a photosensitive pixel unit array 212 and a filter unit array 211 disposed on the photosensitive pixel unit array 212, each filter unit 211a covers a corresponding one of the photosensitive pixel units 212a, and each photosensitive pixel unit 212a includes a plurality of photosensitive pixels 2121.
  • The control method includes the following steps: S31: controlling the image sensor to output a patch image; and S33: converting the patch image into a pseudo-original image using the image processing method described above.
  • control method of the embodiment of the present invention may be implemented by the control device 300 of the embodiment of the present invention.
  • the control device 300 includes an output module 30 and an image processing device 10.
  • the output module 30 is for controlling the image sensor to output a patch image
  • the image processing device 10 is for converting the patch image into a pseudo original image.
  • That is, step S31 can be implemented by the output module 30, and step S33 can be implemented by the image processing apparatus 10.
  • the electronic device 1000 includes a touch screen 200, and the control method includes the following steps:
  • S321: converting the patch image into a preview image by using a third interpolation algorithm, where the third interpolation algorithm includes the second interpolation algorithm;
  • S322: controlling the touch screen 200 to display the preview image; and
  • S323: controlling the touch screen 200 to display a prompt graphic to indicate the fixed area.
  • the control device 300 further includes a conversion module 40 , a first display module 50 , and a second display module 60 .
  • The conversion module 40 is configured to convert the patch image into a preview image by using the third interpolation algorithm, where the third interpolation algorithm includes the second interpolation algorithm.
  • the first display module 50 is configured to control the touch screen 200 to display a preview image
  • The second display module 60 is configured to control the touch screen 200 to display a prompt graphic to indicate the fixed area.
  • step S321 can be implemented by the conversion module 40
  • step S322 can be implemented by the first display module 50
  • step S323 can be implemented by the second display module 60.
  • It can be understood that, after the image sensor 20 outputs the patch image, the patch image needs to be converted into a true-color image and previewed on the touch screen 200. The third interpolation algorithm may be used to convert the patch image into the true-color image.
  • The interpolation process of the third interpolation algorithm is as follows: the patch image is first converted into a pseudo-original image of a typical Bayer array by the second interpolation algorithm, and the pseudo-original image is then further interpolated, for example by bilinear interpolation, to obtain a true-color image, which is displayed on the touch screen 200.
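  • The preview path described above chains the second interpolation algorithm with a further, cheaper interpolation. The sketch below shows that structure, with a simple bilinear fill of the green plane standing in for the demosaicing step; the helper names and the wrap-around handling at the borders are assumptions, and the real device may use a different demosaicing method.

```python
import numpy as np

def bilinear_green(bayer, green_mask):
    """Fill the green plane of a Bayer image: positions without a green sample
    take the mean of their four-connected green neighbours (wrap-around at the
    borders is accepted for brevity in this sketch)."""
    green = np.where(green_mask, bayer, 0.0)
    counts = green_mask.astype(float)
    nb_sum = sum(np.roll(green, s, a) for s in (1, -1) for a in (0, 1))
    nb_cnt = sum(np.roll(counts, s, a) for s in (1, -1) for a in (0, 1))
    return np.where(green_mask, bayer, nb_sum / np.maximum(nb_cnt, 1.0))

def third_interpolation(patch, second_interpolation_image, demosaic):
    """Preview path: patch image -> pseudo-original Bayer image (second
    interpolation algorithm) -> true-color preview (further interpolation)."""
    pseudo_original = second_interpolation_image(patch)
    return demosaic(pseudo_original)   # e.g. bilinear demosaicing for the preview
```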
  • Specifically, referring to FIG. 20, the touch screen 200 displays the true-color image to be captured, and the fixed area is delimited by a box.
  • The dashed box in the figure is the enclosed fixed area; by moving the electronic device 1000, the user moves the part of the image that needs to be processed by the first interpolation algorithm into the dashed box. In this way, the image within the fixed area has higher sharpness after processing, and the user has a better visual experience.
  • It should be noted that the fixed area is not limited to being delimited by a box.
  • In some embodiments, the fixed area on the touch screen displays the true-color image, that is, the preview image, while the part outside the fixed area is blurred, for example, processed into a frosted-glass covering effect.
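  • As an illustration of the blurred-outside-the-fixed-area preview mentioned above, the sketch below keeps the fixed area sharp and applies a crude box blur elsewhere as a stand-in for the frosted-glass effect; the mask representation, the blur radius, and the separable box blur are all assumptions.

```python
import numpy as np

def frosted_preview(preview, fixed_mask, radius=8):
    """Blur everything outside the fixed area of an H x W x 3 preview image
    (a simple box blur standing in for the frosted-glass effect)."""
    blurred = preview.astype(float)
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    for axis in (0, 1):                      # separable box blur
        blurred = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, blurred)
    out = preview.astype(float).copy()
    out[~fixed_mask] = blurred[~fixed_mask]  # keep the fixed area sharp
    return out
```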
  • an electronic device 1000 includes an imaging device 100, a touch screen 200, and a control device 300.
  • electronic device 1000 includes a cell phone and a tablet.
  • Both the mobile phone and the tablet computer have a camera, that is, the imaging device 100.
  • the image processing method and the control method of the embodiment of the present invention can be used to obtain a high-resolution picture.
  • The electronic device 1000 may also be another electronic device having a photographing function.
  • The image processing method according to the embodiments of the present invention is one of the designated processing modes with which the electronic device 1000 performs image processing. That is to say, when the user uses the electronic device 1000 to shoot, the user needs to select among the various designated processing modes included in the electronic device 1000.
  • When the user selects the designated processing mode corresponding to the embodiments of the present invention, the electronic device 1000 performs image processing using the image processing method of the embodiments of the present invention.
  • In some embodiments, the imaging device 100 includes a front camera and a rear camera. Both the front camera and the rear camera can implement image processing using the image processing method and the control method of the embodiments of the present invention to enhance the user experience.
  • an electronic device 1000 includes a processor 400, a memory 500, a circuit board 600, a power supply circuit 700, and a housing 800.
  • the circuit board 600 is disposed inside the space enclosed by the housing 800, the processor 400 and the memory 500 are disposed on the circuit board 600;
  • the power supply circuit 700 is used to supply power to various circuits or devices of the electronic device 1000;
  • The memory 500 is used for storing executable program code;
  • the processor 400 runs a program corresponding to the executable program code by reading the executable program code stored in the memory 500, so as to implement the image processing method and the control method of any of the above embodiments of the present invention.
  • For example, the processor 400 can be used to perform the steps of the image processing method described above, and
  • the processor 400 can also be configured to perform the steps of the control method described above.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • More specific examples of computer-readable media include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM).
  • In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
  • portions of the invention may be implemented in hardware, software, firmware or a combination thereof.
  • In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system.
  • For example, if implemented in hardware, as in another embodiment, they may be implemented by any one of the following techniques known in the art, or a combination thereof: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
  • each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • The above integrated modules may be implemented in the form of hardware or in the form of software functional modules.
  • The integrated modules, if implemented in the form of software functional modules and sold or used as independent products, may also be stored in a computer-readable storage medium.
  • the above mentioned storage medium may be a read only memory, a magnetic disk or an optical disk or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Color Television Image Signal Generators (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present invention discloses an image processing method, an image processing device, a control method, a control device, an imaging device, and an electronic device. The image processing method includes: determining whether an associated pixel is located within a fixed area; when the associated pixel is located within the fixed area, determining whether the color of the current pixel is the same as that of the associated pixel; when the color of the current pixel is the same as that of the associated pixel, taking the pixel value of the associated pixel as the pixel value of the current pixel; when the color of the current pixel is different from that of the associated pixel, calculating the pixel value of the current pixel using a first interpolation algorithm; and when the associated pixel is located outside the fixed area, calculating the pixel value of the current pixel using a second interpolation algorithm. The image processing method, image processing device, control method, control device, imaging device, and electronic device of the embodiments of the present invention process the images inside and outside the fixed area with the first and second interpolation algorithms respectively, improving image quality while reducing image processing time and improving the user experience.

Description

Image Processing Method and Device, Control Method and Device, Imaging Device and Electronic Device
Priority Information
This application claims priority to and the benefit of Chinese Patent Application No. 201611079583.7, filed with the State Intellectual Property Office of China on November 29, 2016, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to image processing technologies, and in particular, to an image processing method and device, a control method and device, an imaging device, and an electronic device.
Background Art
An existing image sensor includes a pixel unit array and a filter unit array disposed on the pixel unit array. Each filter unit covers a corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels. In operation, the image sensor can be controlled to expose and output a merged image; the merged image includes a merged pixel array, and the plurality of photosensitive pixels of the same pixel unit are merged and output as one merged pixel. In this way, the signal-to-noise ratio of the merged image can be improved; however, the resolution of the merged image is reduced. Of course, the image sensor can also be controlled to expose and output a high-pixel-count patch image; the patch image includes an original pixel array, and each photosensitive pixel corresponds to one original pixel. However, since the plurality of original pixels corresponding to the same filter unit have the same color, the resolution of the patch image still cannot be improved. Therefore, the high-pixel-count patch image needs to be converted by interpolation into a high-pixel-count pseudo-original image, and the pseudo-original image may include pseudo-original pixels arranged in a Bayer array. The pseudo-original image can be converted into a true-color image by an image processing method and saved. Interpolation can improve the sharpness of the true-color image, but it is resource-intensive and time-consuming, which lengthens the shooting time and degrades the user experience. On the other hand, in practical applications, users often only care about the sharpness of the main part of the true-color image.
Summary of the Invention
Embodiments of the present invention provide an image processing method, a control method, an image processing device, a control device, an imaging device, and an electronic device.
The image processing method of the embodiments of the present invention is used to convert a patch image into a pseudo-original image. The patch image includes image pixel units arranged in a predetermined array, each image pixel unit includes a plurality of original pixels, the pseudo-original image includes pseudo-original pixels arranged in an array, the pseudo-original pixels include a current pixel, the original pixels include an associated pixel corresponding to the current pixel, and the patch image includes a fixed area. The image processing method includes the following steps:
determining whether the associated pixel is located within the fixed area;
when the associated pixel is located within the fixed area, determining whether the color of the current pixel is the same as the color of the associated pixel;
when the color of the current pixel is the same as the color of the associated pixel, taking the pixel value of the associated pixel as the pixel value of the current pixel;
when the color of the current pixel is different from the color of the associated pixel, calculating the pixel value of the current pixel by a first interpolation algorithm according to the pixel values of an associated pixel unit, where the image pixel units include the associated pixel unit, and the associated pixel unit has the same color as the current pixel and is adjacent to the current pixel; and
when the associated pixel is located outside the fixed area, calculating the pixel value of the current pixel by a second interpolation algorithm, where the complexity of the second interpolation algorithm is lower than that of the first interpolation algorithm.
The control method of the embodiments of the present invention is used to control an electronic device. The electronic device includes an imaging device, the imaging device includes an image sensor, the image sensor includes a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array, each filter unit covers a corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels. The control method includes the following steps:
controlling the image sensor to output a patch image; and
converting the patch image into a pseudo-original image using the image processing method described above.
The image processing device of the embodiments of the present invention is used to convert a patch image into a pseudo-original image. The patch image includes image pixel units arranged in a predetermined array, each image pixel unit includes a plurality of original pixels, the pseudo-original image includes pseudo-original pixels arranged in an array, the pseudo-original pixels include a current pixel, the original pixels include an associated pixel corresponding to the current pixel, and the patch image includes a fixed area. The image processing device includes a first determining module, a second determining module, a first calculating module, a second calculating module, and a third calculating module. The first determining module is configured to determine whether the associated pixel is located within the fixed area. The second determining module is configured to determine, when the associated pixel is located within the fixed area, whether the color of the current pixel is the same as the color of the associated pixel. The first calculating module is configured to take the pixel value of the associated pixel as the pixel value of the current pixel when the color of the current pixel is the same as the color of the associated pixel. The second calculating module is configured to calculate, when the color of the current pixel is different from the color of the associated pixel, the pixel value of the current pixel by a first interpolation algorithm according to the pixel values of an associated pixel unit, where the image pixel units include the associated pixel unit, and the associated pixel unit has the same color as the current pixel and is adjacent to the current pixel. The third calculating module is configured to calculate, when the associated pixel is located outside the fixed area, the pixel value of the current pixel by a second interpolation algorithm, where the complexity of the second interpolation algorithm is lower than that of the first interpolation algorithm.
The control device of the embodiments of the present invention is used to control an electronic device. The electronic device includes an imaging device, the imaging device includes an image sensor, the image sensor includes a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array, each filter unit covers a corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels. The control device includes an output module and an image processing device. The output module is configured to control the image sensor to output a patch image, and the image processing device is configured to convert the patch image into a pseudo-original image using the image processing method described above.
The electronic device of the embodiments of the present invention includes the above imaging device, a touch screen, and the above control device.
The electronic device of the embodiments of the present invention includes a housing, a processor, a memory, a circuit board, and a power supply circuit. The circuit board is disposed inside a space enclosed by the housing; the processor and the memory are disposed on the circuit board; the power supply circuit is configured to supply power to each circuit or device of the electronic device; the memory is configured to store executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to execute the image processing method and the control method described above.
With the image processing method, control method, image processing device, control device, imaging device, and electronic device of the embodiments of the present invention, a first interpolation algorithm capable of improving image resolution and definition is applied to the image within the fixed area, and a second interpolation algorithm of lower complexity than the first interpolation algorithm is applied to the image outside the fixed area. On the one hand, the signal-to-noise ratio, resolution, and definition of the main part of the image are improved and the user experience is enhanced; on the other hand, the amount of data and the time required for image processing are reduced.
Additional aspects and advantages of the embodiments of the present invention will be given in part in the following description, will in part become apparent from the following description, or will be learned through practice of the embodiments of the present invention.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flow chart of an image processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the functional modules of an image processing device according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the functional modules of an imaging device according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an image sensor module according to an embodiment of the present invention;
FIG. 5 is a schematic circuit diagram of an image sensor according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a filter unit according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an image sensor according to an embodiment of the present invention;
FIG. 8 is a schematic state diagram of a merged image according to an embodiment of the present invention;
FIG. 9 is a schematic state diagram of a patch image according to an embodiment of the present invention;
FIG. 10 is a schematic state diagram of an image processing method according to an embodiment of the present invention;
FIG. 11 is a schematic flow chart of an image processing method according to some embodiments of the present invention;
FIG. 12 is a schematic diagram of the functional modules of an image processing device according to some embodiments of the present invention;
FIG. 13 is a schematic flow chart of an image processing method according to some embodiments of the present invention;
FIG. 14 is a schematic diagram of the functional modules of an image processing device according to some embodiments of the present invention;
FIG. 15 is a schematic state diagram of an image processing method according to some embodiments of the present invention;
FIG. 16 is a schematic flow chart of a control method according to an embodiment of the present invention;
FIG. 17 is a schematic diagram of the functional modules of a control device according to an embodiment of the present invention;
FIG. 18 is a schematic flow chart of a control method according to some embodiments of the present invention;
FIG. 19 is a schematic diagram of the functional modules of a control device according to some embodiments of the present invention;
FIG. 20 is a schematic state diagram of a control method according to some embodiments of the present invention;
FIG. 21 is a schematic diagram of the functional modules of an electronic device according to some embodiments of the present invention;
FIG. 22 is a schematic diagram of the functional modules of an electronic device according to some embodiments of the present invention.
Detailed Description of the Embodiments
The embodiments of the present invention are described in detail below. Examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary and are intended only to explain the present invention; they should not be construed as limiting the present invention.
Referring to FIG. 1, the image processing method of the embodiments of the present invention is used to convert a patch image into a pseudo-original image. The patch image includes image pixel units arranged in a predetermined array, each image pixel unit includes a plurality of original pixels, the pseudo-original image includes pseudo-original pixels arranged in an array, the pseudo-original pixels include a current pixel, the original pixels include an associated pixel corresponding to the current pixel, and the patch image includes a fixed area. The image processing method includes the following steps:
S11: determining whether the associated pixel is located within the fixed area;
S13: when the associated pixel is located within the fixed area, determining whether the color of the current pixel is the same as the color of the associated pixel;
S15: when the color of the current pixel is the same as the color of the associated pixel, taking the pixel value of the associated pixel as the pixel value of the current pixel;
S17: when the color of the current pixel is different from the color of the associated pixel, calculating the pixel value of the current pixel by a first interpolation algorithm according to the pixel values of an associated pixel unit, where the image pixel units include the associated pixel unit, and the associated pixel unit has the same color as the current pixel and is adjacent to the current pixel; and
S19: when the associated pixel is located outside the fixed area, calculating the pixel value of the current pixel by a second interpolation algorithm, where the complexity of the second interpolation algorithm is lower than that of the first interpolation algorithm.
Referring to FIG. 2, the image processing method of the embodiments of the present invention may be implemented by the image processing device 10.
The image processing device 10 includes a first determining module 11, a second determining module 13, a first calculating module 15, a second calculating module 17, and a third calculating module 19. The first determining module 11 is configured to determine whether the associated pixel is located within the fixed area; the second determining module 13 is configured to determine, when the associated pixel is located within the fixed area, whether the color of the current pixel is the same as the color of the associated pixel; the first calculating module 15 is configured to take the pixel value of the associated pixel as the pixel value of the current pixel when the color of the current pixel is the same as the color of the associated pixel; the second calculating module 17 is configured to calculate, when the color of the current pixel is different from the color of the associated pixel, the pixel value of the current pixel by the first interpolation algorithm according to the pixel values of the associated pixel unit, where the image pixel units include the associated pixel unit, and the associated pixel unit has the same color as the current pixel and is adjacent to the current pixel; and the third calculating module 19 is configured to calculate, when the associated pixel is located outside the fixed area, the pixel value of the current pixel by the second interpolation algorithm, the complexity of which is lower than that of the first interpolation algorithm.
That is to say, step S11 may be implemented by the first determining module 11, step S13 by the second determining module 13, step S15 by the first calculating module 15, step S17 by the second calculating module 17, and step S19 by the third calculating module 19.
Referring to FIG. 3, the imaging device 100 of the embodiments of the present invention includes the image processing device 10 and an image sensor 20, and the image sensor 20 is configured to output the patch image.
It can be understood that the control method of the embodiments of the present invention processes the image within the fixed area and the image outside the fixed area using the first interpolation algorithm and the second interpolation algorithm, respectively. The fixed area is a non-draggable area of fixed size presented on the touch screen of the imaging device during shooting preview; when shooting, the user changes the image capture position of the imaging device 100 so as to place the part of the subject that needs to be processed by the first interpolation algorithm inside the fixed area for image processing. The image within the fixed area is processed with the first interpolation algorithm to improve the definition of that part of the image, while the image outside the fixed area is processed with the second interpolation algorithm, whose complexity is lower than that of the first interpolation algorithm, to reduce the time required for processing the whole frame, thereby improving the user experience while improving image quality.
Referring to FIGS. 4 to 7, the image sensor 20 of the embodiments of the present invention includes a photosensitive pixel unit array 212 and a filter unit array 211 disposed on the photosensitive pixel unit array 212.
Further, the photosensitive pixel unit array 212 includes a plurality of photosensitive pixel units 212a, and each photosensitive pixel unit 212a includes a plurality of adjacent photosensitive pixels 2121. Each photosensitive pixel 2121 includes a photosensitive device 21211 and a transfer tube 21212, where the photosensitive device 21211 may be a photodiode and the transfer tube 21212 may be a MOS transistor.
The filter unit array 211 includes a plurality of filter units 211a, and each filter unit 211a covers a corresponding photosensitive pixel unit 212a.
Specifically, in some examples, the filter unit array 211 includes a Bayer array; that is to say, the four adjacent filter units 211a are a red filter unit, a blue filter unit, and two green filter units, respectively.
Each photosensitive pixel unit 212a corresponds to a filter unit 211a of a single color. If one photosensitive pixel unit 212a includes n adjacent photosensitive devices 21211 in total, one filter unit 211a covers the n photosensitive devices 21211 of that photosensitive pixel unit 212a; the filter unit 211a may be of an integral structure, or may be assembled from n independent sub-filters connected together.
In some embodiments, each photosensitive pixel unit 212a includes four adjacent photosensitive pixels 2121, and two adjacent photosensitive pixels 2121 together constitute a photosensitive pixel sub-unit 2120. The photosensitive pixel sub-unit 2120 further includes a source follower 21213 and an analog-to-digital converter 21214. The photosensitive pixel unit 212a further includes an adder 2122. One end electrode of each transfer tube 21212 in a photosensitive pixel sub-unit 2120 is connected to the cathode electrode of the corresponding photosensitive device 21211, the other end of each transfer tube 21212 is commonly connected to the gate electrode of the source follower 21213, and the source electrode of the source follower 21213 is connected to an analog-to-digital converter 21214. The source follower 21213 may be a MOS transistor. The two photosensitive pixel sub-units 2120 are connected to the adder 2122 through their respective source followers 21213 and analog-to-digital converters 21214.
That is to say, in the image sensor 20 of the embodiments of the present invention, the four adjacent photosensitive devices 21211 in one photosensitive pixel unit 212a share a filter unit 211a of the same color, each photosensitive device 21211 is connected to a transfer tube 21212, two adjacent photosensitive devices 21211 share a source follower 21213 and an analog-to-digital converter 21214, and the four adjacent photosensitive devices 21211 share an adder 2122.
Further, the four adjacent photosensitive devices 21211 are arranged in a 2*2 array, and the two photosensitive devices 21211 in one photosensitive pixel sub-unit 2120 may be located in the same column.
During imaging, when the two photosensitive pixel sub-units 2120, that is, the four photosensitive devices 21211, covered by the same filter unit 211a are exposed simultaneously, the pixels can be merged and a merged image can be output.
Specifically, the photosensitive device 21211 is configured to convert illumination into charge, and the generated charge is proportional to the illumination intensity; the transfer tube 21212 is configured to switch the circuit on or off according to a control signal. When the circuit is on, the source follower 21213 is configured to convert the charge signal generated by the photosensitive device 21211 under illumination into a voltage signal, the analog-to-digital converter 21214 is configured to convert the voltage signal into a digital signal, and the adder 2122 is configured to add the two digital signals and output the sum for processing by the image processing module connected to the image sensor 20.
Referring to FIG. 8, taking a 16M image sensor 20 as an example, the image sensor 20 of the embodiments of the present invention can merge the 16M photosensitive pixels into 4M, that is, output a merged image. After merging, the size of a photosensitive pixel is equivalent to 4 times its original size, which improves the sensitivity of the photosensitive pixels. In addition, since most of the noise in the image sensor 20 is random noise, for the photosensitive pixels before merging, one or two of them may contain noise; after the four photosensitive pixels are merged into one large photosensitive pixel, the influence of the noise on this large pixel is reduced, that is, the noise is attenuated and the signal-to-noise ratio is improved.
However, while the photosensitive pixel size increases, the pixel count decreases, so the resolution of the merged image also decreases.
During imaging, when the four photosensitive devices 21211 covered by the same filter unit 211a are exposed in sequence, a patch image can be output after image processing.
Specifically, the photosensitive device 21211 is configured to convert illumination into charge, and the generated charge is proportional to the illumination intensity; the transfer tube 21212 is configured to switch the circuit on or off according to a control signal. When the circuit is on, the source follower 21213 is configured to convert the charge signal generated by the photosensitive device 21211 under illumination into a voltage signal, and the analog-to-digital converter 21214 is configured to convert the voltage signal into a digital signal for processing by the image processing module connected to the image sensor 20.
Referring to FIG. 9, taking a 16M image sensor 20 as an example, the image sensor 20 of the embodiments of the present invention can also keep the 16M photosensitive pixel output, that is, output a patch image. The patch image includes image pixel units, each image pixel unit includes original pixels arranged in a 2*2 array, and the size of an original pixel is the same as that of a photosensitive pixel. However, since the filter unit 211a covering the four adjacent photosensitive devices 21211 is of a single color, that is to say, although the four photosensitive devices 21211 are exposed separately, the filter units 211a covering them are of the same color, the four adjacent original pixels output by each image pixel unit have the same color, and the resolution of the image still cannot be improved.
The image processing method of the embodiments of the present invention is used to process the output patch image to obtain a pseudo-original image.
It can be understood that when the merged image is output, the four adjacent photosensitive pixels of the same color are output as one merged pixel; thus, the four adjacent merged pixels in the merged image can still be regarded as a typical Bayer array and can be received and processed directly by the image processing module to output a true-color image. When the patch image is output, each photosensitive pixel is output separately; since the four adjacent photosensitive pixels have the same color, the four adjacent original pixels of one image pixel unit have the same color, which is an atypical Bayer array. The image processing module cannot process the atypical Bayer array directly; that is to say, when the image sensor 20 adopts a unified image processing mode, in order to be compatible with true-color image output in both modes, namely true-color image output in the merged mode and true-color image output in the patch mode, the patch image needs to be converted into a pseudo-original image, in other words, the image pixel units of the atypical Bayer array need to be converted into the pixel arrangement of a typical Bayer array.
The pseudo-original image includes pseudo-original pixels arranged in a Bayer array. The pseudo-original pixels include a current pixel, and the original pixels include an associated pixel corresponding to the current pixel.
For the image within the fixed area of a frame of patch image, the fixed area of the patch image is first converted into a Bayer image array, and then image processing is performed using the first interpolation algorithm. Specifically, referring to FIG. 10 and taking FIG. 10 as an example, the current pixels are R3'3' and R5'5', and the corresponding associated pixels are R33 and R55, respectively.
When obtaining the current pixel R3'3', since R3'3' has the same color as its associated pixel R33, the pixel value of R33 is used directly as the pixel value of R3'3' during conversion.
When obtaining the current pixel R5'5', since the color of R5'5' is different from that of its associated pixel B55, the pixel value of B55 obviously cannot be used directly as the pixel value of R5'5'; instead, the value needs to be calculated by interpolation from the associated pixel unit of R5'5'.
It should be noted that the pixel values above and below should be broadly understood as the color attribute values of a pixel, for example, color values.
The associated pixel unit includes a plurality of original pixels, for example four, in the image pixel units that have the same color as the current pixel and are adjacent to the current pixel.
It should be noted that "adjacent" here should be understood broadly. Taking FIG. 10 as an example, the associated pixel corresponding to R5'5' is B55, and the image pixel units adjacent to the image pixel unit where B55 is located and containing the associated pixel unit of the same color as R5'5' are the image pixel units where R44, R74, R47, and R77 are located, not other red image pixel units spatially farther from the image pixel unit where B55 is located. The red original pixels spatially closest to B55 are R44, R74, R47, and R77, respectively; that is to say, the associated pixel unit of R5'5' consists of R44, R74, R47, and R77, and R5'5' has the same color as and is adjacent to R44, R74, R47, and R77.
In this way, for current pixels in different situations, the original pixels are converted into pseudo-original pixels in different manners, thereby converting the patch image into a pseudo-original image. Since a filter with a special Bayer array structure is used when capturing the image, the image signal-to-noise ratio is improved; and during image processing, the patch image is interpolated by the first interpolation algorithm, which improves the resolution and definition of the image.
请参阅图11,在某些实施方式中,步骤S17包括以下步骤:
S171:计算关联像素各个方向上的渐变量;
S172:计算关联像素各个方向上的权重;和
S173:根据渐变量及权重计算当前像素的像素值。
请参阅图12,在某些实施方式中,第二计算模块17包括第一计算单元171、第二计算单元172、第三计算单元173。第一计算单元171用于计算关联像素各个方向上的渐变量,第二计算单元172用于计算关联像素各个方向上的权重,第三计算单元173用于根据渐变量及权重计算当前像素的像素值。
也即是说,步骤S171可以由第一计算单元171实现,步骤S172可以由第二计算单元172实现,步骤S173可以由第三计算单元173实现。
具体地,第一插值算法是参考图像在不同方向上的能量渐变,将与当前像素对应的颜色相同且相邻的关联像素单元依据在不同方向上的渐变权重大小,通过线性插值的方式计算得到当前像素的像素值。其中,在能量变化量较小的方向上,参考比重较大,因此,在插值计算时的权重较大。
在某些示例中,为方便计算,仅考虑水平和垂直方向。
R5’5’由R44、R74、R47和R77插值得到,而在水平和垂直方向上并不存在颜色相同的原始像素,因此需根据关联像素单元计算在水平和垂直方向上该颜色的分量。其中,水平方向上的分量为R45和R75、垂直方向的分量为R54和R57可以分别通过R44、R74、R47和R77计算得到。
具体地,R45=R44*2/3+R47*1/3,R75=2/3*R74+1/3*R77,R54=2/3*R44+1/3*R74,R57=2/3*R47+1/3*R77。
然后,分别计算在水平和垂直方向的渐变量及权重,也即是说,根据该颜色在不同方向的渐变量,以确定在插值时不同方向的参考权重,在渐变量小的方向,权重较大,而在渐变量较大的方向,权重较小。其中,在水平方向的渐变量X1=|R45-R75|,在垂直方向上的渐变量X2=|R54-R57|,W1=X1/(X1+X2),W2=X2/(X1+X2)。
如此,根据上述可计算得到,R5’5’=(2/3*R45+1/3*R75)*W2+(2/3*R54+1/3*R57)*W1。可以理解,若X1大于X2,则W1大于W2,因此计算时水平方向的权重为W2,而垂直方向的权重为W1,反之亦然。
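为便于理解,下面以一段示意性代码复现上述R5’5’的计算过程(输入假设为已从关联像素单元取得的R44、R47、R74、R77的像素值;代码中为避免除零额外引入了极小量eps,这是示例自行添加的假设,并非上述公式的一部分):

    def interp_r55(r44, r47, r74, r77, eps=1e-12):
        # 先由关联像素单元计算水平、垂直方向上的颜色分量
        r45 = 2 / 3 * r44 + 1 / 3 * r47
        r75 = 2 / 3 * r74 + 1 / 3 * r77
        r54 = 2 / 3 * r44 + 1 / 3 * r74
        r57 = 2 / 3 * r47 + 1 / 3 * r77
        # 各方向上的渐变量
        x1 = abs(r45 - r75)
        x2 = abs(r54 - r57)
        # 渐变量小的方向在插值中所占权重大
        w1 = x1 / (x1 + x2 + eps)
        w2 = x2 / (x1 + x2 + eps)
        return (2 / 3 * r45 + 1 / 3 * r75) * w2 + (2 / 3 * r54 + 1 / 3 * r57) * w1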
如此,可根据第一插值算法计算得到当前像素的像素值。依据上述对关联像素的处理方式,可将原始像素转换为呈典型拜耳阵列排布的仿原像素,也即是说,相邻的四个2*2阵列的仿原像素包括一个红色仿原像素,两个绿色仿原像素和一个蓝色仿原像素。
需要说明的是,第一插值算法包括但不限于本实施例中公开的在计算时仅考虑垂直和水平两个方向相同颜色的像素值的方式,例如还可以参考其他颜色的像素值。
请参阅图13,在某些实施方式中,在步骤S17前包括步骤:
S16a:对色块图像做白平衡补偿;
步骤S17后包括步骤:
S18a:对仿原图像做白平衡补偿还原。
请参阅图14,在某些实施方式中,图像处理装置10包括白平衡补偿模块16a和白平衡补偿还原模块18a。白平衡补偿模块16a用于对色块图像做白平衡补偿,白平衡补偿还原模块18a用于对仿原图像做白平衡补偿还原。
也即是说,步骤S16a可以由白平衡补偿模块16a实现,步骤S18a可以由白平衡补偿还原模块18a实现。
具体地,在一些示例中,在将色块图像转换为仿原图像的过程中,在插值时,红色和蓝色仿原像素往往不仅参考与其颜色相同的通道的原始像素的颜色,还会参考绿色通道的原始像素的颜色权重,因此,在插值前需要进行白平衡补偿,以在插值计算中排除白平衡的影响。为了不破坏色块图像的白平衡,在插值之后需要对仿原图像进行白平衡补偿还原,还原时根据在补偿中红色、绿色及蓝色的增益值进行还原。
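下面给出白平衡补偿与补偿还原的一个极简示意(补偿的方向,即乘以增益还是除以增益,取决于具体实现,此处以"补偿时按通道乘以增益、还原时按同一组增益除回"为假设;wb_compensate、gains等名称均为示例自拟):

    def wb_compensate(channel_planes, gains):
        # channel_planes与gains均为以'R'、'G'、'B'为键的字典(示例假设的数据组织方式)
        # 插值前:按通道乘以白平衡增益,排除白平衡对插值计算的影响
        return {c: channel_planes[c] * gains[c] for c in ('R', 'G', 'B')}

    def wb_restore(channel_planes, gains):
        # 插值后:按补偿时使用的同一组增益除回,完成白平衡补偿还原
        return {c: channel_planes[c] / gains[c] for c in ('R', 'G', 'B')}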
请再参阅图13,在某些实施方式中,步骤S17前包括步骤:
S16b:对色块图像做坏点补偿。
请再参阅图14,在某些实施方式中,图像处理装置10包括坏点补偿模块16b。
也即是说,步骤S16b可以由坏点补偿模块16b实现。
可以理解,受限于制造工艺,图像传感器20可能会存在坏点,坏点通常不随感光度变化而始终呈现同一颜色,坏点的存在将影响图像质量,因此,为保证插值的准确,不受坏点的影响,需要在插值前进行坏点补偿。
具体地,坏点补偿过程中,可以对原始像素进行检测,当检测到某一原始像素为坏点时,可根据其所在的图像像素单元的其他原始像素的像素值进行坏点补偿。
如此,可排除坏点对插值处理的影响,提高图像质量。
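以下为坏点补偿思路的一个示意草稿(坏点的检测策略与阈值并非本文在此限定的内容,示例仅演示"用同一图像像素单元内其余原始像素的均值替换坏点"这一做法):

    import numpy as np

    def compensate_dead_pixel(unit: np.ndarray, i: int, j: int) -> float:
        # unit为一个2*2图像像素单元的原始像素值,(i, j)为已被检测为坏点的位置
        mask = np.ones(unit.shape, dtype=bool)
        mask[i, j] = False
        # 用同一单元内其余原始像素的均值替换坏点值
        return float(unit[mask].mean())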
请再参阅图13,在某些实施方式中,步骤S17前包括步骤:
S16c:对色块图像做串扰补偿。
请再参阅图14,在某些实施方式中,图像处理装置10包括串扰补偿模块16c。
也即是说,步骤S16c可以由串扰补偿模块16c实现。
具体地,一个感光像素单元212a中的四个感光像素覆盖同一颜色的滤光片,而感光像素之间可能存在感光度的差异,以至于由仿原图像转换输出的真彩图像中的纯色区域会出现固定型谱噪声,影响图像的质量。因此,需要对色块图像进行串扰补偿。
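本文未在此展开串扰补偿的具体算法,下面仅给出一种常见做法的示意草稿(假设已通过均匀光源预先标定出2*2位置增益gains_2x2,按位置施加增益使同一滤光片单元内四个感光像素的响应趋于一致;标定方式与增益形式均为示例假设):

    import numpy as np

    def crosstalk_compensate(block: np.ndarray, gains_2x2: np.ndarray) -> np.ndarray:
        # block为H x W的色块图像,gains_2x2为预先标定的2*2位置增益
        h, w = block.shape
        gain_map = np.tile(gains_2x2, (h // 2, w // 2))  # 将位置增益平铺到整幅图像
        return block * gain_map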
请再参阅图13,在某些实施方式中,步骤S17后包括步骤:
S18b:对仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
请再参阅图14,在某些实施方式中,图像处理装置10还包括处理模块18b。
也即是说,步骤S18b可以由处理模块18b实现。
可以理解,将色块图像转换为仿原图像后,仿原像素排布为典型的拜耳阵列,可采用处理模块18b进行处理,处理过程中包括镜片阴影校正、去马赛克、降噪和边缘锐化处理,如此,处理后即可得到真彩图像输出给用户。
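下面以可替换回调的形式给出该后处理顺序的示意(各步骤默认占位为恒等函数,实际使用时应替换为相应算法;函数名均为示例自拟,并非特定库的接口):

    def post_process(pseudo_bayer,
                     lens_shading_correction=lambda x: x,  # 占位:镜片阴影校正
                     demosaic=lambda x: x,                  # 占位:去马赛克
                     denoise=lambda x: x,                   # 占位:降噪
                     edge_sharpen=lambda x: x):             # 占位:边缘锐化
        # 仅示意各处理步骤的先后顺序
        img = lens_shading_correction(pseudo_bayer)
        img = demosaic(img)
        img = denoise(img)
        img = edge_sharpen(img)
        return img  # 真彩图像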
对于一帧色块图像的固定区域外的图像,需利用第二插值算法进行图像处理。第二插值算法的插值过程是:对固定区域外的每一个图像像素单元中所有的原始像素的像素值取均值,随后判断当前像素与关联像素的颜色是否相同,在当前像素与关联像素颜色相同时,取关联像素的像素值作为当前像素的像素值,在当前像素与关联像素颜色不同时,取最邻近的与当前像素颜色相同的图像像素单元中的原始像素的像素值作为当前像素的像素值。
具体地,请参阅图15,以图15为例,先对每个图像像素单元中所有原始像素的像素值取均值:Ravg=(R1+R2+R3+R4)/4,Gravg=(Gr1+Gr2+Gr3+Gr4)/4,Gbavg=(Gb1+Gb2+Gb3+Gb4)/4,Bavg=(B1+B2+B3+B4)/4。此时,R11、R12、R21、R22的像素值均为Ravg,Gr31、Gr32、Gr41、Gr42的像素值均为Gravg,Gb13、Gb14、Gb23、Gb24的像素值均为Gbavg,B33、B34、B43、B44的像素值均为Bavg。以当前像素B22为例,当前像素B22对应的关联像素为R22,由于当前像素B22的颜色与关联像素R22的颜色不同,因此当前像素B22的像素值应取最邻近的蓝色滤光片对应的像素值,即取B33、B34、B43、B44中任一像素的值(均为Bavg)。同样地,其他颜色也采用第二插值算法进行计算,以得到各个像素的像素值。
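下面给出第二插值算法中"按图像像素单元取均值"这一步的示意代码(假设色块图像以二维numpy数组表示,每2*2个原始像素构成一个图像像素单元):

    import numpy as np

    def unit_average(block: np.ndarray) -> np.ndarray:
        # 对每个2*2图像像素单元内的原始像素取均值,并写回该单元的四个位置,
        # 对应文中的Ravg、Gravg、Gbavg、Bavg
        h, w = block.shape
        out = block.astype(np.float64).copy()
        for r in range(0, h, 2):
            for c in range(0, w, 2):
                out[r:r + 2, c:c + 2] = block[r:r + 2, c:c + 2].mean()
        return out

取均值之后,再按上文规则为每个当前像素赋值:颜色相同时直接取关联像素的值,颜色不同时取最邻近同色图像像素单元的均值(如B22取Bavg)。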
如此,由于第二插值算法较为简单,所需处理的数据相对于第一插值算法较少,且第二插值算法的复杂度包括时间复杂度和空间复杂度,因此第二插值算法的复杂度相较于第一插值算法较低。第二插值算法同样能提升仿原图像的解析度,但图像的仿原效果比第一插值算法的仿原效果略差。因此,用第一插值算法处理固定区域内的图像,而采用第二插值算法处理固定区域外的色块图像,提升用户关注的主体部分的解析度和仿原效果,提升了用户体验,同时减少了图像处理所需的时间。
请参阅图16及图21,本发明实施方式的控制方法,用于控制电子装置1000,电子装置1000包括成像装置100,成像装置100包括图像传感器20,图像传感器20包括感光像素单元阵列212和设置在感光像素单元阵列212上的滤光片单元阵列211,每个滤光片单元211a覆盖对应一个感光像素单元212a,每个感光像素单元212a包括多个感光像素2121,控制方法包括以下步骤:
S31:控制图像传感器输出色块图像;和
S33:采用上述的图像处理方法将色块图像转换成仿原图像。
请参阅图17,本发明实施方式的控制方法可以由本发明实施方式的控制装置300实现。
控制装置300包括输出模块30和图像处理装置10。输出模块30用于控制图像传感器输出色块图像,图像处理装置10用于将色块图像转换成仿原图像。
也即是说,步骤S31可以由输出模块30实现,步骤S33可以由图像处理装置10实现。
请参阅图18及图21,在某些实施方式中,电子装置1000包括触摸屏200,控制方法包括以下步骤:
S321:采用第三插值算法将色块图像转换成预览图像,第三插值算法包括第二插值算法;
S322:控制触摸屏200显示预览图像;和
S323:控制触摸屏200显示提示图形以显示固定区域。
请参阅图19,在某些实施方式中,控制装置300还包括转换模块40、第一显示模块50和第二显示模块60。转换模块40用于采用第三插值算法将色块图像转换成预览图像,第三插值算法包括第二插值算法,第一显示模块50用于控制触摸屏200显示预览图像,第二显示模块60用于控制触摸屏200显示提示图形以显示固定区域。
也即是说,步骤S321可以由转换模块40实现,步骤S322可以由第一显示模块50实现,步骤S323可以由第二显示模块60实现。
可以理解,图像传感器20输出色块图像后,需要将色块图像转换成真彩图像并在触摸屏200上实现预览。其中,将色块图像转换成真彩图像可采用第三插值算法,第三插值算法的插值过程是:先利用第二插值算法将色块图像转换成典型的拜耳阵列的仿原图像,再对仿原图像做进一步插值,例如采用双线性插值法,以得到真彩图像,并在触摸屏200上进行显示。
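作为由仿原图像得到预览真彩图像这一步的示意,下面给出一段基于scipy的双线性插值(去马赛克)草稿,其中假设仿原图像为RGGB排布的典型拜耳阵列;该实现仅用于说明原理,并非本实施方式限定的算法:

    import numpy as np
    from scipy.ndimage import convolve

    def bilinear_demosaic_rggb(bayer: np.ndarray) -> np.ndarray:
        # 假设R位于偶数行偶数列,B位于奇数行奇数列,其余位置为G
        h, w = bayer.shape
        r_mask = np.zeros((h, w), dtype=bool)
        r_mask[0::2, 0::2] = True
        b_mask = np.zeros((h, w), dtype=bool)
        b_mask[1::2, 1::2] = True
        g_mask = ~(r_mask | b_mask)
        kernel = np.array([[1., 2., 1.],
                           [2., 4., 2.],
                           [1., 2., 1.]])
        out = np.zeros((h, w, 3))
        for ch, mask in enumerate((r_mask, g_mask, b_mask)):
            known = np.where(mask, bayer.astype(np.float64), 0.0)
            num = convolve(known, kernel, mode='mirror')
            den = convolve(mask.astype(np.float64), kernel, mode='mirror')
            # 已采样位置保留原值,缺失位置用邻近同色像素的加权平均填充
            out[..., ch] = np.where(mask, bayer, num / den)
        return out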
具体地,请参阅图20,触摸屏200显示待拍摄的真彩图像,固定区域采用方框进行限定,图中虚线方框即为围成的固定区域,用户通过移动电子装置1000将需要利用第一插值算法处理的图像移入该虚线方框内。如此,固定区域内的图像经处理后有更高的清晰度,用户有更好的视觉体验。
需要说明的是,固定区域不限于利用方框进行区域限定,在某些实施方式中,触摸屏上固定区域显示真彩图像即预览图像,固定区域外的部分则做模糊处理,例如将固定区域外的部分处理成毛玻璃覆盖的效果。
请参阅图21,本发明实施方式的电子装置1000包括成像装置100、触摸屏200和控制装置300。
在某些实施方式中,电子装置1000包括手机和平板电脑。
手机和平板电脑均带有摄像头即成像装置100,用户使用手机或平板电脑进行拍摄时,可以采用本发明实施方式的图像处理方法和控制方法,以得到高解析度的图片。
需要说明的是,电子装置1000也包括其他具有拍摄功能的电子设备。本发明实施方式的图像处理方法是电子装置1000进行图像处理的指定处理模式之一。也即是说,用户利用电子装置1000进行拍摄时,需要对电子装置1000中包含的各种指定处理模式进行选择,当用户选择本发明实施方式的指定处理模式时,电子装置1000采用本发明实施方式的图像处理方法进行图像处理。
在某些实施方式中,成像装置100包括前置相机和后置相机。
可以理解,许多成像装置100包括前置相机和后置相机,前置相机和后置相机均可采用本发明实施方式的图像处理方法和控制方法实现图像处理,以提升用户体验。
请参阅图22,本发明实施方式的电子装置1000包括处理器400、存储器500、电路板600、电源电路700和壳体800。其中,电路板600安置在壳体800围成的空间内部,处理器400和存储器500设置在电路板600上;电源电路700用于为电子装置1000的各个电路或器件供电;存储器500用于存储可执行程序代码;处理器400通过读取存储器500中存储的可执行程序代码来运行与可执行程序代码对应的程序,以实现上述本发明任一实施方式的图像处理方法和控制方法。
例如,处理器400可以用于执行以下步骤:
S11:判断关联像素是否位于固定区域内;
S13:在关联像素位于固定区域内时判断当前像素的颜色与关联像素的颜色是否相同;
S15:在当前像素的颜色与关联像素的颜色相同时,将关联像素的像素值作为当前像素的像素值;
S17:在当前像素的颜色与关联像素的颜色不同时,根据关联像素单元的像素值通过第一插值算法计算当前像素的像素值,图像像素单元包括关联像素单元,关联像素单元的颜色与当前像素相同且与当前像素相邻;和
S19:在关联像素位于固定区域外时,通过第二插值算法计算当前像素的像素值,第二插值算法的复杂度小于第一插值算法。
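上述步骤S11~S19的分支逻辑可概括为如下示意代码(first_interp与second_interp为假设的回调,分别对应第一插值算法与第二插值算法的具体计算):

    def compute_current_pixel(assoc_in_fixed_region, same_color, assoc_value,
                              first_interp, second_interp):
        # S11:判断关联像素是否位于固定区域内
        if assoc_in_fixed_region:
            # S13/S15:颜色相同时,直接取关联像素的像素值
            if same_color:
                return assoc_value
            # S17:颜色不同时,按第一插值算法计算
            return first_interp()
        # S19:关联像素位于固定区域外时,按第二插值算法计算
        return second_interp()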
再例如,处理器400还可以用于执行以下步骤:
S31:控制图像传感器20输出色块图像;和
S33:采用上述的图像处理方法将色块图像转换成仿原图像。
在本说明书的描述中,参考术语“一个实施方式”、“一些实施方式”、“示意性实施方式”、“示例”、“具体示例”、或“一些示例”等的描述意指结合所述实施方式或示例描述的具体特征、结构、材料或者特点包含于本发明的至少一个实施方式或示例中。在本说明书中,对上述术语的示意性表述不一定指的是相同的实施方式或示例。而且,描述的具体特征、结构、材料或者特点可以在任何的一个或多个实施方式或示例中以合适的方式结合。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于执行特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本发明的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本发明的实施例所属技术领域的技术人员所理解。
在流程图中表示或在此以其他方式描述的逻辑和/或步骤,例如,可以被认为是用于执行逻辑功能的可执行指令的定序列表,可以具体实现在任何计算机可读介质中,以供指令执行系统、装置或设备(如基于计算机的系统、包括处理器的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用,或结合这些指令执行系统、装置或设备而使用。就本说明书而言,"计算机可读介质"可以是任何可以包含、存储、通信、传播或传输程序以供指令执行系统、装置或设备或结合这些指令执行系统、装置或设备而使用的装置。计算机可读介质的更具体的示例(非穷尽性列表)包括以下:具有一个或多个布线的电连接部(电子装置),便携式计算机盘盒(磁装置),随机存取存储器(RAM),只读存储器(ROM),可擦除可编程只读存储器(EPROM或闪速存储器),光纤装置,以及便携式光盘只读存储器(CDROM)。另外,计算机可读介质甚至可以是可在其上打印所述程序的纸或其他合适的介质,因为可以例如通过对纸或其他介质进行光学扫描,接着进行编辑、解译或必要时以其他合适方式进行处理来以电子方式获得所述程序,然后将其存储在计算机存储器中。
应当理解,本发明的各部分可以用硬件、软件、固件或它们的组合来实现。在上述实施方式中,多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来实现。例如,如果用硬件来实现,和在另一实施方式中一样,可用本领域公知的下列技术中的任一项或它们的组合来实现:具有用于对数据信号实现逻辑功能的逻辑门电路的离散逻辑电路,具有合适的组合逻辑门电路的专用集成电路,可编程门阵列(PGA),现场可编程门阵列(FPGA)等。
本技术领域的普通技术人员可以理解,实现上述实施方式方法所携带的全部或部分步骤可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,该程序在执行时,包括方法实施例的步骤之一或其组合。
此外,在本发明各个实施例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。
上述提到的存储介质可以是只读存储器,磁盘或光盘等。尽管上面已经示出和描述了本发明的实施例,可以理解的是,上述实施例是示例性的,不能理解为对本发明的限制,本领域的普通技术人员在本发明的范围内可以对上述实施例进行变化、修改、替换和变型。

Claims (25)

  1. 一种图像处理方法,用于将色块图像转换成仿原图像,其特征在于,所述色块图像包括预定阵列排布的图像像素单元,所述图像像素单元包括多个原始像素,所述仿原图像包括阵列排布的仿原像素,所述仿原像素包括当前像素,所述原始像素包括与所述当前像素对应的关联像素,所述色块图像包括固定区域,所述图像处理方法包括以下步骤:
    判断所述关联像素是否位于所述固定区域内;
    在所述关联像素位于所述固定区域内时判断所述当前像素的颜色与所述关联像素的颜色是否相同;
    在所述当前像素的颜色与所述关联像素的颜色相同时,将所述关联像素的像素值作为所述当前像素的像素值;
    在所述当前像素的颜色与所述关联像素的颜色不同时,根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值,所述图像像素单元包括所述关联像素单元,所述关联像素单元的颜色与所述当前像素相同且与所述当前像素相邻;和
    在所述关联像素位于所述固定区域外时,通过第二插值算法计算所述当前像素的像素值,所述第二插值算法的复杂度小于所述第一插值算法。
  2. 根据权利要求1所述的图像处理方法,其特征在于,所述预定阵列包括拜耳阵列。
  3. 根据权利要求1所述的图像处理方法,其特征在于,所述图像像素单元包括2*2阵列的所述原始像素。
  4. 根据权利要求1所述的图像处理方法,其特征在于,所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤包括以下步骤:
    计算所述关联像素各个方向上的渐变量;
    计算所述关联像素各个方向上的权重;和
    根据所述渐变量及所述权重计算所述当前像素的像素值。
  5. 根据权利要求1所述的图像处理方法,其特征在于,所述图像处理方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤前包括以下步骤:
    对所述色块图像做白平衡补偿;
    所述图像处理方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤后包括以下步骤:
    对所述仿原图像做白平衡补偿还原。
  6. 根据权利要求1所述的图像处理方法,其特征在于,所述图像处理方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤前包括以下步骤:
    对所述色块图像做坏点补偿。
  7. 根据权利要求1所述的图像处理方法,其特征在于,所述图像处理方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤前包括以下步骤:
    对所述色块图像做串扰补偿。
  8. 根据权利要求1所述的图像处理方法,其特征在于,所述图像处理方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤后包括如下步骤:
    对所述仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
  9. 一种控制方法,用于控制电子装置,其特征在于,所述电子装置包括成像装置,所述成像装置包括图像传感器,所述图像传感器包括感光像素单元阵列和设置在所述感光像素单元阵列上的滤光片单元阵列,每个所述滤光片单元覆盖对应一个所述感光像素单元,每个所述感光像素单元包括多个感光像素,所述控制方法包括以下步骤:
    控制所述图像传感器输出色块图像;和
    采用根据权利要求1-8任意一项所述的图像处理方法将所述色块图像转换成仿原图像。
  10. 根据权利要求9所述的控制方法,其特征在于,所述电子装置包括触摸屏,所述控制方法包括以下步骤:
    采用所述第三插值算法将所述色块图像转换成预览图像,所述第三插值算法包括所述第二插值算法;
    控制所述触摸屏显示所述预览图像;和
    控制所述触摸屏显示提示图形以显示所述固定区域。
  11. 一种图像处理装置,用于将色块图像转换成仿原图像,其特征在于,所述色块图像包括预定阵列排布的图像像素单元,所述图像像素单元包括多个原始像素,所述仿原图像包括阵列排布的仿原像素,所述仿原像素包括当前像素,所述原始像素包括与所述当前像素对应的关联像素,所述色块图像包括固定区域,所述图像处理装置包括:
    第一判断模块,所述第一判断模块用于判断所述关联像素是否位于所述固定区域内;
    第二判断模块,所述第二判断模块用于在所述关联像素位于所述固定区域内时判断所述当前像素的颜色与所述关联像素的颜色是否相同;
    第一计算模块,所述第一计算模块用于在所述当前像素的颜色与所述关联像素的颜色相同时,将所述关联像素的像素值作为所述当前像素的像素值;
    第二计算模块,所述第二计算模块用于在所述当前像素的颜色与所述关联像素的颜色不同时,根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值,所述图像像素单元包括所述关联像素单元,所述关联像素单元的颜色与所述当前像素相同且与所述当前像素相邻;和
    第三计算模块,所述第三计算模块用于在所述关联像素位于所述固定区域外时,通过第二插值算法计算所述当前像素的像素值,所述第二插值算法的复杂度小于所述第一插值算法。
  12. 根据权利要求11所述的图像处理装置,其特征在于,所述预定阵列包括拜耳阵列。
  13. 根据权利要求11所述的图像处理装置,其特征在于,所述图像像素单元包括2*2阵列的所述原始像素。
  14. 根据权利要求11所述的图像处理装置,其特征在于,所述第二计算模块包括:
    第一计算单元,所述第一计算单元用于计算所述关联像素各个方向上的渐变量;
    第二计算单元,所述第二计算单元用于计算所述关联像素各个方向上的权重;和
    第三计算单元,所述第三计算单元用于根据所述渐变量及所述权重计算所述当前像素的像素值。
  15. 根据权利要求11所述的图像处理装置,其特征在于,所述图像处理装置包括:
    白平衡补偿模块,所述白平衡补偿模块用于对所述色块图像做白平衡补偿;
    白平衡还原模块,所述白平衡还原模块用于对所述仿原图像做白平衡补偿还原。
  16. 根据权利要求11所述的图像处理装置,其特征在于,所述图像处理装置包括:
    坏点补偿模块,所述坏点补偿模块用于对所述色块图像做坏点补偿。
  17. 根据权利要求11所述的图像处理装置,其特征在于,所述图像处理装置包括:
    串扰补偿模块,所述串扰补偿模块用于对所述色块图像做串扰补偿。
  18. 根据权利要求11所述的图像处理装置,其特征在于,所述图像处理装置包括:
    处理模块,所述处理模块用于对所述仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
  19. 一种成像装置,其特征在于包括:
    权利要求11-18任意一项所述的图像处理装置;和
    图像传感器,用于产生所述色块图像。
  20. 一种控制装置,用于控制电子装置,其特征在于,所述电子装置包括成像装置,所述成像装置包括图像传感器,所述图像传感器包括感光像素单元阵列和设置在所述感光像素单元阵列上的滤光片单元阵列,每个所述滤光片单元覆盖对应一个所述感光像素单元,每个所述感光像素单元包括多个感光像素,所述控制装置包括:
    输出模块,所述输出模块用于控制所述图像传感器输出色块图像;和
    图像处理装置,所述图像处理装置用于采用权利要求1-8任意一项所述的图像处理方法将所述色块图像转换成仿原图像。
  21. 根据权利要求20所述的控制装置,其特征在于,所述电子装置包括触摸屏,所述控制装置包括:
    转换模块,所述转换模块用于采用所述第三插值算法将所述色块图像转换成预览图像,所述第三插值算法包括所述第二插值算法;
    第一显示模块,所述第一显示模块用于控制所述触摸屏显示所述预览图像;和
    第二显示模块,所述第二显示模块用于控制所述触摸屏显示提示图形以显示所述固定区域。
  22. 一种电子装置,其特征在于包括:
    权利要求19所述的成像装置;
    触摸屏;和
    权利要求20或21所述的控制装置。
  23. 根据权利要求22所述的电子装置,其特征在于,所述电子装置包括手机和平板电脑。
  24. 根据权利要求22所述的电子装置,其特征在于,所述成像装置包括前置相机或后置相机。
  25. 一种电子装置,包括壳体、处理器、存储器、电路板和电源电路,其特征在于,所述电路板安置在所述壳体围成的空间内部,所述处理器和所述存储器设置在所述电路板上;所述电源电路用于为所述电子装置的各个电路或器件供电;所述存储器用于存储可执行程序代码;所述处理器通过读取所述存储器中存储的可执行程序代码来运行与所述可执行程序代码对应的程序,以用于执行权利要求1至8中任意一项所述的图像处理方法和权利要求9或10所述的控制方法。
PCT/CN2017/081920 2016-11-29 2017-04-25 图像处理方法及装置、控制方法及装置、成像及电子装置 WO2018098983A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611079583.7A CN106507068B (zh) 2016-11-29 2016-11-29 图像处理方法及装置、控制方法及装置、成像及电子装置
CN201611079583.7 2016-11-29

Publications (1)

Publication Number Publication Date
WO2018098983A1 true WO2018098983A1 (zh) 2018-06-07

Family

ID=58328104

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/081920 WO2018098983A1 (zh) 2016-11-29 2017-04-25 图像处理方法及装置、控制方法及装置、成像及电子装置

Country Status (5)

Country Link
US (2) US10440265B2 (zh)
EP (1) EP3327664B1 (zh)
CN (1) CN106507068B (zh)
ES (1) ES2757506T3 (zh)
WO (1) WO2018098983A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106507019B (zh) * 2016-11-29 2019-05-10 Oppo广东移动通信有限公司 控制方法、控制装置、电子装置
CN106507068B (zh) 2016-11-29 2018-05-04 广东欧珀移动通信有限公司 图像处理方法及装置、控制方法及装置、成像及电子装置
CN107743199B (zh) * 2017-10-30 2020-05-15 努比亚技术有限公司 图像处理方法、移动终端及计算机可读存储介质
CN107808361B (zh) * 2017-10-30 2021-08-10 努比亚技术有限公司 图像处理方法、移动终端及计算机可读存储介质
CN111340863B (zh) * 2019-08-29 2023-12-05 杭州海康慧影科技有限公司 一种摩尔纹像素点确定方法、装置及电子设备
US11776088B2 (en) * 2020-03-11 2023-10-03 Samsung Electronics Co., Ltd. Electronic device generating image data and converting the generated image data and operating method of the electronic device
CN111355937B (zh) * 2020-03-11 2021-11-16 北京迈格威科技有限公司 图像处理方法、装置和电子设备

Family Cites Families (91)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3592147B2 (ja) 1998-08-20 2004-11-24 キヤノン株式会社 固体撮像装置
JP4317624B2 (ja) 1999-09-10 2009-08-19 メディア・テック・ユーエスエイ・インコーポレーテッド 画像処理装置
TW563365B (en) 2001-09-24 2003-11-21 Winbond Electronics Corp Image compensation processing method of digital camera
JP4458236B2 (ja) 2003-09-30 2010-04-28 パナソニック株式会社 固体撮像装置
JP2006140594A (ja) 2004-11-10 2006-06-01 Pentax Corp デジタルカメラ
JP4816336B2 (ja) 2006-02-07 2011-11-16 日本ビクター株式会社 撮像方法及び撮像装置
KR100843087B1 (ko) 2006-09-06 2008-07-02 삼성전자주식회사 영상 생성 장치 및 방법
US7773138B2 (en) 2006-09-13 2010-08-10 Tower Semiconductor Ltd. Color pattern and pixel level binning for APS image sensor using 2×2 photodiode sharing scheme
CN101150733A (zh) 2006-09-22 2008-03-26 华邦电子股份有限公司 影像像素干扰的补偿方法
US20080084942A1 (en) 2006-10-04 2008-04-10 Interdigital Technology Corporation Method and apparatus for advanced adaptive two dimensional channel interpolation in orthogonal frequency division multiplexing (ofdm) wireless communication systems
WO2008053791A1 (fr) 2006-10-31 2008-05-08 Sanyo Electric Co., Ltd. Dispositif d'imagerie et procédé de génération de signal vidéo utilisé dans le dispositif d'imagerie
JP4795929B2 (ja) 2006-12-26 2011-10-19 富士通株式会社 補間方法を決定するプログラム、装置、および方法
JP5053654B2 (ja) 2007-02-09 2012-10-17 オリンパスイメージング株式会社 画像処理装置およびその方法と電子カメラ
US20080235933A1 (en) 2007-03-29 2008-10-02 Emerson Power Transmission Manufacturing Mechanism for mounting and dismounting bearing
JP4359634B2 (ja) 2007-06-21 2009-11-04 シャープ株式会社 カラー固体撮像装置、および画素信号の読み出し方法
US8102435B2 (en) 2007-09-18 2012-01-24 Stmicroelectronics S.R.L. Method for acquiring a digital image with a large dynamic range with a sensor of lesser dynamic range
JP5113514B2 (ja) * 2007-12-27 2013-01-09 キヤノン株式会社 ホワイトバランス制御装置およびホワイトバランス制御方法
CN101227621A (zh) 2008-01-25 2008-07-23 炬力集成电路设计有限公司 在cmos传感器中对cfa进行插值的方法及电路
US7745779B2 (en) 2008-02-08 2010-06-29 Aptina Imaging Corporation Color pixel arrays having common color filters for multiple adjacent pixels for use in CMOS imagers
JP2010028722A (ja) 2008-07-24 2010-02-04 Sanyo Electric Co Ltd 撮像装置及び画像処理装置
JP5219778B2 (ja) 2008-12-18 2013-06-26 キヤノン株式会社 撮像装置及びその制御方法
CN101815157B (zh) 2009-02-24 2013-01-23 虹软(杭州)科技有限公司 图像及视频的放大方法与相关的图像处理装置
KR101335127B1 (ko) 2009-08-10 2013-12-03 삼성전자주식회사 에지 적응적 보간 및 노이즈 필터링 방법, 컴퓨터로 판독 가능한 기록매체 및 휴대 단말
US8724928B2 (en) 2009-08-31 2014-05-13 Intellectual Ventures Fund 83 Llc Using captured high and low resolution images
US8897551B2 (en) 2009-11-27 2014-11-25 Nikon Corporation Data processing apparatus with horizontal interpolation
JP2011248576A (ja) 2010-05-26 2011-12-08 Olympus Corp 画像処理装置、撮像装置、プログラム及び画像処理方法
US8803994B2 (en) 2010-11-18 2014-08-12 Canon Kabushiki Kaisha Adaptive spatial sampling using an imaging assembly having a tunable spectral response
US8878950B2 (en) 2010-12-14 2014-11-04 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using super-resolution processes
CN102073986A (zh) 2010-12-28 2011-05-25 冠捷显示科技(厦门)有限公司 实现显示装置画面放大的方法
JP5741007B2 (ja) 2011-01-21 2015-07-01 株式会社リコー 画像処理装置、画素補間方法およびプログラム
CN103380615B (zh) 2011-02-21 2015-09-09 三菱电机株式会社 图像放大装置及方法
CN103416067B (zh) 2011-03-11 2014-10-29 富士胶片株式会社 摄像装置
JP5524406B2 (ja) 2011-03-11 2014-06-18 富士フイルム株式会社 撮像装置及び撮像プログラム
JP5822508B2 (ja) 2011-04-05 2015-11-24 キヤノン株式会社 撮像装置及びその制御方法
JP2012234393A (ja) * 2011-05-02 2012-11-29 Sony Corp 画像処理装置、および画像処理方法、並びにプログラム
DE102011100350A1 (de) 2011-05-03 2012-11-08 Conti Temic Microelectronic Gmbh Bildsensor mit einstellbarer Auflösung
EP2731334A4 (en) 2011-07-08 2015-02-25 Olympus Corp PICTURE RECORDING DEVICE AND PICTURE PRODUCTION PROCESS
JP5528627B2 (ja) 2011-07-13 2014-06-25 富士フイルム株式会社 撮像装置、撮像素子及び感度差補正方法
JP2013066140A (ja) 2011-08-31 2013-04-11 Sony Corp 撮像装置、および信号処理方法、並びにプログラム
JP2013066146A (ja) 2011-08-31 2013-04-11 Sony Corp 画像処理装置、および画像処理方法、並びにプログラム
US8891866B2 (en) 2011-08-31 2014-11-18 Sony Corporation Image processing apparatus, image processing method, and program
WO2013046828A1 (ja) 2011-09-29 2013-04-04 富士フイルム株式会社 画像処理装置、方法、プログラムおよび記録媒体並びに撮像装置
GB201117191D0 (en) 2011-10-04 2011-11-16 Imagination Tech Ltd +Detecting image impairments in an interpolated image
JP5687608B2 (ja) 2011-11-28 2015-03-18 オリンパス株式会社 画像処理装置、画像処理方法、および画像処理プログラム
WO2013111449A1 (ja) 2012-01-24 2013-08-01 ソニー株式会社 画像処理装置、および画像処理方法、並びにプログラム
JP5889049B2 (ja) 2012-03-09 2016-03-22 オリンパス株式会社 画像処理装置、撮像装置及び画像処理方法
CN102630019B (zh) 2012-03-27 2014-09-10 上海算芯微电子有限公司 去马赛克的方法和装置
CN104170376B (zh) 2012-03-27 2016-10-19 索尼公司 图像处理设备、成像装置及图像处理方法
JP2013211603A (ja) 2012-03-30 2013-10-10 Sony Corp 撮像装置、撮像方法およびプログラム
WO2014034486A1 (ja) 2012-08-27 2014-03-06 富士フイルム株式会社 画像処理装置、方法、プログラム及び記録媒体並びに撮像装置
JP2014110507A (ja) 2012-11-30 2014-06-12 Canon Inc 画像処理装置および画像処理方法
KR101744761B1 (ko) 2012-11-30 2017-06-09 한화테크윈 주식회사 영상처리장치 및 방법
JP6264616B2 (ja) 2013-01-30 2018-01-24 パナソニックIpマネジメント株式会社 撮像装置及び固体撮像装置
CN104969545B (zh) 2013-02-05 2018-03-20 富士胶片株式会社 图像处理装置、摄像装置、图像处理方法以及程序
US9224362B2 (en) 2013-03-14 2015-12-29 Microsoft Technology Licensing, Llc Monochromatic edge geometry reconstruction through achromatic guidance
JP6263035B2 (ja) 2013-05-17 2018-01-17 キヤノン株式会社 撮像装置
US9692992B2 (en) 2013-07-01 2017-06-27 Omnivision Technologies, Inc. Color and infrared filter array patterns to reduce color aliasing
CN103810675B (zh) 2013-09-09 2016-09-21 深圳市华星光电技术有限公司 图像超分辨率重构***及方法
CN103531603B (zh) 2013-10-30 2018-10-16 上海集成电路研发中心有限公司 一种cmos图像传感器
US9438866B2 (en) 2014-04-23 2016-09-06 Omnivision Technologies, Inc. Image sensor with scaled filter array and in-pixel binning
CN103996170B (zh) 2014-04-28 2017-01-18 深圳市华星光电技术有限公司 一种具有超高解析度的图像边缘锯齿消除方法
JP6415113B2 (ja) 2014-05-29 2018-10-31 オリンパス株式会社 撮像装置、画像処理方法
US9888198B2 (en) 2014-06-03 2018-02-06 Semiconductor Components Industries, Llc Imaging systems having image sensor pixel arrays with sub-pixel resolution capabilities
CN104168403B (zh) 2014-06-27 2018-03-02 深圳市大疆创新科技有限公司 基于拜尔颜色滤波阵列的高动态范围视频录制方法和装置
US9479695B2 (en) 2014-07-31 2016-10-25 Apple Inc. Generating a high dynamic range image using a temporal filter
US9344639B2 (en) 2014-08-12 2016-05-17 Google Technology Holdings LLC High dynamic range array camera
JP5893713B1 (ja) 2014-11-04 2016-03-23 オリンパス株式会社 撮像装置、撮像方法、処理プログラム
US10013735B2 (en) 2015-01-28 2018-07-03 Qualcomm Incorporated Graphics processing unit with bayer mapping
JP6508626B2 (ja) 2015-06-16 2019-05-08 オリンパス株式会社 撮像装置、処理プログラム、撮像方法
CN105025283A (zh) 2015-08-07 2015-11-04 广东欧珀移动通信有限公司 一种新的色彩饱和度调整方法、***及移动终端
CN105120248A (zh) 2015-09-14 2015-12-02 北京中科慧眼科技有限公司 像素阵列及相机传感器
CN108141509B (zh) * 2015-10-16 2020-07-17 奥林巴斯株式会社 图像处理装置、摄像装置、图像处理方法和存储介质
CN105578067B (zh) 2015-12-18 2017-08-25 广东欧珀移动通信有限公司 图像生成方法、装置及终端设备
CN105578078B (zh) 2015-12-18 2018-01-19 广东欧珀移动通信有限公司 图像传感器、成像装置、移动终端及成像方法
CN105609516B (zh) 2015-12-18 2019-04-12 Oppo广东移动通信有限公司 图像传感器及输出方法、相位对焦方法、成像装置和终端
CN105578072A (zh) 2015-12-18 2016-05-11 广东欧珀移动通信有限公司 成像方法、成像装置及电子装置
CN105611124B (zh) 2015-12-18 2017-10-17 广东欧珀移动通信有限公司 图像传感器、成像方法、成像装置及终端
CN105578005B (zh) 2015-12-18 2018-01-19 广东欧珀移动通信有限公司 图像传感器的成像方法、成像装置和电子装置
CN105592303B (zh) 2015-12-18 2018-09-11 广东欧珀移动通信有限公司 成像方法、成像装置及电子装置
CN105611123B (zh) * 2015-12-18 2017-05-24 广东欧珀移动通信有限公司 成像方法、图像传感器、成像装置及电子装置
CN105578076A (zh) 2015-12-18 2016-05-11 广东欧珀移动通信有限公司 成像方法、成像装置及电子装置
JP6711612B2 (ja) 2015-12-21 2020-06-17 キヤノン株式会社 画像処理装置、画像処理方法、および撮像装置
US9883155B2 (en) 2016-06-14 2018-01-30 Personify, Inc. Methods and systems for combining foreground video and background video using chromatic matching
CN106604001B (zh) 2016-11-29 2018-06-29 广东欧珀移动通信有限公司 图像处理方法、图像处理装置、成像装置及电子装置
CN106713790B (zh) 2016-11-29 2019-05-10 Oppo广东移动通信有限公司 控制方法、控制装置及电子装置
CN106488203B (zh) 2016-11-29 2018-03-30 广东欧珀移动通信有限公司 图像处理方法、图像处理装置、成像装置及电子装置
CN106454289B (zh) 2016-11-29 2018-01-23 广东欧珀移动通信有限公司 控制方法、控制装置及电子装置
CN106506984B (zh) 2016-11-29 2019-05-14 Oppo广东移动通信有限公司 图像处理方法及装置、控制方法及装置、成像及电子装置
CN106357967B (zh) 2016-11-29 2018-01-19 广东欧珀移动通信有限公司 控制方法、控制装置和电子装置
CN106504218B (zh) 2016-11-29 2019-03-12 Oppo广东移动通信有限公司 控制方法、控制装置及电子装置
CN109005392B (zh) 2017-06-07 2021-01-12 联发科技股份有限公司 一种去马赛克方法及其电路

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101472048A (zh) * 2007-12-21 2009-07-01 索尼株式会社 图像拾取装置、色噪声降低方法和色噪声降低程序
US8248496B2 (en) * 2009-03-10 2012-08-21 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and image sensor
CN105516698A (zh) * 2015-12-18 2016-04-20 广东欧珀移动通信有限公司 图像传感器的成像方法、成像装置和电子装置
CN105611258A (zh) * 2015-12-18 2016-05-25 广东欧珀移动通信有限公司 图像传感器的成像方法、成像装置和电子装置
CN105573522A (zh) * 2015-12-22 2016-05-11 广东欧珀移动通信有限公司 一种移动终端的操作方法及移动终端
CN106412592A (zh) * 2016-11-29 2017-02-15 广东欧珀移动通信有限公司 图像处理方法、图像处理装置、成像装置及电子装置
CN106507068A (zh) * 2016-11-29 2017-03-15 广东欧珀移动通信有限公司 图像处理方法及装置、控制方法及装置、成像及电子装置
CN106507069A (zh) * 2016-11-29 2017-03-15 广东欧珀移动通信有限公司 控制方法、控制装置及电子装置
CN106507019A (zh) * 2016-11-29 2017-03-15 广东欧珀移动通信有限公司 控制方法、控制装置、电子装置

Also Published As

Publication number Publication date
US10447925B2 (en) 2019-10-15
EP3327664B1 (en) 2019-10-02
ES2757506T3 (es) 2020-04-29
US20190089896A1 (en) 2019-03-21
CN106507068B (zh) 2018-05-04
CN106507068A (zh) 2017-03-15
US10440265B2 (en) 2019-10-08
US20180152633A1 (en) 2018-05-31
EP3327664A1 (en) 2018-05-30

Similar Documents

Publication Publication Date Title
WO2018098983A1 (zh) 图像处理方法及装置、控制方法及装置、成像及电子装置
WO2018099009A1 (zh) 控制方法、控制装置、电子装置和计算机可读存储介质
WO2018098982A1 (zh) 图像处理方法、图像处理装置、成像装置及电子装置
WO2018099010A1 (zh) 控制方法、控制装置和电子装置
WO2018098978A1 (zh) 控制方法、控制装置、电子装置和计算机可读存储介质
WO2018098981A1 (zh) 控制方法、控制装置、电子装置和计算机可读存储介质
WO2017101451A1 (zh) 成像方法、成像装置及电子装置
TWI615027B (zh) 高動態範圍圖像的生成方法、拍照裝置和終端裝置、成像方法
WO2018099012A1 (zh) 图像处理方法、图像处理装置、成像装置及电子装置
WO2018099008A1 (zh) 控制方法、控制装置及电子装置
US10290079B2 (en) Image processing method and apparatus, and electronic device
WO2018099005A1 (zh) 控制方法、控制装置及电子装置
WO2018099031A1 (zh) 控制方法和电子装置
WO2018099007A1 (zh) 控制方法、控制装置及电子装置
TW201724842A (zh) 圖像傳感器及輸出方法、相位對焦方法、成像裝置和終端
WO2018098977A1 (zh) 图像处理方法、图像处理装置、成像装置、制造方法和电子装置
WO2018099011A1 (zh) 图像处理方法、图像处理装置、成像装置及电子装置
WO2018099006A1 (zh) 控制方法、控制装置及电子装置
US10165205B2 (en) Image processing method and apparatus, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17875325

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17875325

Country of ref document: EP

Kind code of ref document: A1