CN113890999A - Shooting method and device, electronic equipment and computer readable storage medium - Google Patents

Shooting method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN113890999A
Authority
CN
China
Prior art keywords
image
frames
images
pixel
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111245678.2A
Other languages
Chinese (zh)
Inventor
蒋乾波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202111245678.2A priority Critical patent/CN113890999A/en
Publication of CN113890999A publication Critical patent/CN113890999A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/02 Constructional features of telephone sets
    • H04M 1/0202 Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M 1/026 Details of the structure or mounting of specific components
    • H04M 1/0264 Details of the structure or mounting of specific components for a camera module assembly
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/50 Constructional details
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/50 Constructional details
    • H04N 23/54 Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/50 Constructional details
    • H04N 23/55 Optical parts specially adapted for electronic image sensors; Mounting thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/57 Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/10 Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N 25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N 25/13 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 Mixing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2250/00 Details of telephonic subscriber devices
    • H04M 2250/12 Details of telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a shooting method comprising: when an electronic device is in a static state, controlling an image sensor to move parallel to an imaging plane and collecting multiple frames of first images; and generating a target image from the multiple frames of first images. In the shooting method, shooting apparatus, electronic device, and non-volatile computer-readable storage medium of the application, controlling the image sensor to move parallel to the imaging plane while the electronic device is static introduces an offset between the collected frames of first images. As a result, the same collection point of a target object (corresponding to one pixel) falls on optical filters of different colors in different frames, so that the first images of different frames capture light of different color components from the same collection point.

Description

Shooting method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a photographing method, a photographing apparatus, an electronic device, and a non-volatile computer-readable storage medium.
Background
Image sensors used for color imaging are generally covered by optical filters, so each pixel can receive light of only one color. For example, with the common Bayer-array filter, a pixel receives only the red, green, or blue component of the incident light, and the pixel information missing at each pixel must be filled in by a demosaicing algorithm. However, the accuracy of pixel information calculated by a demosaicing algorithm is low.
Disclosure of Invention
The embodiment of the application provides a shooting method, a shooting device, an electronic device and a non-volatile computer readable storage medium.
The shooting method comprises the steps of controlling an image sensor to move parallel to an imaging surface when the electronic equipment is in a static state, and collecting a plurality of frames of first images; and generating a target image according to the plurality of frames of the first image.
The shooting device of the embodiment of the application comprises a control module and a generation module. The control module is used for controlling the image sensor to move parallel to the imaging surface when the electronic equipment is in a static state, and acquiring a plurality of frames of first images; the generating module is used for generating a target image according to the plurality of frames of the first image.
The electronic equipment comprises a camera, a processor and a driving device, wherein the camera comprises an image sensor, when the electronic equipment is in a static state, the driving device drives the image sensor to move parallel to an imaging plane, and the image sensor collects a plurality of frames of first images; the processor is used for generating a target image according to the first images of the plurality of frames.
The non-transitory computer-readable storage medium of the embodiments of the present application contains a computer program that, when executed by one or more processors, causes the processors to execute a photographing method of: when the electronic equipment is in a static state, controlling the image sensor to move parallel to an imaging surface, and acquiring a plurality of frames of first images; and generating a target image according to the plurality of frames of the first image.
In the shooting method, the shooting apparatus, the electronic device, and the non-volatile computer-readable storage medium of the embodiments of the present application, the image sensor is controlled to move parallel to the imaging plane while the electronic device is in a static state, so that an offset occurs between the collected multiple frames of first images. The collection point of the same target object (e.g., corresponding to one pixel) then falls on color filters of different colors in different frames, so the first images of different frames capture light of different color components from the same collection point. When the target image is generated from the multiple frames of first images, no demosaicing algorithm is required: the different color components of each collection point of the target object are obtained directly from the first images of different frames, and a demosaiced target image is generated directly. Compared with the poor accuracy of pixel information obtained through a demosaicing algorithm, omitting the demosaicing step reduces false-color and zipper artifacts and yields pixel information of high accuracy.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart of a shooting method according to some embodiments of the present application;
FIG. 2 is a schematic view of a photographing apparatus according to some embodiments of the present application;
FIG. 3 is a schematic plan view of an electronic device of some embodiments of the present application;
FIGS. 4 and 5 are schematic views of scenes of a shooting method according to some embodiments of the present application;
FIGS. 6 to 9 are schematic flowcharts of a shooting method according to some embodiments of the present application;
FIG. 10 is a schematic view of a scene of a shooting method according to some embodiments of the present application;
FIGS. 11 and 12 are schematic flow charts of a shooting method according to some embodiments of the present application;
FIGS. 13 to 15 are scene diagrams of a shooting method according to some embodiments of the present application;
FIG. 16 is a schematic diagram of a connection state of a non-volatile computer-readable storage medium and a processor of some embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the embodiments of the present application.
Referring to fig. 1, an embodiment of the present application provides a photographing method. The shooting method comprises the following steps:
011: when the electronic equipment is in a static state, controlling the image sensor to move parallel to an imaging surface, and acquiring a plurality of frames of first images; and
012: and generating a target image according to the plurality of frames of first images.
Referring to fig. 2, an embodiment of the present application provides a photographing apparatus 10. The photographing apparatus 10 includes a control module 11 and a generation module 12. The shooting method of the embodiment of the present application is applicable to the photographing apparatus 10, wherein the control module 11 and the generating module 12 are configured to execute step 011 and step 012, respectively. That is, the control module 11 is configured to control the image sensor to move parallel to the imaging plane when the electronic device is in a static state and acquire multiple frames of first images, and the generating module 12 is configured to generate a target image from the multiple frames of first images.
Referring to fig. 3, an electronic device 100 is further provided in the present embodiment. The electronic device 100 comprises a processor 20, a camera 30 and a driving means 40, the camera 30 comprising an image sensor 31. The shooting method according to the embodiment of the present application is applicable to the electronic device 100. The driving device 40 may cooperate with the image sensor 31 to perform step 011, and the processor 20 is configured to perform step 012. That is, when the electronic device 100 is in a static state, the driving device 40 drives the image sensor 31 to move parallel to the imaging plane, and the image sensor 31 collects multiple frames of first images; the processor 20 is configured to generate a target image according to the plurality of frames of the first image.
Specifically, the electronic device 100 further includes a housing 50. The electronic device 100 may be a cell phone, a tablet computer, a display device, a notebook computer, a teller machine, a gate, a smart watch, a head-up display device, a game console, etc. As shown in fig. 3, in the embodiment of the present application, the electronic device 100 is a mobile phone as an example, and it is understood that the specific form of the electronic device 100 is not limited to the mobile phone. The housing 50 may also be used to mount functional modules of the electronic device 100, such as a display device, an imaging device, a power supply device, and a communication device, so that the housing 50 provides protection for the functional modules, such as dust prevention, drop prevention, and water prevention.
The electronic device 100 further includes an attitude sensor for acquiring attitude data of the electronic device 100. For example, the attitude sensor may be a gyroscope capable of acquiring the angular velocities of the electronic device 100 about its pitch, roll, and yaw axes, or the attitude sensor may be an accelerometer capable of acquiring the accelerations of the electronic device 100 along those axes.
Taking a gyroscope as the attitude sensor as an example, the processor 20 may determine from the attitude data whether the electronic device 100 is in a static state. If the static state is defined strictly, i.e., the position of the electronic device 100 relative to the ground does not change at all, the preset attitude threshold is 0, and the electronic device 100 is determined to be static when the attitude data equals the preset attitude threshold. If the static state instead allows the position of the electronic device 100 relative to the ground to change by less than a predetermined amount, i.e., the electronic device 100 moves so slowly that the scene ranges of the first images of different frames are basically the same with essentially no offset, the preset attitude threshold may be, for example, 0.5, 1, or 2, and the electronic device 100 is determined to be static when the attitude data is smaller than the preset attitude threshold. That is, since the pitch, roll, and yaw angular velocities are signed and may be negative, the electronic device 100 is determined to be in the static state when the absolute values of the pitch angular velocity, the roll angular velocity, and the yaw angular velocity are all smaller than the preset attitude threshold.
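For illustration only, this threshold check can be sketched in a few lines of Python; the function name, interface, and default threshold value are assumptions, not part of the disclosure:

```python
PRESET_ATTITUDE_THRESHOLD = 0.5  # assumed value; the text also allows 0, 1, 2, etc.

def is_static(pitch_rate, roll_rate, yaw_rate, threshold=PRESET_ATTITUDE_THRESHOLD):
    """True when the magnitudes of all three angular velocities are below the
    preset attitude threshold (the rates are signed, so absolute values are used)."""
    return all(abs(r) < threshold for r in (pitch_rate, roll_rate, yaw_rate))

# A phone resting on a tripod: tiny gyroscope drift still counts as static.
assert is_static(0.02, -0.01, 0.0)
```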
When the electronic device 100 is in a static state, for example when shooting in a tripod mode, the scene range of the multiple frames of first images captured by the image sensor 31 stays unchanged and the frames have no offset between them, so demosaicing could only be achieved by interpolation with a demosaicing algorithm. Therefore, when the electronic device 100 is static, the processor 20 uses the driving device 40 to drive the image sensor 31 to move parallel to the imaging plane, so that the multiple frames of first images collected by the image sensor 31 are offset from one another. An offset between the multiple frames of first images means that, for the same acquisition point of the shot scene, the pixel information of that point differs between frames, because in different frames that information is generated by different color components of the light reflected from the point.
The driving device 40 can drive the image sensor 31 in several directions: for example, parallel to the imaging plane, or toward or away from the imaging plane to realize focusing. It is understood that, to ensure the scene range of the first images captured by the image sensor 31 is shifted, the movement directions at least include a direction parallel to the imaging plane, and the driving device 40 can precisely control the moving direction and moving distance of the image sensor 31 while it moves parallel to the imaging plane.
The driving device 40 may be a voice coil motor or a micro pan/tilt unit, which can move the image sensor 31 with high precision, achieving pixel-level offsets of the image sensor 31 smaller than a predetermined pixel distance, e.g., within 2 pixels.
For example, referring to fig. 4, which is a partial schematic view of the image sensor 31, take a Bayer-array image sensor 31 as an example. The image sensor 31 includes an R channel, a G channel, and a B channel. Before the image sensor 31 moves, it captures one frame of the first image P1, which includes a sub-image R1, a sub-image G1, and a sub-image B1; the sub-image R1 contains R pixels and blank pixels, the sub-image G1 contains G pixels and blank pixels, and the sub-image B1 contains B pixels and blank pixels. When the driving device 40 drives the image sensor 31 to move one pixel in the horizontal direction (horizontally to the right in fig. 4), the image sensor 31 captures another frame of the first image P1, which includes a sub-image R2, a sub-image G2, and a sub-image B2. When the driving device 40 then drives the image sensor 31 to move one pixel in the vertical direction perpendicular to the horizontal direction (vertically downward in fig. 4), the image sensor 31 captures a further frame of the first image P1, which includes a sub-image R3, a sub-image G3, and a sub-image B3. Finally, when the driving device 40 drives the image sensor 31 to move one pixel again in the horizontal direction (horizontally to the left in fig. 4), the image sensor 31 captures one more frame of the first image P1, which includes a sub-image R4, a sub-image G4, and a sub-image B4. In this way, 4 frames of the first image P1 are acquired.
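A minimal sketch of this capture sequence, assuming hypothetical `driver.move_pixels` and `sensor.capture_raw` interfaces (the real hardware API is device-specific):

```python
def capture_shifted_frames(driver, sensor):
    """Capture 4 RAW frames while the sensor traces a one-pixel square:
    initial position, one pixel right, one pixel down, one pixel left (fig. 4)."""
    frames = [sensor.capture_raw()]           # frame 1: initial position
    for dx, dy in [(1, 0), (0, 1), (-1, 0)]:  # right, down, left
        driver.move_pixels(dx, dy)            # motion parallel to the imaging plane
        frames.append(sensor.capture_raw())   # frames 2 to 4
    return frames
```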
The first image may be an unprocessed RAW image, that is, image data output by the image sensor 31 of the camera 30 without any algorithmic processing. Alternatively, the first image may be an image after black level correction (BLC) and lens shading correction (LSC), which yields a first image of better quality. Of course, to achieve better image quality, the first image may also be processed by algorithms other than BLC and LSC; the processing is not limited to the above.
After the offset frames of the first image are acquired, the target image may be synthesized from them. For example, referring to fig. 5, in the 4 frames of the first image P1, the sub-images of the same channel (the sub-images R1 to R4, the sub-images G1 to G4, and the sub-images B1 to B4) can respectively form a single-channel R-channel image, a single-channel G-channel image, and a single-channel B-channel image with no blank pixels, and the target image P0 may then be generated by synthesizing the R-channel, G-channel, and B-channel images. Because the image sensor 31 moves parallel to the imaging plane while acquiring the multiple frames of first images, the pixels that correspond to the same acquisition point in different frames receive the light reflected by that point through filters of different colors, so those pixels capture different color components of the reflected light.
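As an illustrative sketch of this recombination (not the claimed implementation; the RGGB layout and the shift sign conventions are assumptions), the four offset Bayer frames can be redistributed into blank-free single-channel planes as follows:

```python
import numpy as np

BAYER = np.array([["R", "G"],
                  ["G", "B"]])  # assumed RGGB unit cell

def merge_shifted_frames(frames, shifts):
    """Rebuild blank-free R/G/B planes from 4 Bayer frames with known sensor
    shifts. Each scene point is sampled through a different filter color in
    different frames, so every channel value is measured, not interpolated."""
    h, w = frames[0].shape
    planes = {c: np.zeros((h, w)) for c in "RGB"}
    for raw, (dx, dy) in zip(frames, shifts):
        for y in range(h):
            for x in range(w):
                color = BAYER[y % 2, x % 2]  # filter color at this photosite
                sx, sy = x - dx, y - dy      # scene position seen after the shift
                if 0 <= sx < w and 0 <= sy < h:
                    planes[color][sy, sx] = raw[y, x]
    return np.dstack([planes[c] for c in "RGB"])

# Cumulative sensor offsets of the four frames captured above:
# target = merge_shifted_frames(frames, [(0, 0), (1, 0), (1, 1), (0, 1)])
```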
In the shooting method, the photographing apparatus 10, and the electronic device 100 of the embodiments of the present application, the image sensor 31 is controlled to move parallel to the imaging plane while the electronic device 100 is in a static state, so that the collected multiple frames of first images are offset from one another and the collection point of the same target object (e.g., corresponding to one pixel) falls on optical filters of different colors in different frames. The first images of different frames therefore capture light of different color components from the same collection point. When the target image is generated from the multiple frames of first images, no demosaicing algorithm is needed: the different color components of each collection point are obtained directly from the first images of different frames, and a demosaiced target image is generated directly. Compared with the poor accuracy of pixel information obtained through a demosaicing algorithm, this reduces false-color and zipper artifacts and yields pixel information of high accuracy.
Referring to fig. 2, 3 and 6, in some embodiments, the photographing method further includes:
013: when the electronic device 100 is in a motion state, multiple frames of first images are acquired.
In some embodiments, the photographing apparatus 10 further comprises an acquisition module 13, and the acquisition module 13 is configured to execute step 013. That is, the acquiring module 13 is configured to acquire multiple frames of the first image when the electronic device 100 is in a motion state.
In certain embodiments, image sensor 31 is also used to perform step 013. That is, the image sensor 31 is also used for acquiring a plurality of frames of the first image when the electronic device 100 is in a motion state.
Specifically, the processor 20 may determine from the attitude data whether the electronic device 100 is in a motion state. For example, the processor 20 determines whether the attitude data (such as the pitch, roll, or yaw angular velocity) is greater than a preset attitude threshold, which may be 0, 0.5, 1, 2, or the like; when the attitude data is greater than the preset attitude threshold, the electronic device 100 is determined to be in a motion state. In other embodiments, to ensure a diversity of offsets between first images of different frames, the motion-state determination may require that the pitch, roll, and yaw angular velocities are all greater than the preset attitude threshold, or that at least two of them are. This ensures that the multiple frames of first images are offset in multiple directions and that the light of each acquisition point can be received through filters of at least three colors, so that an accurate demosaiced target image can be generated subsequently.
When the electronic device 100 is in a motion state, the multiple frames of first images acquired by the image sensor 31 are already offset from one another, so offset frames can be obtained without controlling the driving device 40 to move the image sensor 31 parallel to the imaging plane; the processor 20 can then synthesize the target image from these offset frames. The multiple frames of first images acquired in the motion state are handled like those acquired in the static state, and the principle of synthesizing the target image is not repeated here.
Referring to fig. 2, 3, 7 and 8, in some embodiments, the photographing method further includes:
014: when the attitude data is greater than a preset attitude threshold, determining that the electronic device 100 is in a motion state; or
015: when the attitude data is larger than a preset attitude threshold value, acquiring interframe information of a plurality of frames of collected images;
016: when the interframe information is within the preset range, it is determined that the electronic device 100 is in a motion state.
In some embodiments, the photographing apparatus 10 further includes a determining module 14, and the determining module 14 is configured to perform step 014, step 015, and step 016. That is, the determining module 14 is configured to determine that the electronic device 100 is in a motion state when the attitude data is greater than the preset attitude threshold; or to acquire inter-frame information of multiple frames of collected images when the attitude data is greater than the preset attitude threshold, and to determine that the electronic device 100 is in a motion state when the inter-frame information is within the preset range.
In certain embodiments, processor 20 is further configured to perform steps 014, 015, and 016. That is, the processor 20 is further configured to determine that the electronic device 100 is in a motion state when the gesture data is greater than the preset gesture threshold; or when the attitude data is larger than a preset attitude threshold value, acquiring interframe information of multiple frames of collected images; when the interframe information is within the preset range, it is determined that the electronic device 100 is in a motion state.
Specifically, the processor 20 may directly determine whether the electronic device 100 is in the motion state through the attitude data, for example, the processor 20 determines whether the attitude data (such as pitch angle velocity, roll angle velocity, or yaw angle velocity) is greater than a preset attitude threshold, which may be 0, 0.5, 1, 2, or the like, and when the attitude data is greater than the preset attitude threshold, determines that the electronic device 100 is in the motion state.
However, there may be a case where the electronic device 100 and the subject are both on a moving object (for example, in a smoothly running subway train, with the electronic device 100 mounted on a tripod and shooting in tripod mode) so that the relative position of the electronic device 100 and the subject does not change. The electronic device 100 is then moving relative to the earth and is in a motion state, yet there is no offset between the multiple frames of first images acquired by the image sensor 31; in that case a demosaiced target image cannot be generated without interpolation by a demosaicing algorithm.
Therefore, when the attitude data is determined to be greater than the preset attitude threshold, it is further determined whether the relative position of the electronic device 100 and the subject has changed. The processor 20 may obtain multiple frames of captured images and then calculate their inter-frame information, which may be determined from any two of the captured frames; for example, the sum of the differences between the pixel values at corresponding positions of any two frames may be used as the inter-frame information. When the relative position of the electronic device 100 and the subject does not change, the two frames are substantially the same, so the inter-frame information is substantially 0 or lies within a preset range, e.g., smaller than a preset difference threshold (such as 5 or 10). Whether the relative position of the electronic device 100 and the subject has changed can therefore be determined accurately by checking whether the inter-frame information is within the preset range; when it is, the electronic device 100 is determined to be in a motion state. Moreover, the inter-frame information is evaluated only when the directly acquired attitude data is already greater than the preset attitude threshold, which avoids computing inter-frame information in the static state, reduces the time needed to judge the state of the electronic device 100, and improves shooting efficiency.
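A minimal sketch of this two-stage judgment, with assumed threshold values and the sum-of-absolute-differences realization of the inter-frame information:

```python
import numpy as np

PRESET_DIFF_THRESHOLD = 10  # assumed; the text suggests values such as 5 or 10

def inter_frame_info(frame_a, frame_b):
    """Sum of pixel-value differences at corresponding positions of two frames."""
    return np.abs(frame_a.astype(np.int64) - frame_b.astype(np.int64)).sum()

def judge_motion_state(attitude_rates, frame_a, frame_b,
                       att_threshold=0.5, diff_threshold=PRESET_DIFF_THRESHOLD):
    """Inter-frame information is evaluated only after the attitude data already
    exceeds the preset attitude threshold, as in the description above."""
    if not any(abs(r) > att_threshold for r in attitude_rates):
        return False                     # attitude alone rules motion out
    info = inter_frame_info(frame_a, frame_b)
    return info < diff_threshold         # within the preset range
```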
Referring to fig. 2, 3 and 9, in some embodiments, step 012 includes:
0121: selecting a frame of first image from the multiple frames of first images according to a preset strategy to define as a reference frame, and defining the rest first images as non-reference frames;
0122: respectively calculating the pixel deviation and the similarity of a reference frame and each non-reference frame;
0123: generating a deviation function according to the pixel deviation and the similarity, determining the maximum similarity in the deviation function based on the deviation function, and aligning the non-reference frame in the first images of the multiple frames with the reference frame according to the pixel deviation corresponding to the maximum similarity;
0124: synthesizing the aligned plurality of frames of the first image to generate the target image.
In certain embodiments, the generation module 12 is further configured to perform step 0121, step 0122, step 0123, and step 0124. That is, the generating module 12 is configured to select one frame of first image from the multiple frames of first images according to a preset policy to define it as the reference frame and define the remaining first images as non-reference frames; calculate the pixel deviation and similarity between the reference frame and each non-reference frame; generate a deviation function from the pixel deviations and similarities, determine the maximum similarity of the deviation function, and align the non-reference frames among the multiple frames of first images with the reference frame according to the pixel deviation corresponding to the maximum similarity; and synthesize the aligned multiple frames of the first image to generate the target image.
In certain embodiments, the processor 20 is also configured to perform step 0121, step 0122, step 0123, and step 0124. That is, the processor 20 is configured to select one frame of first image from the multiple frames of first images according to a preset policy to define it as the reference frame and define the remaining first images as non-reference frames; calculate the pixel deviation and similarity between the reference frame and each non-reference frame; generate a deviation function from the pixel deviations and similarities, determine the maximum similarity of the deviation function, and align the non-reference frames among the multiple frames of first images with the reference frame according to the pixel deviation corresponding to the maximum similarity; and synthesize the aligned multiple frames of the first image to generate the target image.
Specifically, before aligning the multiple first images, the processor 20 needs to select one first image of the multiple first images as a reference frame and the remaining first images as non-reference frames according to a preset policy, so as to determine the reference frame and the non-reference frames of the multiple first images, and then perform alignment with reference to the reference frame. The preset policy may be: the sharpness of each first image is firstly calculated, the sharpness can be determined according to the gradients of the horizontal direction and the vertical direction of the first images, then the first image with the largest sharpness is selected as a reference frame, other first images except the reference frame in a plurality of frames of the first images are used as non-reference frames, and then the non-reference frames are aligned with the reference frame by taking the reference frame as a reference.
More specifically, the processor 20 may first calculate the pixel deviation and the similarity between the reference frame and each non-reference frame; since there are multiple frames of the first image, multiple groups of corresponding pixel deviations and similarities are obtained.
As shown in fig. 10, taking two frames of the first image as an example, the processor 20 may first identify a first pixel L1 in the reference frame P11, then identify the second pixel L2 corresponding to the first pixel L1 in the non-reference frame, obtain the pixel deviation between the first pixel L1 and the second pixel L2, and then obtain the similarity between the first pixel L1 and the second pixel L2 by the following formula (1). Since the reference frame and the non-reference frames among the multiple frames of the first image are determined, the processor 20 may obtain the pixel deviation between the reference frame and a non-reference frame from the position deviation between the first pixel L1 and the second pixel L2.
W = exp(-D(u, v) / (2δ²))    (1)
Wherein D(u, v) represents the pixel deviation between the reference frame and the non-reference frame, and δ is a manually set parameter for adjusting the similarity. As can be seen from formula (1), the greater the pixel deviation D(u, v), the smaller the similarity W.
In summary, the processor 20 may obtain the pixel deviation D(u, v) between the reference frame and the non-reference frame from the position deviation between the first pixel L1 on the reference frame P11 and the second pixel L2 on the non-reference frame P12, and substitute D(u, v) into formula (1) to obtain the similarity W between the reference frame and the non-reference frame.
In this way, through formula (1), the processor 20 obtains the pixel deviation and the similarity between the reference frame and each non-reference frame.
Thus, the processor 20 may construct a deviation function as shown in equation (2) below from the sets of pixel deviations and similarities.
W(u, v) = Σ_(x, y) exp(-[I(x, y) - I'(x + u, y + v)]² / (2δ²))    (2)
Where I(x, y) is the pixel at position (x, y) in the reference frame, I'(x + u, y + v) is the corresponding pixel in the non-reference frame, u and v represent the offsets of the non-reference frame relative to the reference frame in the horizontal and vertical directions, respectively, and W represents the similarity between the I(x, y) position in the reference frame and the corresponding position in the non-reference frame.
Thus, by finding the maximum of the similarity W in the deviation function of formula (2), the non-reference frames among the multiple frames of the first image can be aligned with the reference frame according to the pixel deviation (u, v) corresponding to that maximum.
At present, when multi-frame images are aligned, the images are merely traversed pixel by pixel or region by region to obtain the position offset between the non-reference frame and the reference frame at corresponding positions, and the frames are aligned accordingly; the resulting offsets are limited to whole-pixel precision.
In contrast, this embodiment aligns the multiple frames of the first image by constructing a deviation function of the pixel deviation and the similarity between the reference frame and the non-reference frames and aligning according to the pixel deviation corresponding to the maximum similarity. Because the maximum of the deviation function need not fall on an integer offset, the pixel deviation calculated from the maximum similarity can be accurate to less than 1 pixel, i.e., the sub-pixel level, improving alignment precision.
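A sketch of this alignment under the reconstructed formulas (1) and (2); the Gaussian form and the parabolic sub-pixel refinement are assumptions (the patent claims sub-pixel precision but does not name the interpolation it uses):

```python
import numpy as np

def similarity_map(ref, non_ref, search=3, delta=10.0):
    """Evaluate the deviation function over integer shifts (u, v)."""
    size = 2 * search + 1
    W = np.zeros((size, size))
    for j, v in enumerate(range(-search, search + 1)):
        for i, u in enumerate(range(-search, search + 1)):
            shifted = np.roll(np.roll(non_ref, v, axis=0), u, axis=1)
            d = np.mean((ref.astype(float) - shifted.astype(float)) ** 2)
            W[j, i] = np.exp(-d / (2 * delta ** 2))
    return W

def subpixel_offset(W, search=3):
    """Locate the similarity maximum, then refine it with a 1-D parabolic fit."""
    j, i = np.unravel_index(np.argmax(W), W.shape)
    def refine(left, center, right):
        den = left - 2 * center + right
        return 0.0 if den == 0 else 0.5 * (left - right) / den
    n = W.shape[0] - 1
    du = refine(W[j, max(i - 1, 0)], W[j, i], W[j, min(i + 1, n)])
    dv = refine(W[max(j - 1, 0), i], W[j, i], W[min(j + 1, n), i])
    return (i - search) + du, (j - search) + dv
```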
Referring to fig. 2, 3 and 11, in some embodiments, step 0121: selecting a frame of first image from a plurality of frames of first images according to a preset strategy to define as a reference frame, and defining the rest first images as non-reference frames, wherein the method comprises the following steps:
01211: respectively calculating the sharpness of the multiple frames of first images, the sharpness being determined from the gradients of the first image in the horizontal and vertical directions;
01212: determining a first image with the maximum sharpness as a reference frame; and
01213: the first image other than the reference frame is determined to be a non-reference frame.
In certain embodiments, the generation module 12 is further configured to perform step 01211, step 01212 and step 01213. That is, the generating module 12 is configured to calculate the sharpness of each of the multiple frames of first images, the sharpness being determined from the gradients of the first image in the horizontal and vertical directions; determine the first image with the maximum sharpness as the reference frame; and determine the first images other than the reference frame as non-reference frames.
In certain embodiments, processor 20 is configured to perform step 01211, step 01212, and step 01213. That is, the processor 20 is configured to calculate the sharpness of the first image of the plurality of frames, respectively, the sharpness being determined according to the gradients of the first image in the horizontal direction and the vertical direction; determining a first image with the maximum sharpness as a reference frame; and determining the first image except the reference frame as a non-reference frame.
In particular, the processor 20 may calculate the sharpness of each of the multiple frames of the first image, the sharpness being determined from the gradients of the first image in the horizontal and vertical directions. Sharpness directly reflects the definition of an image: within a certain range, the larger the sharpness, the better the image quality.
Therefore, the sharpness of the first image of a plurality of frames can be calculated, the first image with the highest sharpness is used as a reference frame, and the first image except the reference frame is a non-reference frame.
More specifically, the sharpness of the first image may be calculated according to the following formula (3):
S = Σ_(x, y) √(Ix² + Iy²)    (3)
where Ix is the gradient of the first image in the horizontal direction, Iy is the gradient of the first image in the vertical direction, and S is the sharpness of the first image.
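A short sketch of reference-frame selection under this criterion; np.gradient is one way to obtain Ix and Iy, and the exact gradient operator is an assumption:

```python
import numpy as np

def sharpness(img):
    """Sharpness from the horizontal and vertical gradients, per formula (3)."""
    iy, ix = np.gradient(img.astype(float))  # gradients along rows and columns
    return float(np.sqrt(ix ** 2 + iy ** 2).sum())

def split_reference(frames):
    # The sharpest first image becomes the reference frame; the rest are
    # non-reference frames.
    k = int(np.argmax([sharpness(f) for f in frames]))
    return frames[k], [f for i, f in enumerate(frames) if i != k]
```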
Referring to fig. 2, 3 and 12, in the photographing method according to the embodiment of the present application, the step 012 further includes:
0125: extracting image data of channels of the same preset type in the first image to generate a plurality of second images of a single channel; and
0126: processing the second image of the single channel to generate a third image, the third image being larger in size than the second image;
0127: synthesizing a plurality of third images to generate a target image, the third images including channel pixels and filler pixels, the filler pixels being derived from the channel pixels by the blank pixels, including:
01271: polling channel pixels and blank pixels through a polling frame with a preset size, and calculating the weight of the channel pixels according to the position offset of the channel pixels and the blank pixels in the polling frame and the gradient information of the polling frame; and
01272: and filling the blank pixels into the filling pixels according to the pixel values and the weights of the channel pixels in the polling frame.
In certain embodiments, the generation module 12 is configured to perform step 0125, step 0126, step 0127, step 01271, and step 01272. That is, the generating module 12 is configured to extract image data of channels of the same preset type from the first images to generate a plurality of single-channel second images; process the single-channel second image to generate a third image whose size is larger than that of the second image; and synthesize a plurality of the third images to generate the target image, the third image including channel pixels and blank pixels, the blank pixels being filled into filling pixels according to the channel pixels, which includes: polling the channel pixels and blank pixels with a polling frame of preset size, calculating the weight of each channel pixel from the position offset between the channel pixel and the blank pixel in the polling frame and the gradient information of the polling frame, and filling the blank pixels into filling pixels according to the pixel values and weights of the channel pixels in the polling frame.
In certain embodiments, the processor 20 is configured to perform step 0125, step 0126, step 0127, step 01271, and step 01272. That is, the processor 20 is configured to extract image data of channels of the same preset type from the first images to generate a plurality of single-channel second images; process the single-channel second image to generate a third image whose size is larger than that of the second image; and synthesize a plurality of the third images to generate the target image, the third image including channel pixels and blank pixels, the blank pixels being filled into filling pixels according to the channel pixels, which includes: polling the channel pixels and blank pixels with a polling frame of preset size, calculating the weight of each channel pixel from the position offset between the channel pixel and the blank pixel in the polling frame and the gradient information of the polling frame, and filling the blank pixels into filling pixels according to the pixel values and weights of the channel pixels in the polling frame.
Specifically, the processor 20 may further extract image data of channels of the same preset type from the multiple frames of first images, and fuse the extracted image data to generate multiple second images of a single channel.
For example, for a filter arranged in a conventional Bayer array, the preset type channels may be determined by color: the preset type channels of the first image include an R channel, a G channel, and a B channel, with the pixel-count ratio of the R, G, and B channels being 1:2:1. Alternatively, the preset type channels may be determined by the pixels contained in the basic unit of the filter; the basic unit of the Bayer array contains 4 pixels, so the first image includes an R channel, a G1 channel, a G2 channel, and a B channel, where the wavebands of G1 and G2 may be the same or partially overlapping. Or the first image may include an R channel, a G channel, a B channel, and a W channel, each with the same number of pixels. Of course, the channel layout of the first image is not limited to the above two ways, and no limitation is made here. In the embodiment of the present application, the first image includes an R channel, a G1 channel, a G2 channel, and a B channel.
Thus, the processor 20 may determine the number of second images to extract based on the number of channels in the smallest repeating unit of the first image: if the smallest repeating unit includes 4 channels (R, G1, G2, and B), the number of second images is 4. The processor 20 may also determine the number of second images from the number of colors in the first image: e.g., if the first image includes the three colors R, G, and B, the number of second images is 3; if the first image includes the four colors R, G, B, and W, the number of second images is 4.
It should be noted that, if the smallest repeating unit of the first image includes 4 channels, 4 single-channel data maps can be extracted from each frame of the first image, so N frames of the first image yield 4 × N single-channel second images.
In processing the second image of a single channel, the second image may be magnified by a preset factor, so that the resolution of the generated third image is enlarged by that factor, such as 2, 3, 4 times or more. As shown in fig. 13, enlarging the second image by a factor of 2 gives the third image shown on the right of fig. 13; the third image includes channel pixels, whose pixel values equal those of the corresponding pixels in the second image, and pixels not yet filled with values, i.e., blank pixels. The pixel values of the blank pixels can be derived from the pixel values of the channel pixels, yielding the values of the filling pixels.
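A minimal sketch of this magnification step, assuming zero-insertion placement of the channel pixels (NaN marks the blank pixels awaiting filling):

```python
import numpy as np

def upscale_with_blanks(second, factor=2):
    """Enlarge a single-channel second image by `factor`; channel pixels keep
    their values and all new positions stay blank until filled."""
    h, w = second.shape
    third = np.full((h * factor, w * factor), np.nan)
    third[::factor, ::factor] = second  # channel pixels; the rest are blank pixels
    return third
```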
The single-channel second images include an image of the R (red) channel, an image of the G (green) channel, an image of the B (blue) channel, an image of the W (white) channel, and the like. After the second image of the R channel is processed, the generated third image is an enlarged image of the second image of the R channel; likewise for the G channel and the B channel. Thus, when there are second images of several different colors, single-channel third images of the corresponding colors are generated.
Next, the processor 20 may poll the channel pixel and the blank pixel in the third image through the polling frame to calculate the weight of the channel pixel according to the position offset of the channel pixel and the blank pixel in the same polling frame and the gradient information of the polling frame.
Specifically, as shown in fig. 14, a coordinate system is established with the upper-left corner of the third image as the origin, and S is the polling frame. When the polling frame S polls the pixel N, the channel pixels inside the polling frame S are A1, A2, A4, and A5, and the position offset between each channel pixel and the blank pixel can be read from the coordinate system. Taking one pixel as the unit on the coordinate axes, the position of channel pixel A1 relative to the blank pixel is (1, 1), that of A2 is (-1, 1), that of A4 is (1, -1), and that of A5 is (-1, -1). The gradient information of the polling frame is the gradient information of the portion of the third image covered by the polling frame, and can be obtained from the gradient values of the channel pixels inside it. The weights of all channel pixels can then be obtained by formula (4) below, and the position offset between a channel pixel and the blank pixel by formula (5) below. For example, for the polling frame S, the position offset of channel pixel A1 from the blank pixel is d1 = [1, 1]^T; substituting it into formula (4) gives the weight W1 of channel pixel A1. In the same way, the weight W1 of A1, the weight W2 of A2, the weight W4 of A4, and the weight W5 of A5 can be obtained, i.e., the weights of all channel pixels in the polling frame S.
Wi = exp(-(di^T C⁻¹ di) / 2)    (4)
di = [xi - x0, yi - y0]^T    (5)
Where Wi is the weight of the channel pixel at the i-th pixel position, di is the position offset between the channel pixel at the i-th pixel position and the blank pixel, C is a covariance matrix constructed from the gradient information of the portion of the third image covered by the polling frame, T denotes transposition, R(x, y) is the pixel value of the blank pixel polled by the polling frame, and Vi is the pixel value of the channel pixel at the i-th pixel position.
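A sketch of the weight computation under the reconstructed formulas (4) and (5); the identity covariance in the example is a placeholder, since the real C comes from the gradient information inside the polling frame:

```python
import numpy as np

def channel_pixel_weight(d, C):
    """Weight of one channel pixel from its offset d to the blank pixel and the
    covariance matrix C of the polling frame's gradients (formula (4))."""
    d = np.asarray(d, dtype=float)
    return float(np.exp(-0.5 * d @ np.linalg.inv(C) @ d))

# Offsets of A1, A2, A4, A5 relative to the blank pixel N in polling frame S:
C = np.eye(2)  # placeholder covariance
weights = [channel_pixel_weight(d, C) for d in [(1, 1), (-1, 1), (1, -1), (-1, -1)]]
```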
After the M frames of images are aligned and M × N second images are extracted, these M × N second images generate a plurality of third images. Second images of the same preset type channel therefore correspond to a plurality of third images. For example, two frames of the first image including an R channel, a G1 channel, a G2 channel, and a B channel generate two second images of the R channel, two of the G1 channel, two of the G2 channel, and two of the B channel, where second images of the same channel are second images of the same preset type.
Taking the two second images of the R channel as an example, they correspond to two third images of the R channel, so the processor 20 can calculate the pixel value of a blank pixel from the pixel values and weights of the channel pixels in the polling frames at corresponding positions in the two third images. As shown in fig. 15, P3 and P4 each represent one third image. When the polling frame S2 polls a blank pixel N1 in the third image P3, the pixel value of the blank pixel N1 needs to be calculated; this requires the pixel values and corresponding weights (calculated by formula (4) above) of the channel pixels R1, R2, R4, and R5 in the polling frame S2 of the third image P3, and the pixel values and corresponding weights of the channel pixels R1', R2', R4', and R5' in the polling frame S3 of the third image P4. The pixel value of the blank pixel N1 is then found by formula (6) below.
R(x, y) = ( Σ_(m=1)^(M) Σ_i Wi · Vi ) / ( Σ_(m=1)^(M) Σ_i Wi )    (6)
Where M is the number of frames of the third image (as described above, the number of frames M equals the number of third images of the channel).
Note that if the polling frame S2 in the third image P3 and the polling frame S3 in the third image P4 are at the same position, the blank pixel N1 in P3 and the corresponding blank pixel N2 in P4 are at the same position. As is clear from formula (6), the calculation parameters of blank pixels at the same position in the plurality of third images are identical, so their pixel values are identical. The processor 20 therefore only needs to calculate the pixel values of the blank pixels in one third image to obtain the pixel values of the blank pixels in all the third images; the blank pixels are then filled with these values as filling pixels, giving a third image with all pixels filled.
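A sketch of the filling step under the reconstructed formula (6); the numeric values are purely illustrative:

```python
import numpy as np

def fill_blank_pixel(values, weights):
    """Weighted average over the channel pixels gathered from the polling
    frames at the same position in all M third images (formula (6))."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float((weights * values).sum() / weights.sum())

# Channel pixels R1, R2, R4, R5 of P3 and R1', R2', R4', R5' of P4 contribute
# to blank pixel N1 (illustrative pixel values and weights):
print(fill_blank_pixel([100, 104, 98, 102, 101, 103, 99, 105],
                       [0.6, 0.6, 0.6, 0.6, 0.5, 0.5, 0.5, 0.5]))
```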
It can be understood that, since blank pixels at the same position are computed identically in the manner above, the pixel values of the filling pixels in the plurality of third images are all the same. The processor 20 may generate a fourth image from the pixel values of the filling pixels and the pixel values of the channel pixels of the third image corresponding to the reference frame. In this manner, the fourth image of the R channel, the fourth image of the G1 channel, the fourth image of the G2 channel, and the fourth image of the B channel can be obtained.
A fourth image generated in this way, from the filling-pixel values and the channel-pixel values of the third image corresponding to the reference frame, better preserves the gradient information of the first image.
Because the filling of the blank pixels draws on the single-channel image information of the multiple offset frames of the first image, the gradient information of the multiple third images is fused and the pixel information of the third images is more accurate. Demosaicing by a demosaicing algorithm is therefore unnecessary: the fourth image of the R channel, the fourth image of the G1 channel, the fourth image of the G2 channel, and the fourth image of the B channel are fused directly, and a demosaiced target image can be generated.
In some embodiments, after obtaining the target image, the processor 20 may further perform an image processing operation on the target image to further improve the image quality of the target image. Wherein the image processing operation comprises at least one of gamma correction, color correction, and sharpening operations.
The processor 20 may perform at least one of gamma correction, color correction, and sharpening on the target image. Gamma correction can improve the brightness of the target image; color correction can make its colors better match the real scene; and sharpening can improve its definition. The processor 20 may choose the processing according to the actual state of the generated target image: gamma correction if the brightness is low, color correction if the colors are distorted, and sharpening if the definition is low. In this way, a target image with better image quality, and hence a better visual experience, can be obtained.
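For illustration, two of these operations might be sketched as follows; the gamma value and the unsharp-masking operator are assumptions, as the patent does not fix the concrete algorithms:

```python
import numpy as np

def gamma_correct(img, gamma=2.2):
    """Power-law gamma correction to lift the brightness of a dark image."""
    x = np.clip(img.astype(float) / 255.0, 0.0, 1.0)
    return (255.0 * x ** (1.0 / gamma)).astype(np.uint8)

def sharpen(img, amount=0.5):
    """Unsharp masking with a 3x3 box blur to improve definition."""
    f = img.astype(float)
    pad = np.pad(f, 1, mode="edge")
    blur = sum(pad[dy:dy + f.shape[0], dx:dx + f.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0
    return np.clip(f + amount * (f - blur), 0, 255).astype(np.uint8)
```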
Referring to fig. 16, the present embodiment further provides a non-volatile computer-readable storage medium 200 containing a computer program 201. The computer program 201, when executed by the one or more processors 20, causes the one or more processors 20 to perform the shooting method of any of the embodiments described above.
For example, the computer program 201, when executed by the one or more processors 20, causes the processors 20 to perform the following photographing method:
011: when the electronic device is in a stationary state, controlling the image sensor to move parallel to an imaging plane, and acquiring a plurality of frames of first images; and
012: generating a target image according to the plurality of frames of first images.
As another example, the computer program 201, when executed by the one or more processors 20, causes the processors 20 to perform the following photographing method:
013: when the electronic device 100 is in a motion state, acquiring multiple frames of first images.
In the description herein, references to the terms "certain embodiments," "one example," "exemplary," and the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples, and those skilled in the art may combine features of different embodiments or examples described in this specification provided they do not contradict one another.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those skilled in the art.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (13)

1. A photographing method, characterized by comprising:
when the electronic device is in a stationary state, controlling an image sensor to move parallel to an imaging plane, and acquiring a plurality of frames of first images; and
generating a target image according to the plurality of frames of the first image.
2. The photographing method according to claim 1, further comprising:
when the electronic device is in a motion state, acquiring a plurality of frames of the first image.
3. The photographing method according to claim 1, wherein the electronic device further includes an attitude sensor for collecting attitude data, the photographing method further comprising:
acquiring the attitude data; and
when the attitude data is smaller than a preset attitude threshold, determining that the electronic device is in the stationary state.
4. The photographing method according to claim 3, wherein the attitude data includes a pitch angular velocity, a roll angular velocity, and a yaw angular velocity, and the determining that the electronic device is in the stationary state when the attitude data is smaller than a preset attitude threshold comprises:
when the pitch angular velocity, the roll angular velocity, and the yaw angular velocity are all smaller than the preset attitude threshold, determining that the electronic device is in the stationary state.
5. The photographing method according to claim 2, wherein the electronic device further includes an attitude sensor for collecting attitude data, the photographing method further comprising:
when the attitude data is larger than a preset attitude threshold, determining that the electronic device is in the motion state; or
when the attitude data is larger than the preset attitude threshold, acquiring inter-frame information of multiple frames of acquired images, and
when the inter-frame information is within a preset range, determining that the electronic device is in the motion state.
6. The photographing method according to claim 1, wherein the controlling the image sensor to move parallel to the imaging plane comprises: controlling the image sensor to be movable in a plurality of moving directions, the moving directions including at least a direction parallel to the imaging plane.
7. The photographing method according to claim 1, wherein the electronic device further includes a driving device comprising a voice coil motor or a micro gimbal, and the controlling the image sensor to move parallel to the imaging plane comprises:
controlling the driving device to drive the image sensor to move parallel to the imaging plane.
8. The photographing method according to claim 1, wherein the generating a target image according to the plurality of frames of the first image comprises:
selecting one frame of the first image from the plurality of frames of the first image according to a preset strategy, the selected first image being defined as a reference frame and the remaining first images being defined as non-reference frames;
calculating the pixel deviation and the similarity between the reference frame and each non-reference frame;
generating a deviation function according to the pixel deviation and the similarity, determining the maximum similarity based on the deviation function, and aligning the non-reference frames in the plurality of frames of the first image with the reference frame according to the pixel deviation corresponding to the maximum similarity; and
synthesizing the aligned plurality of frames of the first image to generate the target image.
9. The photographing method according to claim 8, wherein the selecting one frame of the first image from the plurality of frames of the first image according to a preset strategy, the selected first image being defined as a reference frame and the remaining first images being defined as non-reference frames, comprises:
calculating the sharpness of each of the plurality of frames of the first image, the sharpness being determined according to the gradients of the first image in the horizontal and vertical directions;
determining the first image with the greatest sharpness as the reference frame; and
determining the first images other than the reference frame as the non-reference frames.
10. The photographing method according to claim 1 or 8, wherein the generating a target image according to the plurality of frames of the first image further comprises:
extracting image data of channels of the same preset type from the first images to generate a plurality of single-channel second images;
processing the single-channel second images to generate third images, the third images being larger in size than the second images; and
synthesizing a plurality of the third images to generate the target image, the third images including channel pixels and filling pixels, the filling pixels being obtained by filling blank pixels according to the channel pixels, which includes:
polling the channel pixels and the blank pixels with a polling frame of a preset size, and calculating the weights of the channel pixels according to the position offsets between the channel pixels and the blank pixels in the polling frame and the gradient information of the polling frame; and
calculating the pixel value of a blank pixel according to the pixel values and the weights of the channel pixels in a plurality of polling frames in a plurality of third images corresponding to the second images of the same preset type of channel, and filling the pixel value of the blank pixel in as the filling pixel, wherein the plurality of polling frames correspond to the plurality of third images respectively and are at the same position in each of the third images.
11. An image acquisition apparatus, characterized by comprising:
a first acquisition module configured to control the image sensor to move parallel to the imaging plane and acquire a plurality of frames of first images when the electronic device is in a stationary state; and
a generating module configured to generate a target image according to the plurality of frames of the first image.
12. An electronic device, comprising a camera, a driving device, and a processor, wherein the camera comprises an image sensor; when the electronic device is in a stationary state, the driving device drives the image sensor to move parallel to an imaging plane and the image sensor acquires a plurality of frames of first images; and the processor is configured to generate a target image according to the plurality of frames of first images.
13. A non-transitory computer-readable storage medium containing a computer program which, when executed by a processor, causes the processor to execute the photographing method according to any one of claims 1 to 10.
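To illustrate the reference-frame selection recited in claims 8 and 9, a minimal sketch follows. The claims state only that sharpness is determined from the horizontal and vertical gradients of the first image; the mean-absolute-difference measure used here is an assumption, not the claimed formula.

import numpy as np

def sharpness(frame):
    """Gradient-based sharpness: sum of the mean magnitudes of the
    horizontal and vertical finite differences (one plausible measure)."""
    f = frame.astype(np.float32)
    return np.abs(np.diff(f, axis=1)).mean() + np.abs(np.diff(f, axis=0)).mean()

def pick_reference(frames):
    """Split frames into (reference frame, non-reference frames), taking
    the frame with the greatest sharpness as the reference."""
    idx = int(np.argmax([sharpness(f) for f in frames]))
    return frames[idx], [f for i, f in enumerate(frames) if i != idx]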
CN202111245678.2A 2021-10-26 2021-10-26 Shooting method and device, electronic equipment and computer readable storage medium Pending CN113890999A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111245678.2A CN113890999A (en) 2021-10-26 2021-10-26 Shooting method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111245678.2A CN113890999A (en) 2021-10-26 2021-10-26 Shooting method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113890999A (en) 2022-01-04

Family

ID=79014293

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111245678.2A Pending CN113890999A (en) 2021-10-26 2021-10-26 Shooting method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113890999A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115297260A (en) * 2022-07-29 2022-11-04 Vivo Mobile Communication Co., Ltd. Image processing method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070014551A1 (en) * 2005-07-13 2007-01-18 Konica Minolta Photo Imaging, Inc. Image sensing apparatus, imaging system, and operation program product therefor
CN101115209A (en) * 2006-07-24 2008-01-30 Samsung Electronics Co., Ltd. Method and apparatus for color interpolation in digital photographing device
US20140125825A1 (en) * 2012-11-08 2014-05-08 Apple Inc. Super-resolution based on optical image stabilization
JP2016167728A (en) * 2015-03-10 2016-09-15 Ricoh Imaging Co., Ltd. Image detection apparatus, image detection method, and imaging apparatus
CN106210678A (en) * 2016-07-29 2016-12-07 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image color processing method, device and terminal unit

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination