WO2022237286A1 - An image fusion method and electronic device - Google Patents

An image fusion method and electronic device

Info

Publication number
WO2022237286A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
electronic device
camera
viewfinder range
interface
Prior art date
Application number
PCT/CN2022/079139
Other languages
English (en)
French (fr)
Inventor
肖斌
丁大钧
陆洋
王宇
朱聪超
Original Assignee
Honor Device Co., Ltd. (荣耀终端有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co., Ltd. (荣耀终端有限公司)
Priority to US17/915,580 priority Critical patent/US20240212100A1/en
Priority to MX2022011895A priority patent/MX2022011895A/es
Priority to EP22743697.9A priority patent/EP4117275A4/en
Publication of WO2022237286A1 publication Critical patent/WO2022237286A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N23/635Region indicators; Field of view indicators
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/69Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • the embodiments of the present application relate to the technical field of electronic equipment, and in particular, to an image fusion method and electronic equipment.
  • electronic devices such as mobile phones, tablet computers, and smart watches.
  • cameras can be installed in most electronic devices, so that the electronic devices have the function of capturing images.
  • multiple cameras can be installed in the mobile phone, such as a main camera, a telephoto camera, and a wide-angle camera.
  • the mobile phone can use different cameras to capture images of the same shooting scene to obtain images with different characteristics. For example, based on the long focal length of the telephoto camera, the mobile phone can use the telephoto camera to capture a locally clear telephoto image. For another example, based on the large light input and high resolution of the main camera, the mobile phone can use the main camera to capture an image that is relatively clear overall. For another example, based on the short focal length and large field of view of the wide-angle camera, the mobile phone can use the wide-angle camera to capture images with a large viewing angle.
  • the electronic device needs to respond to multiple operations of the user before it can capture multiple images with different characteristics.
  • the shooting process of the electronic device is relatively cumbersome, which affects the user's shooting experience.
  • the present application provides an image fusion method and electronic equipment, which can simplify the process of shooting multiple images with different characteristics, and improve the user's shooting experience.
  • the present application provides a method for image fusion, which can be applied to an electronic device, where the electronic device includes a first camera, a second camera, and a third camera; the field of view of the first camera is larger than that of the second camera, and the field of view of the third camera is larger than that of the first camera.
  • the electronic device detects the first operation.
  • the electronic device may capture a first image through the first camera, capture a second image through the second camera, and capture a third image through the third camera.
  • the viewfinder range in which the first camera collects the first image is the first viewfinder range
  • the viewfinder range in which the second camera collects the second image is the second viewfinder range
  • the viewfinder range in which the third camera collects the third image is the third viewfinder range
  • the third viewing range is larger than the first viewing range
  • the first viewing range is larger than the second viewing range.
  • the electronic device can process the first image to obtain the fourth image.
  • the fourth image includes the first region image
  • the resolution of the first region image in the fourth image is the same as the resolution of the second image
  • the viewfinder range of the first region image relative to the first camera is the fourth viewfinder range.
  • the first viewfinder range includes the fourth viewfinder range, and the fourth viewfinder range overlaps with the second viewfinder range.
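To make the viewfinder-range bookkeeping above concrete, the relation between the first and fourth viewfinder ranges can be pictured as a centered-crop calculation. This is an illustrative sketch, not part of the patent: it assumes the two cameras' optical axes are aligned so the telephoto field of view sits at the center of the main frame, and all names are invented for illustration.

```python
# Sketch: locate the fourth viewfinder range -- the sub-region of the main
# (first) camera's frame that the telephoto (second) camera sees -- assuming
# both fields of view share a center.

def crop_region(main_w, main_h, zoom_ratio):
    """Return (left, top, width, height) of the centered sub-region of a
    main_w x main_h frame seen by a camera whose equivalent zoom is
    `zoom_ratio` times the main camera's."""
    w = main_w / zoom_ratio
    h = main_h / zoom_ratio
    left = (main_w - w) / 2
    top = (main_h - h) / 2
    return (left, top, w, h)

# A 4x telephoto sees the central quarter (in each dimension) of the main frame.
assert crop_region(4000, 3000, 4.0) == (1500.0, 1125.0, 1000.0, 750.0)
```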
  • the electronic device may perform image fusion on the second image and the fourth image to obtain a fifth image.
  • the electronic device can perform image fusion on the second image and the fourth image.
  • the second image has the characteristic of clear local images (i.e., images of distant objects), and the fourth image has the characteristic of a relatively clear overall image.
  • the electronic device performs image fusion on the second image and the fourth image, and can synthesize the characteristics of the second image and the fourth image to obtain a fifth image with higher overall image definition and higher local image definition. That is to say, the fifth image combines the characteristics of the second image and the first image. In this way, the image quality captured by the electronic device can be improved.
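As a toy illustration of the fusion step described above, the sharp second image can be placed into the matching region of the fourth image. A real fusion algorithm would blend detail rather than paste pixels; this sketch (images as nested lists, all names invented) only shows how the overlapping viewfinder ranges line up.

```python
# Sketch: fuse the sharp telephoto (second) image into the matching central
# region of the upscaled main (fourth) image. Pasting stands in for a real
# blending algorithm, which the patent does not specify.

def fuse_region(base, patch, top, left):
    """Copy `patch` into `base` at (top, left); returns a new image."""
    out = [row[:] for row in base]
    for r, patch_row in enumerate(patch):
        for c, px in enumerate(patch_row):
            out[top + r][left + c] = px
    return out

base = [[0] * 8 for _ in range(8)]      # relatively clear overall fourth image
patch = [[1] * 4 for _ in range(4)]     # locally sharp telephoto second image
fused = fuse_region(base, patch, 2, 2)  # fourth viewfinder range starts at (2, 2)
assert fused[2][2] == 1 and fused[0][0] == 0
```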
  • the electronic device can process the third image to obtain a sixth image.
  • the sixth image includes a second area image
  • the resolution of the second area image in the sixth image is the same as that of the first image
  • the viewfinder range of the second area image relative to the third camera is the fifth viewfinder range.
  • the third viewfinder range includes a fifth viewfinder range
  • the fifth viewfinder range overlaps with the first viewfinder range.
  • the electronic device may perform image fusion on the fifth image and the sixth image to obtain a seventh image.
  • the electronic device can perform image fusion on the fifth image and the sixth image.
  • the fifth image has the characteristics of relatively clear overall image and clear partial image (ie, the image of a distant object), and the sixth image has a larger viewfinder range.
  • the electronic device performs image fusion on the fifth image and the sixth image, and can combine the characteristics of the fifth image and the sixth image to obtain a seventh image with a larger viewfinder range, higher overall image definition, and higher local image definition. That is to say, the seventh image combines the characteristics of the first image, the second image, and the third image. In this way, the image quality captured by the electronic device can be improved.
  • the electronic device processes the first image to obtain the fourth image, including: the electronic device may perform super-resolution reconstruction on the first image to obtain the fourth image.
  • the resolution of the first image can be expanded, that is, the resolution of the fourth image is greater than the resolution of the first image. In this way, it can be ensured that the resolution of the first region image in the obtained fourth image is the same as that of the second image. Furthermore, it can be ensured that the electronic device fuses the fourth image and the second image to obtain images with different image characteristics.
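The patent leaves the super-resolution method open; as a minimal stand-in (an assumption, not the patented technique), nearest-neighbor upscaling shows how the first image's resolution can be expanded by an integer factor so that the first-region crop matches the second image's resolution.

```python
# Sketch: expand an image's resolution by an integer factor using
# nearest-neighbor replication. A real super-resolution reconstruction
# would synthesize detail; this only illustrates the resolution change.

def upscale(img, factor):
    """Return `img` (nested lists) enlarged `factor` times in each dimension."""
    out = []
    for row in img:
        wide = []
        for px in row:
            wide.extend([px] * factor)       # repeat each pixel horizontally
        out.extend([wide[:] for _ in range(factor)])  # repeat each row vertically
    return out

small = [[1, 2], [3, 4]]
big = upscale(small, 2)
assert len(big) == 4 and big[0] == [1, 1, 2, 2]
```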
  • the electronic device processes the third image to obtain the sixth image, including: the electronic device may perform super-resolution reconstruction on the third image to obtain the sixth image.
  • the resolution of the third image can be expanded, that is, the resolution of the sixth image is greater than the resolution of the third image. In this way, it can be ensured that the resolution of the second region image in the obtained sixth image is the same as that of the first image. Furthermore, it can be ensured that the electronic device fuses the sixth image and the fifth image to obtain images with different image characteristics.
  • the method further includes: the electronic device may receive a second operation, where the second operation is used to trigger the electronic device to display the seventh image.
  • the electronic device may display a first interface, and the first interface is used to play a dynamic picture in which the seventh image is centered on the third area image and automatically zoomed.
  • the seventh image includes the third area image
  • the viewfinder range of the third area image relative to the third camera is the sixth viewfinder range
  • the third viewfinder range includes the sixth viewfinder range
  • the sixth viewfinder range overlaps with the second viewfinder range.
  • the seventh image may be centered on the third area image, and the seventh image may be automatically scaled from large to small according to the scaling factor.
  • the maximum zoom ratio of the seventh image is the zoom ratio of the second image
  • the minimum zoom ratio of the seventh image is the zoom ratio of the third image.
  • the display of the seventh image by the electronic device in a dynamic mode can enable the user to watch the dynamic image and increase the fun of viewing the image.
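The automatic zoom playback described above can be sketched as a sequence of zoom ratios swept from the second image's ratio down to the third image's ratio, where each ratio then maps to a centered crop of the seventh image. The function below is illustrative only; the example ratios are assumptions.

```python
# Sketch: generate the zoom ratios for the dynamic picture, sweeping from the
# maximum ratio (the second image's, e.g. 4x) down to the minimum ratio (the
# third image's, e.g. 0.5x for a wide-angle camera -- an assumed value).

def zoom_sequence(max_zoom, min_zoom, steps):
    """Evenly spaced zoom ratios from max_zoom down to min_zoom, inclusive."""
    if steps == 1:
        return [max_zoom]
    delta = (max_zoom - min_zoom) / (steps - 1)
    return [max_zoom - i * delta for i in range(steps)]

ratios = zoom_sequence(4.0, 0.5, 8)
assert ratios[0] == 4.0 and ratios[-1] == 0.5
```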
  • the first interface also includes a first speed option and a first magnification option; wherein the first speed option is used to adjust the playback speed of the dynamic picture, and the first magnification option is used to adjust the maximum zoom ratio of the seventh image in the first interface.
  • the method further includes: the electronic device may adjust the playback speed of the dynamic picture in response to the user's adjustment operation on the first speed option.
  • the electronic device may adjust the maximum zoom ratio of the seventh image in response to the user's adjustment operation on the first ratio option.
  • the first speed option may instruct the electronic device to automatically zoom the seventh image at 100 pixels/s or 20% FOV/s.
  • the zoom ratio of the second image is 4X
  • the maximum zoom ratio of the seventh image may be 4X.
  • the user can adjust the zooming speed and the maximum magnification of the image, which improves the user experience.
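As a hedged arithmetic sketch of the speed option: at a rate such as 20% FOV/s, the sweep from maximum to minimum zoom takes a fixed duration, from which the per-frame step at a given frame rate follows. The concrete numbers below (4x telephoto, 0.5x wide-angle, 30 fps) are assumptions for illustration, not values fixed by the patent.

```python
# Sketch: derive playback duration and per-frame zoom step from a speed
# expressed as a fraction of the zoom-ratio span per second ("% FOV/s").

def playback_params(max_zoom, min_zoom, fov_per_sec, fps):
    span = max_zoom - min_zoom
    duration = 1.0 / fov_per_sec   # seconds to sweep 100% of the span
    frames = int(duration * fps)
    step = span / frames           # zoom-ratio change per frame
    return duration, frames, step

# At 20% FOV/s the full sweep takes 5 seconds; at 30 fps that is 150 frames.
duration, frames, step = playback_params(4.0, 0.5, 0.20, 30)
assert duration == 5.0 and frames == 150
```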
  • the first interface further includes a manual play button, and the manual play button is used to trigger the electronic device to display the second interface.
  • the second interface includes the seventh image and a preset magnification option, and the preset magnification option is used to set the zoom magnification of the seventh image.
  • the electronic device can switch from the mode of dynamically playing the seventh image to the mode of manually playing the seventh image. In this way, the user can flexibly select a mode for displaying the seventh image, which improves user experience.
  • the method further includes: the electronic device may receive a second operation, where the second operation is used to trigger the electronic device to display the seventh image.
  • the electronic device may display a second interface, the second interface includes the seventh image and a preset magnification option, and the preset magnification option is used to set the zoom magnification of the seventh image.
  • the method further includes: the electronic device may receive the zoom ratio set by the user in the preset ratio option, and display the seventh image according to the zoom ratio set by the user.
  • the electronic device can display the image zoomed by the preset magnification. That is to say, the seventh image may display an area characterized by a different image (eg, the first image, the second image or the third image). In this way, the user can see the characteristics of different images in one image, which improves the user experience.
  • the second interface further includes an automatic play button, and the automatic play button is used to trigger the electronic device to display the first interface.
  • the first interface is used to play the dynamic picture in which the seventh image is centered on the third area image and automatically zoomed; the seventh image includes the third area image, and the viewfinder range of the third area image relative to the third camera is the sixth viewfinder range
  • the third viewfinder range includes the sixth viewfinder range, and the sixth viewfinder range overlaps with the second viewfinder range.
  • the electronic device can switch from the mode of manually playing the seventh image to the mode of dynamically playing the seventh image. In this way, the user can flexibly select a mode for displaying the seventh image, which improves user experience.
  • the first interface includes a format conversion button, and the format conversion button is used to trigger the electronic device to convert the file format of the seventh image.
  • the method further includes: the electronic device receives a third operation acting on the format conversion button, and generates a first file; wherein the first file is a video file in which the seventh image is centered on the third area image and automatically zoomed, or the first file is a Graphics Interchange Format (GIF) image in which the seventh image is centered on the third area image and automatically zoomed.
  • the first file enables any electronic device to play back the automatic zooming of the seventh image.
  • other electronic devices can display the automatically zoomed seventh image, which improves user experience.
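The format-conversion step can be pictured as rendering the zoom animation into a list of frames, which a library such as Pillow or imageio could then encode as a GIF or video file; the library choice and every name below are assumptions for illustration, not from the patent.

```python
# Sketch: render the automatic zoom as a list of frames, each a centered
# crop of the seventh image at one zoom ratio. Encoding the frames as a
# GIF/video is left to an external library and not shown here.

def render_frames(img, ratios):
    """For each zoom ratio (>= 1), crop the centered region of `img`."""
    h, w = len(img), len(img[0])
    frames = []
    for z in ratios:
        ch, cw = max(1, int(h / z)), max(1, int(w / z))
        top, left = (h - ch) // 2, (w - cw) // 2
        frames.append([row[left:left + cw] for row in img[top:top + ch]])
    return frames

img = [[r * 8 + c for c in range(8)] for r in range(8)]
frames = render_frames(img, [4.0, 2.0, 1.0])
assert len(frames) == 3 and len(frames[-1]) == 8  # last frame is the full image
```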
  • the first camera is a main camera
  • the second camera is a telephoto camera
  • the third camera is a wide-angle camera.
  • the first image is the main image.
  • the second camera is a telephoto camera
  • the second image is a telephoto image.
  • the third camera is a wide-angle camera, and the third image is a wide-angle image.
  • the seventh image obtained by the electronic device has multiple characteristics of a larger viewfinder range, higher overall image definition, and higher local image definition. That is to say, the seventh image synthesizes the characteristics of the main image, the telephoto image and the wide-angle image. In this way, the image quality captured by the electronic device can be improved.
  • the user only needs one operation to obtain an image with the characteristics of multiple images through the electronic device, which simplifies the shooting process and improves the user experience.
  • the present application provides a method for image fusion, which can be applied to an electronic device, where the electronic device includes a first camera, a second camera, a third camera, and a fourth camera; the field of view of the first camera is larger than that of the second camera
  • the field of view of the third camera is larger than that of the first camera
  • the field of view of the fourth camera is the same as that of the first camera.
  • the electronic device detects the first operation.
  • the electronic device may collect a first image through the first camera, a second image through the second camera, a third image through the third camera, and an eighth image through the fourth camera.
  • the viewfinder range in which the first camera collects the first image is the first viewfinder range
  • the viewfinder range in which the second camera collects the second image is the second viewfinder range
  • the viewfinder range in which the third camera collects the third image is the third viewfinder range
  • the third viewing range is larger than the first viewing range
  • the first viewing range is larger than the second viewing range
  • the viewing range of the eighth image captured by the fourth camera is the same as the first viewing range.
  • the electronic device may perform image fusion on the first image and the eighth image to obtain a ninth image.
  • image fusion can improve image quality.
  • the electronic device performs image fusion on the first image and the eighth image, and the image quality of the obtained ninth image will be higher than that of the first image (or the eighth image). In this way, the image quality captured by the electronic device can be improved.
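The fusion algorithm for the two same-viewfinder images is not specified in the text; a minimal illustrative stand-in is per-pixel averaging, which suppresses noise when both frames capture the same scene. All names here are invented.

```python
# Sketch: fuse the first and eighth images (same viewfinder range) by
# per-pixel averaging, a simple proxy for the unspecified fusion step
# that yields the higher-quality ninth image.

def average_fuse(img_a, img_b):
    """Per-pixel mean of two equally sized images (nested lists)."""
    return [[(a + b) / 2 for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

first = [[10, 20], [30, 40]]
eighth = [[12, 18], [30, 44]]
ninth = average_fuse(first, eighth)
assert ninth == [[11.0, 19.0], [30.0, 42.0]]
```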
  • the electronic device can process the ninth image to obtain the fourth image.
  • the fourth image includes the first region image
  • the resolution of the first region image in the fourth image is the same as the resolution of the second image
  • the viewfinder range of the first region image relative to the first camera is the fourth viewfinder range.
  • the first viewfinder range includes the fourth viewfinder range
  • the fourth viewfinder range overlaps with the second viewfinder range.
  • the electronic device may perform image fusion on the second image and the fourth image to obtain a fifth image.
  • since the ninth image is obtained through image fusion, its image quality is relatively high; therefore, the image quality of the fourth image is also high.
  • the resolution of the first area image in the fourth image is the same as that of the second image, and the viewfinder range of the first area image overlaps with the second viewfinder range. Therefore, the electronic device can perform image fusion on the second image and the fourth image.
  • the second image has the characteristic of clear local images (i.e., images of distant objects), and the fourth image has the characteristic of a relatively clear overall image.
  • the electronic device performs image fusion on the second image and the fourth image, and can synthesize the characteristics of the second image and the fourth image to obtain a fifth image with higher overall image definition and higher local image definition. That is to say, the fifth image combines the characteristics of the second image and the first image. In this way, the image quality captured by the electronic device can be improved.
  • the electronic device can process the third image to obtain a sixth image.
  • the sixth image includes a second area image
  • the resolution of the second area image in the sixth image is the same as that of the first image
  • the viewfinder range of the second area image relative to the third camera is the fifth viewfinder range.
  • the third viewfinder range includes a fifth viewfinder range
  • the fifth viewfinder range overlaps with the first viewfinder range.
  • the electronic device may perform image fusion on the fifth image and the sixth image to obtain a seventh image.
  • the electronic device can perform image fusion on the fifth image and the sixth image.
  • the fifth image has the characteristics of relatively clear overall image and clear partial image (ie, the image of a distant object), and the sixth image has a larger viewfinder range.
  • the electronic device performs image fusion on the fifth image and the sixth image, and can combine the characteristics of the fifth image and the sixth image to obtain a seventh image with a larger viewfinder range, higher overall image definition, and higher local image definition. That is to say, the seventh image combines the characteristics of the first image, the second image, and the third image. In this way, the image quality captured by the electronic device can be improved.
  • the electronic device processes the ninth image to obtain the fourth image, including: the electronic device may perform super-resolution reconstruction on the ninth image to obtain the fourth image.
  • the resolution of the ninth image can be expanded, that is, the resolution of the fourth image is greater than the resolution of the ninth image. In this way, it can be ensured that the resolution of the first region image in the obtained fourth image is the same as that of the second image. Furthermore, it can be ensured that the electronic device fuses the fourth image and the second image to obtain images with different image characteristics.
  • the electronic device processes the third image to obtain the sixth image, including: the electronic device may perform super-resolution reconstruction on the third image to obtain the sixth image.
  • the resolution of the third image can be expanded, that is, the resolution of the sixth image is greater than the resolution of the third image. In this way, it can be ensured that the resolution of the second region image in the obtained sixth image is the same as that of the first image. Furthermore, it can be ensured that the electronic device fuses the sixth image and the fifth image to obtain images with different image characteristics.
  • the method further includes: the electronic device may receive a second operation, where the second operation is used to trigger the electronic device to display the seventh image.
  • the electronic device may display a first interface, and the first interface is used to play a dynamic picture in which the seventh image is centered on the third area image and automatically zoomed.
  • the seventh image includes the third area image
  • the viewfinder range of the third area image relative to the third camera is the sixth viewfinder range
  • the third viewfinder range includes the sixth viewfinder range
  • the sixth viewfinder range overlaps with the second viewfinder range.
  • the seventh image may be centered on the third area image, and the seventh image may be automatically scaled from large to small according to the scaling factor.
  • the maximum zoom ratio of the seventh image is the zoom ratio of the second image
  • the minimum zoom ratio of the seventh image is the zoom ratio of the third image.
  • the display of the seventh image by the electronic device in a dynamic mode can enable the user to watch the dynamic image and increase the fun of viewing the image.
  • the first interface also includes a first speed option and a first magnification option; wherein the first speed option is used to adjust the playback speed of the dynamic picture, and the first magnification option is used to adjust the maximum zoom ratio of the seventh image in the first interface.
  • the method further includes: the electronic device may adjust the playback speed of the dynamic picture in response to the user's adjustment operation on the first speed option.
  • the electronic device may adjust the maximum zoom ratio of the seventh image in response to the user's adjustment operation on the first ratio option.
  • the first speed option may instruct the electronic device to automatically zoom the seventh image at 100 pixels/s or 20% FOV/s.
  • the zoom ratio of the second image is 4X
  • the maximum zoom ratio of the seventh image may be 4X.
  • the user can adjust the zooming speed and the maximum magnification of the image, which improves the user experience.
  • the first interface further includes a manual play button, and the manual play button is used to trigger the electronic device to display the second interface.
  • the second interface includes the seventh image and a preset magnification option, and the preset magnification option is used to set the zoom magnification of the seventh image.
  • the electronic device can switch from the mode of dynamically playing the seventh image to the mode of manually playing the seventh image. In this way, the user can flexibly select a mode for displaying the seventh image, which improves user experience.
  • the method further includes: the electronic device may receive a second operation, where the second operation is used to trigger the electronic device to display the seventh image.
  • the electronic device may display a second interface, the second interface includes the seventh image and a preset magnification option, and the preset magnification option is used to set the zoom magnification of the seventh image.
  • the method further includes: the electronic device may receive the zoom ratio set by the user in the preset ratio option, and display the seventh image according to the zoom ratio set by the user.
  • the electronic device can display the image zoomed by the preset magnification. That is to say, the seventh image may display an area characterized by a different image (eg, the first image, the second image or the third image). In this way, the user can see the characteristics of different images in one image, which improves the user experience.
  • the second interface further includes an automatic play button, and the automatic play button is used to trigger the electronic device to display the first interface.
  • the first interface is used to play the dynamic picture in which the seventh image is centered on the third area image and automatically zoomed; the seventh image includes the third area image, and the viewfinder range of the third area image relative to the third camera is the sixth viewfinder range
  • the third viewfinder range includes the sixth viewfinder range, and the sixth viewfinder range overlaps with the second viewfinder range.
  • the electronic device can switch from the mode of manually playing the seventh image to the mode of dynamically playing the seventh image. In this way, the user can flexibly select a mode for displaying the seventh image, which improves user experience.
  • the first interface includes a format conversion button, and the format conversion button is used to trigger the electronic device to convert the file format of the seventh image.
  • the method further includes: the electronic device receives a third operation acting on the format conversion button, and generates a first file; wherein the first file is a video file in which the seventh image is centered on the third area image and automatically zoomed, or the first file is a Graphics Interchange Format (GIF) image in which the seventh image is centered on the third area image and automatically zoomed.
  • the first file can automatically zoom the seventh image in any electronic device.
  • other electronic devices can display the automatically zoomed seventh image, which improves user experience.
  • the first camera is a main camera
  • the second camera is a telephoto camera
  • the third camera is a wide-angle camera.
  • the first image is the main image.
  • the second camera is a telephoto camera
  • the second image is a telephoto image.
  • the third camera is a wide-angle camera, and the third image is a wide-angle image.
  • the seventh image obtained by the electronic device has multiple characteristics of a larger viewfinder range, higher overall image definition, and higher local image definition. That is to say, the seventh image synthesizes the characteristics of the main image, the telephoto image and the wide-angle image. In this way, the image quality captured by the electronic device can be improved.
  • the user only needs one operation to obtain an image with the characteristics of multiple images through the electronic device, which simplifies the shooting process and improves the user experience.
  • the present application provides an electronic device, which includes: a memory, a display screen, and a processor, where the memory and the display screen are coupled to the processor; the memory is used to store computer program code, and the computer program code includes computer instructions; when the computer instructions are executed by the processor, the processor is configured to detect a first operation.
  • the above-mentioned processor is further configured to, in response to the first operation, collect the first image through the first camera, collect the second image through the second camera, and collect the third image through the third camera; wherein, the first camera collects the first image
  • the viewfinder range is the first viewfinder range
  • the viewfinder range for the second camera to capture the second image is the second viewfinder range
  • the viewfinder range for the third camera to capture the third image is the third viewfinder range
  • the third viewfinder range is larger than the first viewfinder range
  • the first viewfinder range is larger than the second viewfinder range.
  • the above-mentioned processor is further configured to process the first image to obtain a fourth image; wherein, the fourth image includes the first region image, and the resolution of the first region image in the fourth image is the same as that of the second image,
  • the viewfinder range of the first area image relative to the first camera is a fourth viewfinder range, the first viewfinder range includes the fourth viewfinder range, and the fourth viewfinder range overlaps with the second viewfinder range.
  • the above-mentioned processor is further configured to perform image fusion on the second image and the fourth image to obtain the fifth image.
  • the above-mentioned processor is further configured to process the third image to obtain a sixth image; wherein, the sixth image includes a second area image, and the resolution of the second area image in the sixth image is the same as that of the first image,
  • the viewfinder range of the second area image relative to the third camera is the fifth viewfinder range
  • the third viewfinder range includes the fifth viewfinder range
  • the fifth viewfinder range overlaps with the first viewfinder range.
  • the above-mentioned processor is further configured to perform image fusion on the fifth image and the sixth image to obtain the seventh image.
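The nesting of viewfinder ranges described above (the fourth range inside the first, overlapping the second) can be pictured with a simple co-located pinhole model: the part of the wider camera's frame that overlaps the narrower camera's viewfinder is a centered crop scaled by the ratio of equivalent focal lengths. The sketch below assumes that model; the function name and focal-length parameters are invented for illustration, and the application itself does not specify this computation.

```python
def overlap_region(width, height, f_wide, f_narrow):
    """Return the centered crop (left, top, right, bottom) of an image taken
    at equivalent focal length f_wide whose field of view matches a camera
    at equivalent focal length f_narrow.

    Assumes both cameras are co-located and aligned, so the linear size of
    the overlapping region scales with the focal-length ratio.
    """
    assert f_narrow >= f_wide, "the second camera must have the longer focal length"
    scale = f_wide / f_narrow          # fraction of the frame that overlaps
    crop_w, crop_h = width * scale, height * scale
    left = (width - crop_w) / 2
    top = (height - crop_h) / 2
    return (left, top, left + crop_w, top + crop_h)
```

Under this model, a 27 mm main camera and an 81 mm telephoto camera would overlap in the central one-third (linearly) of the main frame, which is the region a pipeline like the one claimed would upscale before fusing with the telephoto image.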
  • the processor when the computer instructions are executed by the processor, the processor is specifically configured to perform super-resolution reconstruction on the first image to obtain the fourth image.
  • the processor when the computer instructions are executed by the processor, the processor is further specifically configured to perform super-resolution reconstruction on the third image to obtain the sixth image.
  • the processor when the computer instruction is executed by the processor, the processor is further configured to receive a second operation, and the second operation is used to trigger the display screen to display the seventh image .
  • the display screen displays a first interface, and the first interface is used to play a dynamic picture in which the seventh image is centered on the third area image and automatically zoomed; wherein, the seventh image includes the third area image
  • the viewfinder range of the third area image relative to the third camera is the sixth viewfinder range
  • the third viewfinder range includes the sixth viewfinder range
  • the sixth viewfinder range overlaps with the second viewfinder range.
  • the first interface also includes a first speed option and a first magnification option; wherein, the first speed option is used to adjust the playback speed of the dynamic picture, and the first magnification option is used to adjust the maximum zoom ratio of the seventh image in the first interface.
  • the processor is further configured to adjust the playback speed of the dynamic picture in response to the user's adjustment operation on the first speed option.
  • the above-mentioned processor is further configured to adjust the maximum zoom ratio of the seventh image in response to the user's adjustment operation on the first magnification option.
  • the first interface further includes a manual play button, and the manual play button is used to trigger the display screen to display the second interface.
  • the second interface includes the seventh image and a preset magnification option, and the preset magnification option is used to set the zoom magnification of the seventh image.
  • the processor when the computer instruction is executed by the processor, the processor is further configured to receive a second operation, and the second operation is used to trigger the display screen to display the seventh image .
  • the display screen is also used to display a second interface in response to the second operation; wherein, the second interface includes the seventh image and a preset magnification option, and the preset magnification option is used to set the zoom magnification of the seventh image.
  • the above-mentioned display screen is also used to receive the zoom ratio set by the user in the preset ratio option, and display the seventh image according to the zoom ratio set by the user.
  • the second interface further includes an automatic play button, and the automatic play button is used to trigger the display screen to display the first interface.
  • the first interface is used to play the dynamic picture in which the seventh image is centered on the third area image and automatically zoomed; the seventh image includes the third area image, and the viewfinder range of the third area image relative to the third camera is the sixth viewfinder range
  • the third viewfinder range includes the sixth viewfinder range, and the sixth viewfinder range overlaps with the second viewfinder range.
  • the first interface includes a format conversion button, and the format conversion button is used to trigger the processor to convert the file format of the seventh image.
  • the processor is further configured to receive a third operation acting on the format conversion button to generate a first file; wherein, the first file is a video file in which the seventh image is automatically zoomed centered on the third area image; or, the first file is a GIF image in which the seventh image is automatically zoomed centered on the third area image.
  • the first camera is a main camera
  • the second camera is a telephoto camera
  • the third camera is a wide-angle camera.
  • the present application provides an electronic device, the electronic device includes: a memory, a display screen, and a processor, the memory, the display screen and the processor are coupled; the memory is used to store computer program codes, and the computer program codes include computer instructions ; when the computer instruction is executed by the processor, the processor is configured to detect a first operation.
  • the above-mentioned processor is further configured to, in response to the first operation, collect the first image through the first camera, collect the second image through the second camera, collect the third image through the third camera, and collect the eighth image through the fourth camera; wherein , the viewfinder range in which the first camera collects the first image is the first viewfinder range, the viewfinder range in which the second camera collects the second image is the second viewfinder range, the viewfinder range in which the third camera collects the third image is the third viewfinder range, and The third viewfinder range is larger than the first viewfinder range, the first viewfinder range is larger than the second viewfinder range, and the viewfinder range of the eighth image captured by the fourth camera is the same as the first viewfinder range.
  • the processor above is further configured to perform image fusion on the first image and the eighth image to obtain the ninth image.
  • the above-mentioned processor is further configured to process the ninth image to obtain a fourth image; wherein, the fourth image includes the first region image, and the resolution of the first region image in the fourth image is the same as that of the second image,
  • the viewfinder range of the first area image relative to the first camera is a fourth viewfinder range, the first viewfinder range includes the fourth viewfinder range, and the fourth viewfinder range overlaps with the second viewfinder range.
  • the above-mentioned processor is further configured to perform image fusion on the second image and the fourth image to obtain the fifth image.
  • the above-mentioned processor is further configured to process the third image to obtain a sixth image; wherein, the sixth image includes a second area image, and the resolution of the second area image in the sixth image is the same as that of the first image,
  • the viewfinder range of the second area image relative to the third camera is the fifth viewfinder range
  • the third viewfinder range includes the fifth viewfinder range
  • the fifth viewfinder range overlaps with the first viewfinder range.
  • the above-mentioned processor is further configured to perform image fusion on the fifth image and the sixth image to obtain the seventh image.
  • the processor when the computer instructions are executed by the processor, the processor is specifically configured to perform super-resolution reconstruction on the ninth image to obtain the fourth image.
  • the processor when the computer instructions are executed by the processor, the processor is further specifically configured to perform super-resolution reconstruction on the third image to obtain the sixth image.
  • the processor when the computer instruction is executed by the processor, the processor is further configured to receive a second operation, and the second operation is used to trigger the display screen to display the seventh image .
  • the display screen displays a first interface, and the first interface is used to play a dynamic picture in which the seventh image is centered on the third area image and automatically zoomed; wherein, the seventh image includes the third area image
  • the viewfinder range of the third area image relative to the third camera is the sixth viewfinder range
  • the third viewfinder range includes the sixth viewfinder range
  • the sixth viewfinder range overlaps with the second viewfinder range.
  • the first interface also includes a first speed option and a first magnification option; wherein, the first speed option is used to adjust the playback speed of the dynamic picture, and the first magnification option is used to adjust the maximum zoom ratio of the seventh image in the first interface.
  • the processor is further configured to adjust the playback speed of the dynamic picture in response to the user's adjustment operation on the first speed option.
  • the above-mentioned processor is further configured to adjust the maximum zoom ratio of the seventh image in response to the user's adjustment operation on the first magnification option.
  • the first interface further includes a manual play button, and the manual play button is used to trigger the display screen to display the second interface.
  • the second interface includes the seventh image and a preset magnification option, and the preset magnification option is used to set the zoom magnification of the seventh image.
  • the processor when the computer instruction is executed by the processor, the processor is further configured to receive a second operation, and the second operation is used to trigger the display screen to display the seventh image .
  • the display screen is also used to display a second interface in response to the second operation; wherein, the second interface includes the seventh image and a preset magnification option, and the preset magnification option is used to set the zoom magnification of the seventh image.
  • the above-mentioned display screen is also used to receive the zoom ratio set by the user in the preset ratio option, and display the seventh image according to the zoom ratio set by the user.
  • the second interface further includes an automatic play button, and the automatic play button is used to trigger the display screen to display the first interface.
  • the first interface is used to play the dynamic picture in which the seventh image is centered on the third area image and automatically zoomed; the seventh image includes the third area image, and the viewfinder range of the third area image relative to the third camera is the sixth viewfinder range
  • the third viewfinder range includes the sixth viewfinder range, and the sixth viewfinder range overlaps with the second viewfinder range.
  • the first interface includes a format conversion button, and the format conversion button is used to trigger the processor to convert the file format of the seventh image.
  • the processor is further configured to receive a third operation acting on the format conversion button to generate a first file; wherein, the first file is a video file in which the seventh image is automatically zoomed centered on the third area image; or, the first file is a GIF image in which the seventh image is automatically zoomed centered on the third area image.
  • the first camera is a main camera
  • the second camera is a telephoto camera
  • the third camera is a wide-angle camera.
  • the present application provides an electronic device, which includes: a memory and a processor, the memory being coupled to the processor; the memory is used to store computer program codes, and the computer program codes include computer instructions; when the computer instructions are executed by the processor, the electronic device is caused to execute the method described in the first aspect and any possible design manner thereof.
  • the present application provides an electronic device, which includes: a memory and a processor, the memory being coupled to the processor; the memory is used to store computer program codes, and the computer program codes include computer instructions; when the computer instructions are executed by the processor, the electronic device is caused to execute the method described in the second aspect and any possible design manner thereof.
  • the present application provides a chip system, which is applied to an electronic device.
  • the system-on-a-chip includes one or more interface circuits and one or more processors.
  • the interface circuit and the processor are interconnected by wires.
  • the interface circuit is for receiving a signal from the memory of the electronic device and sending the signal to the processor, the signal including computer instructions stored in the memory.
  • the processor executes the computer instructions
  • the electronic device executes the method described in the first aspect or the second aspect and any possible design manner thereof.
  • the present application provides a computer-readable storage medium, the computer-readable storage medium includes computer instructions, and when the computer instructions are run on an electronic device, the electronic device is caused to execute the method described in the first aspect or the second aspect and any possible design manner thereof.
  • the present application provides a computer program product; when the computer program product runs on a computer, the computer is caused to execute the method described in the first aspect or the second aspect and any possible design manner thereof.
  • for the beneficial effects that can be achieved by the electronic device described in the second aspect and any possible design manner thereof, the electronic device described in the third aspect and any possible design manner thereof, the electronic device described in the fourth aspect, the electronic device described in the fifth aspect, the chip system described in the sixth aspect, the computer-readable storage medium described in the seventh aspect, and the computer program product described in the eighth aspect, reference may be made to the beneficial effects of the first aspect or the second aspect and any possible design manner thereof; details are not repeated here.
  • FIG. 1 is a schematic diagram of an example of an image preview interface provided by an embodiment of the present application
  • FIG. 2A is a schematic diagram of a hardware structure of an electronic device provided in an embodiment of the present application.
  • FIG. 2B is a schematic diagram of an example of a viewing range provided by an embodiment of the present application.
  • FIG. 3 is a flowchart of an image fusion method provided in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an example of an image provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an example of another image provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an example of another image provided by the embodiment of the present application.
  • FIG. 7 is a flowchart of an image fusion method provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an example of another image provided by the embodiment of the present application.
  • FIG. 9 is a schematic diagram of an example of another image preview interface provided by the embodiment of the present application.
  • FIG. 10 is a schematic diagram of an example of a magnification setting interface provided by the embodiment of the present application.
  • FIG. 11 is a schematic diagram of an example of an image display interface provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of an example of another image display interface provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of an example of another image display interface provided by the embodiment of the present application.
  • FIG. 14 is a schematic diagram of the structural composition of a chip system provided by an embodiment of the present application.
  • A/B can be understood as A or B.
  • first and second are used for descriptive purposes only, and cannot be understood as indicating or implying relative importance or implicitly specifying the quantity of indicated technical features. Thus, a feature defined as “first” and “second” may explicitly or implicitly include one or more of these features. In the description of this embodiment, unless otherwise specified, "plurality” means two or more.
  • words such as “exemplary” or “for example” are used as examples, illustrations or illustrations. Any embodiment or design described herein as “exemplary” or “for example” is not to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words “exemplary” or “such as” is intended to present concepts in a specific manner.
  • Super-resolution reconstruction refers to using one or a group of low-quality, low-resolution images to generate a high-quality, high-resolution image.
  • the super-resolution reconstruction may include a reconstruction-based method or a learning-based method.
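As a point of reference for what both families of methods improve upon, the sketch below implements plain bilinear interpolation on a grayscale image stored as a list of lists. This is only the interpolation baseline, not super-resolution itself: reconstruction-based methods add priors or multi-frame constraints on top of such upscaling, and learning-based methods train a model to predict the missing detail. The function name and integer scale factor are illustrative assumptions.

```python
def upscale_bilinear(img, factor):
    """Bilinearly upscale a 2D grayscale image (list of lists) by an
    integer `factor`. Returns a new (h*factor) x (w*factor) image."""
    h, w = len(img), len(img[0])
    H, W = h * factor, w * factor
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            # Map the output pixel back into source coordinates.
            sy = min(y / factor, h - 1)
            sx = min(x / factor, w - 1)
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = sy - y0, sx - x0
            # Blend the four surrounding source pixels.
            out[y][x] = (img[y0][x0] * (1 - dy) * (1 - dx)
                         + img[y0][x1] * (1 - dy) * dx
                         + img[y1][x0] * dy * (1 - dx)
                         + img[y1][x1] * dy * dx)
    return out
```

A real pipeline would typically use an optimized library routine or a trained network rather than this pure-Python loop, but the geometry (matching the upscaled image's pixel grid to the higher-resolution target) is the same.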
  • the electronic device can transmit the original image to the ISP module.
  • the RAW format is an unprocessed and uncompressed format.
  • the ISP module can analyze the raw image to check the density gap between adjacent pixels in the image. Then, the ISP module can use the preset adjustment algorithm in the ISP module to properly process the original image, so as to improve the quality of the image captured by the camera.
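A toy illustration of that idea follows, assuming grayscale nested-list images and an invented threshold; a real ISP pipeline (demosaicing, denoising, tone mapping, white balance) is far more involved than this, and the function names are not from the source.

```python
def max_adjacent_gap(img):
    """Scan a 2D grayscale image and return the largest difference between
    horizontally or vertically adjacent pixels -- a rough stand-in for the
    'density gap' analysis the ISP is described as performing."""
    h, w = len(img), len(img[0])
    gap = 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                gap = max(gap, abs(img[y][x] - img[y][x + 1]))
            if y + 1 < h:
                gap = max(gap, abs(img[y][x] - img[y + 1][x]))
    return gap

def denoise_if_noisy(img, threshold=64):
    """If the largest gap exceeds `threshold`, apply a 3x3 box blur as a
    placeholder for the ISP's preset adjustment algorithm; otherwise
    return the image unchanged."""
    if max_adjacent_gap(img) <= threshold:
        return img
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out
```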
  • taking a mobile phone as an example, multiple cameras can be installed in the mobile phone, such as a main camera, a telephoto camera, and a wide-angle camera.
  • the mobile phone can use different cameras to capture images in the same shooting scene to obtain images with different characteristics.
  • the main image is the image collected by the electronic device through the main camera
  • the telephoto image is the image collected by the electronic device through the telephoto camera
  • the wide-angle image is the image collected by the electronic device through the wide-angle camera.
  • the electronic device when the shooting mode of the electronic device is a normal shooting mode, the electronic device may capture a main image 101 through a main camera. Afterwards, in response to the user's switching operation, the electronic device may switch the shooting mode to the telephoto shooting mode as shown in (b) in FIG. 1 . Afterwards, the electronic device may collect a telephoto image 102 through a telephoto camera. Then, in response to the user's switching operation, the electronic device may switch the shooting mode to the wide-angle shooting mode as shown in (c) in FIG. 1 . Afterwards, the electronic device may collect a wide-angle image 103 through the wide-angle camera.
  • the electronic device needs to respond to multiple operations of the user before it can capture multiple images with different characteristics.
  • the shooting process of the electronic device is relatively cumbersome, which affects the user's shooting experience.
  • an embodiment of the present application provides an image fusion method.
  • the electronic device may respond to the user's photographing operation and collect the main image, the telephoto image and the wide-angle image respectively through the main camera, the telephoto camera and the wide-angle camera at the same time.
  • the electronic device can perform super-resolution reconstruction on the main image and the wide-angle image, and perform image fusion on the telephoto image, the super-resolution reconstructed main image, and the super-resolution reconstructed wide-angle image to obtain the target image.
  • the image collected by the electronic device through the camera may be: an image obtained after the ISP module processes the original image collected by the camera. That is to say, the image collected by the electronic device through the main camera is: the image after the ISP module processes the original image collected by the main camera; the image collected by the electronic device through the telephoto camera is: the image after the ISP module processes the original image collected by the telephoto camera; the image collected by the electronic device through the wide-angle camera is: the image after the ISP module processes the original image collected by the wide-angle camera.
  • the image captured by the electronic device through the camera in the embodiment of the present application may be an original image (that is, an image in RAW format), which is not limited in the embodiment of the present application.
  • the image in RAW format is an image that records the original information of the camera sensor and some metadata (ISO setting, shutter speed, aperture value, white balance, etc.) generated when the camera captures the image, and the image is not processed by the ISP module.
  • ISO is the abbreviation of International Organization for Standardization.
  • the aforementioned "the electronic device collects the main image, the telephoto image and the wide-angle image through the main camera, the telephoto camera and the wide-angle camera respectively at the same time" means: the moment when the main camera collects the main image (such as the first moment), the moment when the telephoto camera collects the telephoto image (such as the second moment) and the moment when the wide-angle camera collects the wide-angle image (such as the third moment) are the same; or, the time difference between the first moment and the second moment, the time difference between the first moment and the third moment, and the time difference between the second moment and the third moment are small (for example, less than 1 millisecond, 0.5 milliseconds, or 2 milliseconds).
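The "at the same time" condition above reduces to checking that all pairwise capture-time differences stay within a tolerance, which is equivalent to checking the spread between the earliest and latest timestamps. A minimal sketch (the function name and millisecond units are assumptions for illustration):

```python
def captured_simultaneously(timestamps_ms, tolerance_ms=1.0):
    """Return True if every pairwise difference between the capture
    timestamps (in milliseconds) is within `tolerance_ms`.

    The max-min spread bounds all pairwise differences, so one
    comparison suffices instead of checking every pair."""
    return max(timestamps_ms) - min(timestamps_ms) <= tolerance_ms
```

For instance, captures at 100.0 ms, 100.4 ms, and 100.9 ms would count as simultaneous under a 1 ms tolerance, while a frame arriving 2 ms late would not.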
  • the fused target image has the characteristics of main image, telephoto image and wide-angle image.
  • the target image has the feature of overall clarity as in the main image, the feature of local clarity as in the telephoto image, and the feature of a larger viewing angle as in the wide-angle image.
  • the electronic device only needs to respond to one photographing operation of the user to obtain the target image. In this way, the process of shooting multiple images with different characteristics can be simplified, and the user's shooting experience can be improved.
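The fusion step itself can be pictured with a deliberately simplified stand-in: paste a higher-detail patch (e.g., derived from the telephoto image) onto the center of the wider base image, averaging the two in the overlap. Real pipelines use registration plus pyramid or weighted blending; the averaging, grayscale nested-list representation, and function name below are illustrative assumptions only.

```python
def fuse_center(base, patch):
    """Fuse `patch` (a higher-detail image of the central region) into the
    center of `base` by averaging the two in the overlapping region.
    Returns a new image; `base` is left unmodified."""
    bh, bw = len(base), len(base[0])
    ph, pw = len(patch), len(patch[0])
    top, left = (bh - ph) // 2, (bw - pw) // 2
    out = [row[:] for row in base]       # copy so base is untouched
    for y in range(ph):
        for x in range(pw):
            out[top + y][left + x] = (base[top + y][left + x] + patch[y][x]) / 2
    return out
```

Chaining two such calls (wide-angle base with the main-image region, then that result's center with the telephoto region) mirrors the structure of the claimed pipeline, though not its actual fusion algorithm.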
  • the electronic device in the embodiment of the present application may be a tablet computer, a mobile phone, a desktop computer, a laptop, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular telephone, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, a vehicle-mounted device, or other devices; the embodiment of the present application does not specially limit the specific form of the electronic device.
  • the image fusion method provided in the present application may be executed by an image fusion device, and the execution device may be the electronic device shown in FIG. 2A. Alternatively, the execution device may be a central processing unit (CPU) of the electronic device, or a control module in the electronic device for fusing images.
  • an image fusion method performed by an electronic device is used as an example to illustrate the image fusion method provided in the embodiment of the present application.
  • this application takes the mobile phone 200 shown in FIG. 2A as an example to introduce the electronic device provided by this application.
  • the mobile phone 200 shown in FIG. 2A is only an example of an electronic device, and the mobile phone 200 may have more or fewer components than those shown in the figure, may combine two or more components, or may with different part configurations.
  • the various components shown in FIG. 2A may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
  • the mobile phone 200 may include: a processor 210, an external memory interface 220, an internal memory 221, a universal serial bus (USB) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an earphone jack 270D, a sensor module 280, a button 290, a motor 291, an indicator 292, a camera 293, a display screen 294, a subscriber identification module (SIM) card interface 295, etc.
  • the above-mentioned sensor module 280 may include sensors such as pressure sensor, gyroscope sensor, air pressure sensor, magnetic sensor, acceleration sensor, distance sensor, proximity light sensor, fingerprint sensor, temperature sensor, touch sensor, ambient light sensor and bone conduction sensor.
  • the processor 210 may include one or more processing units, for example: the processor 210 may include a memory, a video codec, a baseband processor, and/or a neural-network processing unit (NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the handset 200 .
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 210 for storing instructions and data.
  • the memory in processor 210 is a cache memory.
  • processor 210 may include one or more interfaces.
  • the interface may include an integrated circuit (inter-integrated circuit, I2C) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, etc.
  • the interface connection relationship between the modules shown in this embodiment is only a schematic illustration, and does not constitute a structural limitation of the mobile phone 200 .
  • the mobile phone 200 may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the charging management module 240 is configured to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger. While the charging management module 240 is charging the battery 242 , it can also supply power to the electronic device through the power management module 241 .
  • the power management module 241 is used for connecting the battery 242 , the charging management module 240 and the processor 210 .
  • the power management module 241 receives the input from the battery 242 and/or the charging management module 240 to provide power for the processor 210 , internal memory 221 , external memory, display screen 294 , camera 293 , and wireless communication module 260 .
  • the power management module 241 and the charging management module 240 can also be set in the same device.
  • the wireless communication function of the mobile phone 200 can be realized by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, the modem processor and the baseband processor.
  • the antenna 1 of the mobile phone 200 is coupled to the mobile communication module 250, and the antenna 2 is coupled to the wireless communication module 260, so that the mobile phone 200 can communicate with the network and other devices through wireless communication technology.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in handset 200 can be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • the mobile communication module 250 can provide wireless communication solutions including 2G/3G/4G/5G applied on the mobile phone 200 .
  • the mobile communication module 250 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module 250 can also amplify the signal modulated by the modem processor, convert it into electromagnetic wave and radiate it through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 250 may be set in the processor 210 .
  • the wireless communication module 260 can provide wireless communication solutions applied on the mobile phone 200, including wireless local area networks (WLAN) (such as a wireless fidelity (Wi-Fi) network), frequency modulation (FM), infrared (IR), and the like.
  • the wireless communication module 260 may be one or more devices integrating at least one communication processing module.
  • the mobile phone 200 realizes the display function through the GPU, the display screen 294, and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 294 and the application processor.
  • the display screen 294 is used to display images, videos and the like.
  • the display screen 294 includes a display panel.
  • the display screen 294 may be used to display a gallery interface, a shooting interface, and the like.
  • the mobile phone 200 can realize the shooting function through ISP, camera 293 , video codec, GPU, display screen 294 and application processor.
  • the ISP is used for processing the data fed back by the camera 293 .
  • Camera 293 is used to capture still images or video.
  • the mobile phone 200 may include 1 or N cameras 293, where N is a positive integer greater than 1.
  • the N cameras 293 may include: a main camera, a telephoto camera, and a wide-angle camera.
  • the N cameras 293 may also include: at least one camera such as an infrared camera, a depth camera, or a black and white camera. The following briefly introduces the characteristics (that is, advantages and disadvantages) and applicable scenarios of the above-mentioned cameras.
  • the main camera has the characteristics of large light input, high resolution, and moderate field of view.
  • the main camera is generally used as the default camera of an electronic device (such as a mobile phone). That is to say, the electronic device (such as a mobile phone) can start the main camera by default in response to the user's operation of starting the "camera” application, and display the image captured by the main camera on the preview interface.
  • the telephoto camera has a longer focal length, which is suitable for shooting objects that are far away from the mobile phone (ie, distant objects). However, the amount of light entering the telephoto camera is small. Using the telephoto camera to capture images in low-light scenes may affect the image quality due to insufficient light input. Moreover, the telephoto camera has a small field of view, and is not suitable for capturing images of larger scenes, that is, it is not suitable for capturing larger objects (such as buildings or landscapes, etc.).
  • The wide-angle camera has a wider field of view and is good for capturing larger subjects such as landscapes.
  • the focal length of the wide-angle camera is relatively short, and when the wide-angle camera shoots an object at a short distance, the object in the captured wide-angle image is likely to be distorted (for example, the object in the image becomes wider and flatter than the original object).
  • the viewing angles in the embodiments of the present application include horizontal viewing angles and vertical viewing angles.
  • the external memory interface 220 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the mobile phone 200.
  • the external memory card communicates with the processor 210 through the external memory interface 220 to implement a data storage function. Such as saving music, video and other files in the external memory card.
  • the internal memory 221 may be used to store computer-executable program codes including instructions.
  • the processor 210 executes various functional applications and data processing of the mobile phone 200 by executing instructions stored in the internal memory 221 .
  • the processor 210 may execute instructions stored in the internal memory 221, and the internal memory 221 may include a program storage area and a data storage area.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
  • the storage data area can store data (such as audio data, phone book, etc.) created during the use of the mobile phone 200 .
  • the internal memory 221 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
  • the mobile phone 200 can realize the audio function through the audio module 270, the speaker 270A, the receiver 270B, the microphone 270C, the earphone interface 270D, and the application processor. Such as music playback, recording, etc.
  • the keys 290 include a power key, a volume key and the like.
  • the key 290 may be a mechanical key. It can also be a touch button.
  • the motor 291 can generate a vibrating reminder.
  • the motor 291 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
  • the indicator 292 can be an indicator light, which can be used to indicate the charging status, the change of the battery capacity, and also can be used to indicate messages, missed calls, notifications and so on.
  • the SIM card interface 295 is used for connecting a SIM card.
  • the SIM card can be inserted into the SIM card interface 295 or pulled out from the SIM card interface 295 to realize contact and separation with the mobile phone 200 .
  • the mobile phone 200 can support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • SIM card interface 295 can support Nano SIM card, Micro SIM card, SIM card etc.
  • the mobile phone 200 may also include a flashlight, a micro projection device, a near field communication (NFC) device, etc., which will not be repeated here.
  • the structure shown in this embodiment does not constitute a specific limitation on the mobile phone 200 .
  • the mobile phone 200 may include more or fewer components than shown, or combine some components, or separate some components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the electronic device in response to a user's camera operation, may capture a first image through a first camera, capture a second image through a second camera, and capture a third image through a third camera.
  • the viewing angle of the third camera is larger than the viewing angle of the first camera
  • the viewing angle of the first camera is larger than the viewing angle of the second camera.
  • the angle of view of the third camera being greater than the angle of view of the first camera means that the horizontal angle of view of the third camera is greater than that of the first camera, and/or the vertical angle of view of the third camera is greater than the vertical angle of view of the first camera.
  • the angle of view of the first camera is greater than that of the second camera, which means that the horizontal angle of view of the first camera is greater than that of the second camera, and/or the vertical angle of view of the first camera is greater than that of the second camera.
  • the third camera is a wide-angle camera
  • the first camera is a main camera
  • the second camera is a telephoto camera.
  • the viewfinder range in which the first camera collects the first image is the first viewfinder range
  • the viewfinder range in which the second camera collects the second image is the second viewfinder range
  • the viewfinder range in which the third camera collects the third image is the third viewfinder range.
  • the third viewing range is larger than the first viewing range
  • the first viewing range is larger than the second viewing range.
  • the electronic device may process the first image and the third image, and perform image fusion on the second image, the processed first image, and the processed third image to obtain the target image. In this way, the target image can have the characteristics of the first image, the second image and the third image.
  • the viewing range of the image captured by the camera refers to the area range that the camera can capture.
  • the main camera may capture an image corresponding to the area 203 in the area 202 . That is to say, the areas in the area 202 except the area 203 are not within the viewing range of the image captured by the main camera.
  • the viewfinder range of the image corresponds to the viewfinder range of the image captured by the camera.
  • the viewfinder range of the first image may indicate the viewfinder range (that is, the first viewfinder range) in which the first camera captures the first image.
  • the viewfinder range of the second image may indicate the viewfinder range (that is, the second viewfinder range) in which the second camera captures the second image.
  • the first camera is a main camera
  • the second camera is a telephoto camera
  • the third camera is a wide-angle camera as an example to introduce the embodiments of the present application.
  • the first image is the main image.
  • the second camera is a telephoto camera
  • the second image is a telephoto image.
  • the third camera is a wide-angle camera
  • the third image is a wide-angle image.
  • the image fusion method provided in the embodiment of the present application may include S301-S306.
  • the electronic device detects a first operation.
  • the first operation is a user operation, such as a click operation; optionally, it is a single click of the camera button by the user, and it is used to trigger the electronic device to start the camera function. That is to say, the electronic device can take a picture after receiving a single operation from the user.
  • the electronic device collects a first image through a first camera, collects a second image through a second camera, and collects a third image through a third camera.
  • the third viewing range is larger than the first viewing range, and the first viewing range is larger than the second viewing range.
  • the first camera is a main camera
  • the second camera is a telephoto camera
  • the third camera is a wide-angle camera.
  • the zoom ratio of the second image (also called ZOOM ratio) is greater than that of the first image, and the zoom ratio of the first image is larger than that of the third image. That is, among the first image, the second image and the third image, the zoom ratio of the second image is the largest, the zoom ratio of the first image is moderate, and the zoom ratio of the third image is the smallest.
  • the zoom ratio of the image is 1X
  • the zoom ratio of the telephoto image 102 is 4X
  • the zoom ratio of the wide-angle image 103 is 0.5X.
  • the viewing range of the main image 101 shown in (a) in Figure 1 is larger than the viewing range of the telephoto image 102 shown in (b) in Figure 1, and the viewfinder range of the main image 101 shown in (a) in Figure 1 is smaller than the viewfinder range of the wide-angle image 103 shown in (c) of FIG. 1 .
  • the first image includes a fourth area image
  • the viewing range of the fourth area image is the same as the second viewfinder range.
  • the third image includes a fifth area image
  • the viewing range of the fifth area image is the same as the viewfinder range of the first image.
  • the fourth area image 401 included in the main image 101 shown in (a) in FIG. 4 is the same as the viewing range of the telephoto image 102 shown in (b) in FIG. 4 ;
  • the fifth area image 402 included in the wide-angle image 103 shown in (c) of FIG. 4 is the same as the viewing range of the main image 101 .
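The mapping from zoom ratios to a matching sub-region, as in the fourth and fifth area images above, can be sketched as follows. The function name and the 4000×3000 frame size are illustrative, not taken from the patent:

```python
def sub_region(width, height, outer_zoom, inner_zoom):
    """Return the centered crop (x, y, w, h) of an outer image whose
    viewfinder matches that of a camera with a larger zoom ratio.

    The linear size of the matching region shrinks by the ratio of the
    two zoom magnifications.
    """
    ratio = outer_zoom / inner_zoom          # e.g. 1X main vs 4X telephoto -> 1/4
    w, h = int(width * ratio), int(height * ratio)
    x, y = (width - w) // 2, (height - h) // 2
    return x, y, w, h

# The fourth area image: the part of a 4000x3000 main image (1X) whose
# viewfinder range matches the telephoto image (4X).
print(sub_region(4000, 3000, 1.0, 4.0))   # (1500, 1125, 1000, 750)
```

The same helper with the wide-angle ratios (0.5X outer, 1X inner) would locate the fifth area image inside the wide-angle frame.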
  • the electronic device in response to the first operation, may capture the first image through the main camera, the second image through the telephoto camera, and the third image through the wide-angle camera at the same time.
  • the aforementioned capture at the same time means: the moment when the main camera collects the first image (which may be referred to as the first moment), the moment when the telephoto camera collects the second image (which may be referred to as the second moment), and the moment when the wide-angle camera collects the third image (the third moment) are the same; or, the time difference between the first moment and the second moment, the time difference between the first moment and the third moment, and the time difference between the second moment and the third moment are all small (for example, each time difference is less than 1 millisecond).
  • the embodiment of the present application does not limit the order in which the main camera captures the first image, the telephoto camera captures the second image, and the wide-angle camera captures the third image.
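A minimal sketch of the "same moment" tolerance described above; the 1 ms figure is the example given in the text, and the function name is ours:

```python
def nearly_simultaneous(timestamps_ms, max_skew_ms=1.0):
    """True if every pair of capture timestamps differs by less than
    max_skew_ms milliseconds (the example tolerance given above)."""
    return max(timestamps_ms) - min(timestamps_ms) < max_skew_ms

# first, second and third moments, in milliseconds
print(nearly_simultaneous([100.0, 100.4, 100.9]))   # True
print(nearly_simultaneous([100.0, 100.4, 101.2]))   # False
```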
  • the electronic device performs super-resolution reconstruction on the first image to obtain a fourth image.
  • the fourth image includes the first area image
  • the viewfinder range of the first area image relative to the first camera is the fourth viewfinder range
  • the first viewfinder range includes the fourth viewfinder range
  • the fourth viewfinder range coincides with the second viewfinder range. That is to say, the first camera, using the fourth viewfinder range, obtains an image whose viewfinder range coincides with that of the first area image.
  • the first viewing range is the viewing range of the image 501 shown in (a) in FIG. 5
  • the fourth viewing range is the viewing range of the first region image 502 shown in (a) in FIG. 5 .
  • the resolution of the first area image is the same as that of the second image.
  • the image 501 includes a first area image 502, and the resolution of the first area image 502 is the same as that of the second image 102 shown in (b) in FIG. 5 .
  • the resolution of the first region image is 20 million pixels
  • the resolution of the second image is 20 million pixels.
  • two images having the same resolution means that the number of pixels in the horizontal direction in image A is the same as the number of pixels in the horizontal direction in image B, and the number of pixels in the vertical direction in image A is the same as the number of pixels in the vertical direction in image B.
  • for example, the resolution of the first area image is 5000×4000, and the resolution of the second image is also 5000×4000.
  • the electronic device may adjust the resolution of the first image through a bilinear interpolation method to obtain the fourth image.
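Bilinear interpolation as mentioned above can be sketched in pure Python; this is the generic textbook formulation, not the patent's implementation:

```python
def bilinear_resize(img, new_h, new_w):
    """Resize a 2-D grayscale image (list of lists) by bilinear interpolation."""
    old_h, old_w = len(img), len(img[0])
    out = [[0.0] * new_w for _ in range(new_h)]
    for i in range(new_h):
        for j in range(new_w):
            # map the output pixel back into source coordinates
            y = i * (old_h - 1) / (new_h - 1) if new_h > 1 else 0.0
            x = j * (old_w - 1) / (new_w - 1) if new_w > 1 else 0.0
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, old_h - 1), min(x0 + 1, old_w - 1)
            dy, dx = y - y0, x - x0
            # weighted average of the four surrounding source pixels
            out[i][j] = (img[y0][x0] * (1 - dy) * (1 - dx)
                         + img[y0][x1] * (1 - dy) * dx
                         + img[y1][x0] * dy * (1 - dx)
                         + img[y1][x1] * dy * dx)
    return out

small = [[0.0, 100.0],
         [100.0, 200.0]]
big = bilinear_resize(small, 3, 3)
# the new center pixel is the average of the four corners
print(big[1][1])  # 100.0
```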
  • the electronic device may use the relationship between the zoom magnification of the first image and the zoom magnification of the second image to perform super-resolution reconstruction on the first image to obtain the fourth image.
  • the electronic device may perform super-resolution reconstruction at a first magnification on the first image to obtain the fourth image.
  • the first magnification is a ratio between the zoom magnification of the first image and the zoom magnification of the second image.
  • the resolution of the fourth image is greater than that of the first image.
  • the electronic device may calculate the resolution of the fourth image according to the zoom ratio of the first image, the zoom ratio of the second image, and the resolution of the second image.
  • the electronic device may calculate the resolution of the fourth image by using Formula 1: M = (p/q)² × b.
  • M is the resolution of the fourth image
  • p is the scaling factor of the second image
  • q is the scaling factor of the first image
  • b is the resolution of the second image. (p ⁇ q) is the first magnification.
  • according to Formula 1, the electronic device can determine that the resolution of the image 501 is: M = (4/1)² × 20 million = 320 million pixels. That is, the resolution of the image 501 is 320 million pixels.
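The Formula 1 computation can be checked in a few lines of Python; the function name is illustrative:

```python
def reconstructed_resolution(p, q, b):
    """Formula 1: M = (p / q)**2 * b, where p and q are the zoom ratios of
    the second and first images and b is the second image's resolution."""
    return (p / q) ** 2 * b

# telephoto at 4X, main at 1X, telephoto resolution 20 megapixels
M = reconstructed_resolution(4, 1, 20_000_000)
print(M)  # 320000000.0 -> 320 megapixels
```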
  • the electronic device performs image fusion on the second image and the fourth image to obtain a fifth image.
  • the fifth image includes a seventh area image
  • the viewfinder range of the seventh area image is the same as that of the second viewfinder range (or the viewfinder range of the fourth area image).
  • the electronic device may use an image fusion algorithm to fuse the second image and the fourth image to obtain the fifth image.
  • the embodiment of the present application does not limit the image fusion algorithm.
  • the image fusion algorithm may be a high and low frequency information fusion algorithm.
  • the image fusion algorithm may be a multi-scale fusion algorithm.
  • the electronic device may fuse the second image and the fourth image by using a preset model to obtain the fifth image.
  • the preset model may be a visual geometry group network (Visual Geometry Group Network, VGG) model, an inception model, a ResNET model, etc., which is not limited in this embodiment of the present application.
  • the electronic device can perform image fusion on the second image and the fourth image.
  • the second image has a characteristic of a clear partial image (that is, an image of a distant object), and the fourth image has a characteristic of a relatively clear overall image.
  • the electronic device performs image fusion on the second image and the fourth image, and can synthesize the characteristics of the second image and the fourth image to obtain a fifth image with higher overall image definition and higher local image definition. That is to say, the fifth image combines the features of the main image and the telephoto image. In this way, the image quality captured by the electronic device can be improved.
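As a toy stand-in for the fusion step above (the high/low-frequency or multi-scale algorithms the text mentions are considerably more elaborate), a fixed-weight blend of the detail image into the matching centered region of the larger image:

```python
def fuse_center(base, detail, weight=0.5):
    """Blend a high-detail image into the matching centered region of a
    larger base image with a fixed weight. Both images are 2-D lists;
    this is only a simplified illustration of region-aligned fusion."""
    bh, bw = len(base), len(base[0])
    dh, dw = len(detail), len(detail[0])
    y0, x0 = (bh - dh) // 2, (bw - dw) // 2   # detail aligns with the center
    fused = [row[:] for row in base]          # leave the input untouched
    for i in range(dh):
        for j in range(dw):
            fused[y0 + i][x0 + j] = ((1 - weight) * base[y0 + i][x0 + j]
                                     + weight * detail[i][j])
    return fused

base = [[10.0] * 4 for _ in range(4)]     # stands in for the fourth image
detail = [[30.0] * 2 for _ in range(2)]   # stands in for the second image
fused = fuse_center(base, detail)
print(fused[1][1], fused[0][0])  # 20.0 10.0
```

Pixels outside the detail region keep the base image's values; pixels inside combine both sources, mirroring how the fifth image keeps the main image's overall view while gaining the telephoto's local sharpness.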
  • the electronic device performs super-resolution reconstruction on the third image to obtain a sixth image.
  • the sixth image includes the second area image
  • the viewfinder range of the second area image relative to the third camera is the fifth viewfinder range
  • the third viewfinder range includes the fifth viewfinder range
  • the viewfinder range of the second area image (that is, the fifth viewfinder range) coincides with the viewfinder range of the first image (or the fifth image).
  • the fifth viewing range is the viewing range of the second region image 602 .
  • the resolution of the second area image is the same as that of the fifth image.
  • the sixth image 601 shown in (a) in FIG. 6 includes a second area image 602, and the resolution of the second area image 602 is the same as that of the fifth image 501 shown in (b) in FIG. 6 .
  • the resolution of the second area image is 320 million pixels
  • the resolution of the fifth image is 320 million pixels.
  • the electronic device may adjust the resolution of the third image by using a bilinear interpolation method to obtain the sixth image.
  • the electronic device may use the relationship between the zoom magnification of the first image and the zoom magnification of the third image to perform super-resolution reconstruction on the third image to obtain the sixth image.
  • the electronic device may perform super-resolution reconstruction of the second magnification on the third image to obtain the sixth image.
  • the second magnification is a ratio between the zoom magnification of the first image and the zoom magnification of the third image.
  • the resolution of the sixth image is greater than that of the third image.
  • the electronic device may calculate the resolution of the sixth image according to the zoom ratio of the first image, the zoom ratio of the third image, and the resolution of the fifth image.
  • the electronic device may obtain the resolution of the sixth image through formula two.
  • N = (k/j)² × c (Formula 2).
  • N is the resolution of the sixth image
  • k is the scaling factor of the first image
  • j is the scaling factor of the third image
  • c is the resolution of the fifth image. (k ⁇ j) is the second magnification.
  • according to Formula 2, the electronic device can determine that the resolution of the sixth image 601 is: N = (1/0.5)² × 320 million = 1.28 billion pixels. That is, the resolution of the sixth image 601 is 1.28 billion pixels.
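Formulas 1 and 2 have the same shape, so the cascaded resolutions can be verified together. The zoom ratios (4X telephoto, 1X main, 0.5X wide-angle) and the 20 MP telephoto resolution are the example values used above:

```python
def formula(p_ratio, q_ratio, base_res):
    """Formulas 1 and 2 share the same shape: (p / q)**2 * base_res."""
    return (p_ratio / q_ratio) ** 2 * base_res

# Step 1 (Formula 1): match the 1X main image to the 4X, 20 MP telephoto image
M = formula(4, 1, 20_000_000)    # 320 MP, resolution of the fourth/fifth image
# Step 2 (Formula 2): match the 0.5X wide-angle image to the fused 1X image
N = formula(1, 0.5, M)           # 1.28 GP, resolution of the sixth image
print(M, N)  # 320000000.0 1280000000.0
```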
  • the order in which the electronic device obtains the fifth image and the sixth image is not limited.
  • the electronic device may execute S305 first, and then execute S304.
  • the electronic device may execute S304 and S305 at the same time.
  • the electronic device performs image fusion on the fifth image and the sixth image to obtain a seventh image.
  • the seventh image includes an eighth area image
  • the viewfinder range of the eighth area image is the same as the first viewfinder range (or the viewfinder range of the fifth area image in the third image).
  • the electronic device can perform image fusion on the fifth image and the sixth image.
  • the fifth image has the characteristics of a relatively clear overall image and a clear partial image (that is, an image of a distant object), and the sixth image has a larger viewfinder range.
  • the electronic device performs image fusion of the fifth image and the sixth image, and can synthesize the characteristics of the fifth image and the sixth image to obtain a seventh image with a larger viewfinder range, higher overall image definition, and higher local image definition. That is to say, the seventh image synthesizes the characteristics of the main image, the telephoto image and the wide-angle image. In this way, the image quality captured by the electronic device can be improved.
  • the electronic device receives a user's camera operation, and can capture a first image, a second image, and a third image through the main camera, the telephoto camera, and the wide-angle camera, respectively.
  • the electronic device may perform image processing on the first image, the second image, and the third image to obtain a seventh image, and the seventh image has characteristics of images captured by different cameras. That is to say, the user only needs one operation to obtain an image with the characteristics of multiple images through the electronic device, which simplifies the shooting process and improves the user experience.
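The S302-S306 flow can be sketched as an orchestration function. Every name below is illustrative, and the reconstruction and fusion callables are stand-ins, not the patent's reference implementation:

```python
def spotlight_capture(main, tele, wide, reconstruct, fuse):
    """Sketch of S303-S306: the caller supplies the three captured images
    and two callables (super-resolution reconstruction and image fusion)."""
    fourth = reconstruct(main)      # S303: main image -> fourth image
    fifth = fuse(tele, fourth)      # S304: telephoto + fourth -> fifth image
    sixth = reconstruct(wide)       # S305: wide-angle image -> sixth image
    seventh = fuse(fifth, sixth)    # S306: fifth + sixth -> seventh image
    return seventh

# toy stand-ins that just record which steps ran, in which order
result = spotlight_capture("main", "tele", "wide",
                           reconstruct=lambda img: f"SR({img})",
                           fuse=lambda a, b: f"fuse({a},{b})")
print(result)  # fuse(fuse(tele,SR(main)),SR(wide))
```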
  • when an electronic device captures an image, it may be affected by certain factors, resulting in poor quality of the captured image. For example, when the user takes a photo, if the user's hand shakes, the picture taken by the electronic device may be blurred. For another example, when the light is poor, the picture taken by the electronic device may be noisy.
  • the electronic device may assist in capturing the image through an auxiliary camera (that is, the fourth camera).
  • the auxiliary camera can be an infrared camera.
  • the auxiliary camera may be a black and white camera.
  • the auxiliary camera is an infrared camera as an example, that is, the electronic device assists in capturing images through the infrared camera.
  • the above-mentioned infrared camera can sense not only visible light, but also infrared light.
  • the above-mentioned infrared light may be 890 nanometer (nm)-990 nm infrared light. That is, the infrared camera can perceive infrared light with a wavelength of 890nm-990nm.
  • different infrared cameras can sense different infrared light (that is, the wavelength of the infrared light).
  • the above-mentioned visible light camera may also be a camera of a common wavelength band, where the common wavelength band is a wavelength band of visible light.
  • the intensity of visible light is low.
  • the main camera cannot perceive the light or the perceived light is weak, so it cannot capture clear images.
  • the infrared camera can sense the infrared light emitted by people or animals with temperature within the viewing range, so it can collect images of people or animals.
  • when the electronic device uses the main camera to collect images in a dark scene, in order to avoid affecting the image quality due to weak visible light, the infrared camera can be used as an auxiliary camera to assist the main camera, thereby improving the quality of the image captured by the main camera.
  • the image fusion method may include S701-S707.
  • the electronic device detects a first operation.
  • the electronic device collects a first image through a first camera, collects a second image through a second camera, collects a third image through a third camera, and collects an eighth image through a fourth camera.
  • the viewing range of the eighth image is the same as the first viewing range.
  • the viewfinder range of the eighth image being the same as the first viewfinder range means: the similarity between the viewfinder range of the eighth image and the first viewfinder range is 100%, or the similarity between the viewfinder range of the eighth image and the first viewfinder range is greater than a preset similarity threshold (for example, 99%, 95%, or 90%).
  • the area 801 is the first viewing range
  • the area 802 is the viewing range of the eighth image
  • the area 801 and the area 802 are the same.
  • the area 803 is the first viewing range
  • the area 804 is the viewing range of the eighth image
  • the area 803 and the area 804 are not completely the same.
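One plausible way to realize the similarity comparison above is intersection-over-union of the two viewfinder rectangles; the patent does not specify the metric, so this is only an assumed concretization:

```python
def viewfinder_similarity(a, b):
    """Similarity of two viewfinder rectangles (x, y, w, h), computed as
    intersection area over union area (1.0 means identical ranges)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))   # overlap width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))   # overlap height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union

# identical ranges (like areas 801 and 802) vs. slightly shifted ranges
# (like areas 803 and 804)
same = viewfinder_similarity((0, 0, 100, 80), (0, 0, 100, 80))
shifted = viewfinder_similarity((0, 0, 100, 80), (10, 0, 100, 80))
print(same, shifted >= 0.8)  # 1.0 True
```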
  • the electronic device may capture the eighth image through the auxiliary camera, the first image through the main camera, the second image through the telephoto camera, and the third image through the wide-angle camera at the same time.
  • the electronic device performs image fusion on the first image and the eighth image to obtain a ninth image.
  • the viewing range of the ninth image is the same as that of the first viewing range.
  • the ninth image includes a sixth area image, and the viewing range of the sixth area image is the same as the second viewfinder range.
  • the embodiment of the present application does not limit the resolutions of the first image and the eighth image.
  • the resolution of the first image may be greater than the resolution of the eighth image.
  • the resolution of the first image may be equal to the resolution of the eighth image.
  • the resolution of the first image may be smaller than the resolution of the eighth image.
  • the electronic device may directly perform image fusion on the first image and the eighth image to obtain the ninth image.
  • the electronic device may enlarge the eighth image to have the same resolution as the first image.
  • the electronic device may enlarge the first image so that the resolution of the enlarged first image is the same as that of the eighth image. Afterwards, the electronic device may perform image fusion on the enlarged first image and the eighth image to obtain a ninth image.
  • image fusion can improve image quality.
  • the electronic device performs image fusion on the first image and the eighth image, and the image quality of the obtained ninth image will be higher than that of the first image (or the eighth image). In this way, the image quality captured by the electronic device can be further improved in a scene with dark light or a moving object.
  • the electronic device performs super-resolution reconstruction on the ninth image to obtain a fourth image.
  • the image quality of the ninth image fused by the electronic device is higher than the image quality of the first image (or the eighth image).
  • the electronic device performs super-resolution reconstruction on the ninth image, and the image quality of the obtained fourth image is also higher than that of the first image (or the eighth image). In this way, the image quality captured by the electronic device can be improved.
  • the electronic device performs image fusion on the fourth image and the second image to obtain a fifth image.
  • the electronic device performs super-resolution reconstruction on the third image to obtain a sixth image.
  • the electronic device performs image fusion on the sixth image and the fifth image to obtain a seventh image.
  • the electronic device receives the user's camera operation, and can respectively capture the first image, the second image, the third image and the eighth image through the main camera, the telephoto camera, the wide-angle camera and the auxiliary camera.
  • the electronic device may perform image processing on the first image, the second image, the third image, and the eighth image to obtain a seventh image, and the seventh image has characteristics of images captured by different cameras. That is to say, the user only needs one operation to obtain an image with the characteristics of multiple images through the electronic device, which simplifies the shooting process and improves the user experience.
  • the electronic device in order to facilitate the user to obtain an image having the characteristics of the main image, the telephoto image and the wide-angle image (ie, the seventh image) through the electronic device, the electronic device can be set to a preset shooting mode.
  • the shooting mode of the electronic device is in the preset shooting mode, the electronic device can obtain a seventh image by shooting through cameras such as a main camera, a telephoto camera, and a wide-angle camera.
  • the electronic device may start a photographing application and display an image preview interface. Afterwards, the electronic device may receive a shooting mode switching operation of the user, and the shooting mode switching operation is used to trigger the electronic device to change the shooting mode. In response to the shooting mode switching operation, the electronic device may switch the shooting mode to a preset shooting mode. Exemplarily, as shown in (a) of FIG. 9 , after the electronic device starts the shooting application, the electronic device may display an image preview interface 901 .
  • the image preview interface 901 includes a viewfinder frame 902, a camera conversion key 903, a shooting key 904, an album key 905, a preview image 906, a flash option 907, a "video” option, a "photograph” option, and a "more” option.
  • the electronic device may display a function option box 908 as shown in (b) in FIG. 9 on the upper layer of the image preview interface 901. The function option box 908 includes: the logo of the "professional" mode, the logo of the "time-lapse shooting" mode, the logo of the "panorama" mode, the logo of the "spotlight" mode, and the like.
  • the "spotlight” mode is a mode used to capture the seventh image. That is to say, when the shooting mode of the electronic device is the “spotlight” mode, the electronic device can obtain the seventh image by shooting with multiple cameras. Afterwards, in response to the user's operation (for example, operation B) on the sign of the "spotlight” mode, the electronic device may switch the shooting mode of the electronic device to the “spotlight” mode (as shown in (c) in FIG. 9 ).
  • the electronic device can receive a photographing operation of the user, and obtain the seventh image by photographing.
  • the electronic device may receive operation C acting on the shooting key 904 as shown in (c) in FIG.
  • the electronic device can start at least one camera to acquire a preview image.
  • the electronic device when the shooting mode of the electronic device is in the preset shooting mode, can start the main camera, the telephoto camera and the wide-angle camera to acquire preview images.
  • the electronic device can also start the main camera, the auxiliary camera, the telephoto camera and the wide-angle camera to obtain preview images.
  • the preview image is the seventh image. That is to say, the preview image is an image after the electronic device processes (ie, S303-S306) the images collected by multiple cameras.
  • the electronic device activates the main camera, the telephoto camera and the wide-angle camera to obtain preview images, and may display the processed image (ie, the seventh image) on the image preview interface.
  • the user can intuitively understand the image effect obtained in the preset shooting mode, which improves the user's shooting experience.
  • the electronic device may start any camera (such as a main camera, a telephoto camera or a wide-angle camera) to capture a preview image.
  • the preview image is an image collected by a camera activated by the electronic device.
  • the electronic device may start the main camera to capture a preview image (the preview image 906 shown in (a) of FIG. 9 ).
  • the electronic device may start the wide-angle camera to capture a preview image (such as the preview image 906 shown in (c) of FIG. 9 ).
  • the electronic device can start a wide-angle camera to collect a preview image. In this way, the user can know the maximum area captured by the electronic device.
  • when the electronic device starts any camera to collect preview images, the electronic device does not need to process the collected images, which reduces the power consumption of the electronic device.
  • the electronic device can improve the image quality of the user-designated area.
  • the electronic device may receive a user's magnification setting operation, and the magnification setting operation is used to trigger the electronic device to set the zooming magnifications of the first image, the second image, and the third image.
  • the electronic device can set a first preset magnification, a second preset magnification, and a third preset magnification, wherein the first preset magnification is the zoom magnification of the first image, the second preset magnification is the zoom magnification of the second image, and the third preset magnification is the zoom magnification of the third image.
  • in response to the user's operation D on "set magnification" 1001, the electronic device may display the magnification setting window 1002 shown in (b) in FIG. 10; alternatively, the electronic device may display the magnification setting window 1002 in response to an operation acting on the logo of the "spotlight" mode shown in (b) of FIG. 9.
  • the magnification setting window 1002 includes: a magnification input box for the main image, a magnification input box for the telephoto image, and a magnification input box for the wide-angle image; where the user can input the zoom magnification of the image in the magnification input box.
  • for example, the zoom magnification of the main image (i.e., the first preset magnification) is 1X, the zoom magnification of the telephoto image is the second preset magnification, and the zoom magnification of the wide-angle image is the third preset magnification.
  • the magnification setting window 1002 may also include: a "Save" button, a "Reset" button and a "Cancel" button.
  • the “Save” button is used to trigger the electronic device to save the zoom ratio of the image input by the user
  • the "Reset” button is used to trigger the electronic device to set the zoom ratio of the image as the default zoom ratio.
  • the default zoom magnification is not limited in the embodiments of the present application (for example, the default zoom magnification of the main image is 1X, the default zoom magnification of the telephoto image is 3.5X, and the default zoom magnification of the wide-angle image is 0.6X).
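The save/reset behavior of the magnification setting window described above can be sketched as follows. This is a minimal illustration, assuming the window simply stores three zoom magnifications; the class and key names are hypothetical, not taken from the actual device software.

```python
# Hypothetical sketch of the logic behind the "Save" and "Reset" buttons
# of the magnification setting window 1002. Defaults follow the example
# values given in the text (1X / 3.5X / 0.6X).

DEFAULTS = {"main": 1.0, "telephoto": 3.5, "wide": 0.6}

class MagnificationSettings:
    def __init__(self):
        # Start from the default zoom magnifications.
        self.ratios = dict(DEFAULTS)

    def save(self, main=None, telephoto=None, wide=None):
        """Persist the zoom magnifications the user typed into the input boxes."""
        for key, value in (("main", main), ("telephoto", telephoto), ("wide", wide)):
            if value is not None:
                self.ratios[key] = value

    def reset(self):
        """Restore the default zoom magnifications (Reset button)."""
        self.ratios = dict(DEFAULTS)

settings = MagnificationSettings()
settings.save(telephoto=4.0)         # user enters 4X for the telephoto image
print(settings.ratios["telephoto"])  # 4.0
settings.reset()
print(settings.ratios["telephoto"])  # 3.5
```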
  • the “Cancel” button is used to trigger the electronic device not to display the magnification setting window 1002 .
  • the electronic device can adjust the display areas of the first image, the second image and the third image. In this way, the electronic device can set display areas with different characteristics in the seventh image according to the intention of the user, which improves user experience.
  • the electronic device may receive a second operation, and the second operation is used to trigger the electronic device to display the seventh image.
  • the electronic device may display the seventh image in a preset mode.
  • the preset mode includes a dynamic mode and a static mode.
  • the dynamic mode is used to instruct the electronic device to automatically zoom the seventh image around the third area image; the seventh image includes the third area image, the viewfinder range of the third area image relative to the third camera is the sixth viewfinder range, the third viewfinder range includes the sixth viewfinder range, and the sixth viewfinder range coincides with the second viewfinder range.
  • the static mode is used to instruct the electronic device to zoom the seventh image after receiving the zoom operation input by the user.
  • the electronic device may display the seventh image in a dynamic mode. Specifically, the electronic device receives the second operation. In response to the user's second operation, the electronic device may display a dynamic image interface (that is, the first interface), and the dynamic image interface is used to play a dynamic picture in which the seventh image is automatically zoomed around the third area image. Specifically, the seventh image may be centered on the third area image, and the seventh image may be automatically scaled from large to small according to the scaling factor.
  • the maximum zoom ratio of the seventh image is the zoom ratio of the second image
  • the minimum zoom ratio of the seventh image is the zoom ratio of the third image.
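The auto-zoom described above, from the maximum magnification (the zoom magnification of the second image) down to the minimum magnification (the zoom magnification of the third image), can be sketched as a descending schedule of magnifications. The 4.0/0.5 endpoints and the step count are illustrative assumptions, not values mandated by the application:

```python
# Illustrative sketch: build the sequence of magnifications used when the
# seventh image is zoomed "from large to small" around the third area image.

def zoom_schedule(max_ratio, min_ratio, steps):
    """Return `steps` magnifications descending linearly from max_ratio to min_ratio."""
    if steps < 2:
        return [max_ratio]
    span = max_ratio - min_ratio
    return [max_ratio - span * i / (steps - 1) for i in range(steps)]

schedule = zoom_schedule(4.0, 0.5, 8)       # e.g. telephoto 4X down to wide 0.5X
print(schedule[0], schedule[-1])            # 4.0 0.5
assert all(a > b for a, b in zip(schedule, schedule[1:]))  # strictly decreasing
```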
  • the electronic device may display a dynamic image interface 1101, the dynamic image interface 1101 includes a dynamic image 1102, and the dynamic image 1102 may be centered on a third area image 1105 , automatically zoom the seventh image from large to small according to the zoom ratio.
  • on the dynamic image interface, the user can see the picture zoom from the image corresponding to (c) in FIG. 4 to the image corresponding to (a) in FIG. 4, and then to the image corresponding to (b) in FIG. 4.
  • the zooming process of the dynamic image can be replayed repeatedly; for example, after the user interface presents the image corresponding to (b) in FIG. 4, it jumps back to the image corresponding to (c) in FIG. 4 and replays the dynamic change process. Alternatively, the process may not repeat: when the picture is zoomed to the image corresponding to (b) in FIG. 4, the zooming stops and the last frame is displayed.
  • the dynamic image can be displayed dynamically in response to the user's finger pressing any position of the dynamic image; when the user's finger leaves the screen, the long press ends and the dynamic display ends. Alternatively, the dynamic display can be triggered in response to the user's finger long-pressing any position of the dynamic image for 2 seconds.
  • the dynamic image interface 1101 further includes a first speed option 1103 .
  • the first speed option 1103 may instruct the electronic device to automatically zoom the seventh image at 100 pixel/s or 20% FOV/s.
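As a rough illustration of how such a speed option could translate into per-frame updates, the sketch below assumes a 30 fps refresh rate; the frame rate is an assumption for illustration, not stated in the document:

```python
# Hedged sketch: converting the first speed option ("100 pixel/s" or
# "20% FOV/s") into per-frame increments at an assumed refresh rate.

FPS = 30  # assumed display refresh rate

def per_frame_pixels(pixels_per_second, fps=FPS):
    """How many pixels the crop boundary moves each frame."""
    return pixels_per_second / fps

def per_frame_fov(fov_per_second, fps=FPS):
    """What fraction of the field of view the zoom changes each frame."""
    return fov_per_second / fps

print(round(per_frame_pixels(100), 2))  # 3.33 pixels per frame
print(round(per_frame_fov(0.20), 4))    # 0.0067 of the FOV per frame
```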
  • the dynamic image interface may also include a first magnification option, and the first magnification option is used to adjust the maximum zoom magnification of the seventh image in the first interface.
  • the maximum magnification of the seventh image is less than or equal to the second preset magnification (that is, the zoom magnification of the second image).
  • the electronic device may adjust the maximum zoom ratio of the seventh image in response to the user's adjustment operation on the first zoom ratio option.
  • the dynamic image interface 1101 also includes a first magnification option 1104 .
  • the maximum zoom ratio of the seventh image may be 4X or 3X.
  • the display of the seventh image by the electronic device in a dynamic mode can enable the user to watch the dynamic image and increase the fun of viewing the image.
  • the user can adjust the zooming speed and the maximum magnification of the image, which improves the user experience.
  • the electronic device may display the seventh image in a static mode.
  • the electronic device may display a static image interface (that is, the second interface), where the static image interface includes a seventh image that cannot be automatically zoomed.
  • the image display interface may also include a preset magnification option, and the preset magnification option is used to set the zoom magnification of the seventh image.
  • the electronic device may receive the zoom ratio set by the user in the preset ratio option, and display the seventh image according to the zoom ratio set by the user.
  • the image display interface 1201 displayed by the electronic device may include a seventh image 1202 and a preset magnification option 1203 .
  • the preset magnification option 1203 may include a first preset magnification (such as 1X), a second preset magnification (such as 4X) and a third preset magnification (such as 0.5X). That is, in response to the user's operation on the preset magnification option 1203, the electronic device may set the zoom magnification of the seventh image to 4X, 1X or 0.5X.
  • the electronic device can display the image zoomed by the preset magnification. That is to say, the seventh image may display an area characterized by a different image (eg, the first image, the second image or the third image). In this way, the user can see the characteristics of different images in one image, which improves the user experience.
  • the electronic device can switch the display mode of the seventh image.
  • when the interface displayed by the electronic device is the first interface, that is, when the electronic device plays a dynamic picture in which the seventh image is automatically zoomed around the third area image, the first interface also includes a manual play button, and the manual play button is used to trigger the electronic device to display the second interface.
  • the electronic device may receive an operation acting on the manual play button, and switch the displayed interface from the first interface to the second interface.
  • the dynamic image interface 1301 displayed by the electronic device may include a manual play button 1302 , for example, the manual play button 1302 may be "dynamic mode".
  • the interface displayed by the electronic device may be switched from a dynamic image interface 1301 to a static image interface 1303 as shown in (b) of FIG. 13 .
  • when the interface displayed by the electronic device is the second interface, the second interface may further include an automatic play button, and the automatic play button is used to trigger the electronic device to display the first interface.
  • the static image interface 1303 includes an autoplay button 1304 , for example, the autoplay button 1304 may be "static mode".
  • the electronic device may receive an operation acting on the automatic play button, and switch the displayed interface from the second interface to the first interface.
  • the electronic device may switch the display mode of the seventh image. In this way, the user can flexibly select a mode for displaying the seventh image, which improves user experience.
  • the electronic device may display conventional images in a conventional display manner.
  • the electronic device may display a telephoto image, a main image, and the like on the screen.
  • the electronic device may also display the above seventh image in a preset mode.
  • in order to facilitate the electronic device in determining the display mode of an image, when the electronic device saves the seventh image, it may add a first identifier to the image information of the seventh image, and the first identifier is used to instruct the electronic device to display the seventh image in the preset mode.
  • the electronic device may determine whether to display the image in a preset mode according to image information of the image. Exemplarily, after receiving the second operation of the user, the electronic device may detect whether the first identifier exists in the image information of the image. If the image information has the first identifier, the electronic device can display the image in a preset mode. If the image information does not have the first identifier, the electronic device may display the image in a conventional display manner.
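A minimal sketch of this identifier check, assuming the image information is a simple key-value store; the key and value names are hypothetical, not from the actual image format:

```python
# Sketch (assumed metadata layout): the first identifier is stored as a key
# in the saved image's information; when the image is opened, the device
# checks for it and chooses the display mode accordingly.

FIRST_IDENTIFIER = "spotlight_v1"  # hypothetical marker value

def save_seventh_image(image_info):
    """Tag the fused (seventh) image so it is later shown in the preset mode."""
    image_info["identifier"] = FIRST_IDENTIFIER
    return image_info

def choose_display_mode(image_info):
    """Return 'preset' for tagged images, 'conventional' otherwise."""
    if image_info.get("identifier") == FIRST_IDENTIFIER:
        return "preset"
    return "conventional"

fused = save_seventh_image({"width": 4096, "height": 3072})
print(choose_display_mode(fused))            # preset
print(choose_display_mode({"width": 4096}))  # conventional
```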
  • adding the first identifier to the seventh image by the electronic device may cause the electronic device to display the seventh image in a preset mode. In this way, the fun of viewing images by the user is increased, and the user experience is improved.
  • the electronic device may share the seventh image with other electronic devices.
  • other electronic devices may display the seventh image in different display manners according to whether they have the ability to recognize the first identifier.
  • the electronic device (for example, the first device) that receives the seventh image may display the seventh image in a preset mode, where the first device has the ability to recognize the first identifier. Specifically, the electronic device may share the seventh image with the first device. After receiving the seventh image, the first device may detect and recognize the first identifier of the seventh image. Then, the first device may display the seventh image in the preset mode. For example, in response to the second operation, the first device may display the seventh image in a dynamic mode.
  • the electronic device (for example, the second device) that receives the seventh image may display the seventh image in a conventional display manner, and the second device does not have the ability to recognize the first identifier. Specifically, the electronic device may share the seventh image with the second device. After receiving the seventh image, the second device cannot recognize the first identifier of the seventh image. Then, the second device may display the seventh image in a conventional display manner. For example, in response to the second operation, the second device may display the seventh image on the screen of the second device.
  • in this case, the second device cannot display the automatically zoomed seventh image, which affects the user experience.
  • the electronic device may convert the file format of the seventh image.
  • the interface displayed by the electronic device is the first interface, that is, the electronic device plays a dynamic picture in which the seventh image is automatically zoomed around the third area image; the first interface may also include a format conversion button, and the format conversion button is used to trigger the electronic device to convert the file format of the seventh image.
  • the electronic device may receive a third operation acting on the format conversion button and generate a first file; the third operation is used to trigger the electronic device to convert the file format of the seventh image, and the first file is a file in which the seventh image is automatically zoomed around the third area image.
  • the electronic device may generate video files, or the electronic device may generate dynamic image files. Afterwards, the electronic device can share the first file with other electronic devices. For example, when the first file is a Graphics Interchange Format (GIF) image, the electronic device can share the GIF image with other electronic devices.
  • GIF Graphics Interchange Format
  • the automatic scaling speed of the seventh image in the first file and the maximum magnification of the seventh image can be set by the first speed option and the first magnification option. This embodiment of the present application does not limit it.
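The generation of the first file can be sketched as computing, for each frame, a crop rectangle centered on the third area image, from the tightest crop (maximum magnification) to the widest (minimum magnification). Encoding the cropped frames into a GIF or video is omitted here; the image size, center point, and magnifications below are illustrative assumptions:

```python
# Illustrative geometry for the auto-zoom animation: one crop box per frame,
# kept centered on the third area image and clamped to the image bounds.

def crop_boxes(img_w, img_h, cx, cy, max_ratio, min_ratio, frames):
    """Crop rectangles from tightest (max_ratio) to widest (min_ratio) zoom."""
    boxes = []
    for i in range(frames):
        t = i / (frames - 1) if frames > 1 else 1.0
        ratio = max_ratio + (min_ratio - max_ratio) * t  # descending magnification
        w, h = img_w / ratio, img_h / ratio
        # Clamp so the crop stays inside the image.
        left = min(max(cx - w / 2, 0), img_w - w)
        top = min(max(cy - h / 2, 0), img_h - h)
        boxes.append((left, top, left + w, top + h))
    return boxes

boxes = crop_boxes(4000, 3000, 2600, 1500, max_ratio=4.0, min_ratio=1.0, frames=5)
print(boxes[0])   # tightest crop, centered on the third area image
print(boxes[-1])  # widest crop: the full image at 1X
```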
  • the first file can automatically zoom the seventh image in any electronic device.
  • other electronic devices can display the automatically zoomed seventh image, which improves user experience.
  • the electronic device includes hardware structures and/or software modules corresponding to each function.
  • the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or by electronic equipment software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be regarded as exceeding the scope of the present application.
  • the image fusion device can be divided into functional modules or functional units according to the above method example; for example, each functional module or functional unit can be divided corresponding to each function, or two or more functions can be integrated into one processing module. The above integrated module can be implemented in the form of hardware, or in the form of a software functional module or functional unit.
  • the division of modules or units in the embodiment of the present application is schematic, and is only a logical function division, and there may be another division manner in actual implementation.
  • the embodiment of the present application also provides an electronic device (such as the mobile phone 200 shown in FIG. 2A), and the electronic device may include: a memory and one or more processors.
  • the memory is coupled to the processor.
  • the electronic device may also include a camera. Alternatively, the electronic device can be connected with an external camera.
  • the memory is used to store computer program code comprising computer instructions.
  • the processor executes the computer instructions, the electronic device can execute various functions or steps performed by the mobile phone in the foregoing method embodiments.
  • the embodiment of the present application also provides a chip system, as shown in FIG. 14 , the chip system includes at least one processor 1401 and at least one interface circuit 1402 .
  • the processor 1401 and the interface circuit 1402 may be interconnected through wires.
  • interface circuit 1402 may be used to receive signals from other devices, such as memory of an electronic device.
  • the interface circuit 1402 may be used to send signals to other devices (such as the processor 1401).
  • the interface circuit 1402 can read instructions stored in the memory and send the instructions to the processor 1401; when the instructions are executed by the processor 1401, the electronic device (such as the mobile phone 200 shown in FIG. 2A) can be caused to perform the steps in the foregoing method embodiments.
  • the chip system may also include other discrete devices, which is not specifically limited in this embodiment of the present application.
  • the embodiment of the present application also provides a computer-readable storage medium.
  • the computer-readable storage medium includes computer instructions.
  • when the computer instructions run on an electronic device, the electronic device executes various functions or steps executed by the mobile phone in the foregoing method embodiments.
  • the embodiment of the present application also provides a computer program product, which, when the computer program product is run on a computer, causes the computer to execute each function or step performed by the mobile phone in the method embodiment above.
  • the disclosed devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the unit described as a separate component may or may not be physically separated, and the component displayed as a unit may be one physical unit or multiple physical units; that is, it may be located in one place, or may be distributed to multiple different places. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium.
  • the technical solutions of the embodiments of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.


Abstract

An image fusion method and an electronic device, relating to the technical field of electronic devices, capable of simplifying the process of capturing multiple images with different characteristics. The specific solution includes: in response to a first operation, the electronic device may collect a first image through a first camera, collect a second image through a second camera, and collect a third image through a third camera. The electronic device may process the first image to obtain a fourth image, wherein the resolution of a first area image in the fourth image is the same as the resolution of the second image. Afterwards, the electronic device may perform image fusion on the second image and the fourth image to obtain a fifth image. The electronic device may process the third image to obtain a sixth image, wherein the resolution of a second area image in the sixth image is the same as the resolution of the first image. Afterwards, the electronic device may perform image fusion on the fifth image and the sixth image to obtain a seventh image.

Description

Image fusion method and electronic device
This application claims priority to Chinese Patent Application No. 202110506754.4, entitled "Image Fusion Method and Electronic Device", filed with the China National Intellectual Property Administration on May 10, 2021, which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of the present application relate to the technical field of electronic devices, and in particular, to an image fusion method and an electronic device.
Background
With the development of electronic technology, electronic devices (such as mobile phones, tablet computers, or smart watches) have more and more functions. For example, cameras can be installed in most electronic devices, so that the electronic devices have the function of capturing images.
Taking a mobile phone as an example, multiple cameras may be installed in the mobile phone, such as a main camera, a telephoto camera, and a wide-angle camera. Based on the characteristics of each camera, the mobile phone can use different cameras to capture images in the same shooting scene, so as to obtain images with different characteristics. For example, based on the long focal length of the telephoto camera, the mobile phone can use the telephoto camera to capture a telephoto image that is locally clear. For another example, based on the large light intake and high resolution of the main camera, the mobile phone can use the main camera to capture an image that is relatively clear overall. For another example, based on the short focal length and large field of view of the wide-angle camera, the mobile phone can use the wide-angle camera to capture an image with a large field of view.
However, in conventional technology, an electronic device can capture multiple images with different characteristics only in response to multiple operations of the user. The shooting process of the electronic device is relatively cumbersome, which affects the user's shooting experience.
Summary
The present application provides an image fusion method and an electronic device, which can simplify the process of capturing multiple images with different characteristics and improve the user's shooting experience.
In a first aspect, the present application provides an image fusion method, which can be applied to an electronic device including a first camera, a second camera and a third camera, wherein the field of view of the first camera is larger than that of the second camera, and the field of view of the third camera is larger than that of the first camera.
In this method, the electronic device detects a first operation. In response to the first operation, the electronic device may collect a first image through the first camera, collect a second image through the second camera, and collect a third image through the third camera. The viewfinder range in which the first camera collects the first image is a first viewfinder range, the viewfinder range in which the second camera collects the second image is a second viewfinder range, and the viewfinder range in which the third camera collects the third image is a third viewfinder range; the third viewfinder range is larger than the first viewfinder range, and the first viewfinder range is larger than the second viewfinder range. The electronic device may process the first image to obtain a fourth image. The fourth image includes a first area image; the resolution of the first area image in the fourth image is the same as the resolution of the second image; the viewfinder range of the first area image relative to the first camera is a fourth viewfinder range; the first viewfinder range includes the fourth viewfinder range; and the fourth viewfinder range coincides with the second viewfinder range. Afterwards, the electronic device may perform image fusion on the second image and the fourth image to obtain a fifth image.
It can be understood that, since the resolution of the first area image in the fourth image is the same as the resolution of the second image, and the viewfinder range of the first area image is the same as the second viewfinder range, the electronic device can perform image fusion on the second image and the fourth image. Moreover, the second image has the characteristic of a clear local image (that is, an image of a distant object), and the fourth image has the characteristic of a relatively clear overall image. By fusing the second image and the fourth image, the electronic device can combine the characteristics of the two and obtain a fifth image with high overall image clarity and high local image clarity. That is to say, the fifth image combines the characteristics of the second image and the first image. In this way, the quality of images captured by the electronic device can be improved.
The electronic device may process the third image to obtain a sixth image. The sixth image includes a second area image; the resolution of the second area image in the sixth image is the same as the resolution of the first image; the viewfinder range of the second area image relative to the third camera is a fifth viewfinder range; the third viewfinder range includes the fifth viewfinder range; and the fifth viewfinder range coincides with the first viewfinder range. Afterwards, the electronic device may perform image fusion on the fifth image and the sixth image to obtain a seventh image.
It can be understood that, since the resolution of the second area image in the sixth image is the same as the resolution of the fifth image, and the viewfinder range of the second area image is the same as the viewfinder range of the fifth image, the electronic device can perform image fusion on the fifth image and the sixth image. Moreover, the fifth image has the characteristics of a relatively clear overall image and a clear local image (that is, an image of a distant object), and the sixth image has a large viewfinder range. By fusing the fifth image and the sixth image, the electronic device can combine the characteristics of the two and obtain a seventh image with a large viewfinder range, high overall image clarity, and high local image clarity. That is to say, the seventh image combines the characteristics of the first image, the second image, and the third image. In this way, the quality of images captured by the electronic device can be improved.
Moreover, it can be seen from the above that the user only needs one operation to obtain, through the electronic device, an image that combines the characteristics of multiple images, which simplifies the shooting process and improves the user experience.
With reference to the first aspect, in a possible design, the electronic device processing the first image to obtain the fourth image includes: the electronic device may perform super-resolution reconstruction on the first image to obtain the fourth image.
It can be understood that performing super-resolution reconstruction on the first image can enlarge the resolution of the first image, that is, the resolution of the fourth image is greater than the resolution of the first image. In this way, it can be ensured that the resolution of the first area image in the obtained fourth image is the same as the resolution of the second image, which in turn ensures that the electronic device can fuse the fourth image and the second image to obtain an image with the characteristics of different images.
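The resolution constraint described above can be illustrated numerically: after super-resolution reconstruction, the region of the fourth image whose viewfinder range coincides with the second viewfinder range must reach the second image's full pixel resolution. The sensor widths and field-of-view fraction below are assumptions for illustration, not values from the application:

```python
# Hedged numeric sketch of the resolution constraint: the overlapping region
# of the upscaled first image must match the second (telephoto) image's width.

def upscale_factor(main_w, tele_w, fov_fraction):
    """Scale factor so the overlapping region (fov_fraction of the main
    image's width) reaches the telephoto image's full width."""
    region_w = main_w * fov_fraction
    return tele_w / region_w

# Assume both sensors output 4000-pixel-wide images and the telephoto FOV
# covers 1/4 of the main camera's FOV (roughly a 4X telephoto).
factor = upscale_factor(main_w=4000, tele_w=4000, fov_fraction=0.25)
print(factor)  # 4.0 — the first image must be enlarged 4x before fusion
```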
With reference to the first aspect, in another possible design, the electronic device processing the third image to obtain the sixth image includes: the electronic device may perform super-resolution reconstruction on the third image to obtain the sixth image.
It can be understood that performing super-resolution reconstruction on the third image can enlarge the resolution of the third image, that is, the resolution of the sixth image is greater than the resolution of the third image. In this way, it can be ensured that the resolution of the second area image in the obtained sixth image is the same as the resolution of the first image, which in turn ensures that the electronic device can fuse the sixth image and the fifth image to obtain an image with the characteristics of different images.
With reference to the first aspect, in another possible design, after the electronic device performs image fusion on the fifth image and the sixth image to obtain the seventh image, the method further includes: the electronic device may receive a second operation, and the second operation is used to trigger the electronic device to display the seventh image. In response to the second operation, the electronic device may display a first interface, and the first interface is used to play a dynamic picture in which the seventh image is automatically zoomed around the third area image. The seventh image includes the third area image; the viewfinder range of the third area image relative to the third camera is a sixth viewfinder range; the third viewfinder range includes the sixth viewfinder range; and the sixth viewfinder range coincides with the second viewfinder range.
Exemplarily, the seventh image may be centered on the third area image and automatically zoomed from a large zoom magnification to a small one. The maximum zoom magnification of the seventh image is the zoom magnification of the second image, and the minimum zoom magnification of the seventh image is the zoom magnification of the third image.
It can be understood that displaying the seventh image in a dynamic mode allows the user to watch a dynamic image, which increases the fun of viewing images.
With reference to the first aspect, in another possible design, the first interface further includes a first speed option and a first magnification option, wherein the first speed option is used to adjust the playback speed of the dynamic picture, and the first magnification option is used to adjust the maximum zoom magnification of the seventh image in the first interface. The method further includes: the electronic device may adjust the playback speed of the dynamic picture in response to the user's adjustment operation on the first speed option, and adjust the maximum zoom magnification of the seventh image in response to the user's adjustment operation on the first magnification option.
Exemplarily, the first speed option may instruct the electronic device to automatically zoom the seventh image at 100 pixel/s or 20% FOV/s. When the zoom magnification of the second image is 4X, the maximum zoom magnification of the seventh image may be 4X.
It can be understood that the first speed option and the first magnification option allow the user to adjust the zooming speed and the maximum zoom magnification of the image, which improves the user experience.
With reference to the first aspect, in another possible design, the first interface further includes a manual play button, and the manual play button is used to trigger the electronic device to display a second interface. The second interface includes the seventh image and a preset magnification option, and the preset magnification option is used to set the zoom magnification of the seventh image.
That is to say, the electronic device can switch from the mode of dynamically playing the seventh image to the mode of manually playing the seventh image. In this way, the user can flexibly select the mode for displaying the seventh image, which improves the user experience.
With reference to the first aspect, in another possible design, after the electronic device performs image fusion on the fifth image and the sixth image to obtain the seventh image, the method further includes: the electronic device may receive a second operation, and the second operation is used to trigger the electronic device to display the seventh image. In response to the second operation, the electronic device may display a second interface, which includes the seventh image and a preset magnification option, and the preset magnification option is used to set the zoom magnification of the seventh image. The method further includes: the electronic device may receive the zoom magnification set by the user in the preset magnification option, and display the seventh image according to the zoom magnification set by the user.
It can be understood that, after the electronic device receives an operation acting on the preset magnification option, the electronic device can display the image zoomed at the preset magnification. That is to say, the seventh image can display areas with the characteristics of different images (for example, the first image, the second image, or the third image). In this way, the user can see the characteristics of different images in one image, which improves the user experience.
With reference to the first aspect, in another possible design, the second interface further includes an automatic play button, and the automatic play button is used to trigger the electronic device to display the first interface. The first interface is used to play a dynamic picture in which the seventh image is automatically zoomed around the third area image; the seventh image includes the third area image; the viewfinder range of the third area image relative to the third camera is the sixth viewfinder range; the third viewfinder range includes the sixth viewfinder range; and the sixth viewfinder range coincides with the second viewfinder range.
That is to say, the electronic device can switch from the mode of manually playing the seventh image to the mode of dynamically playing the seventh image. In this way, the user can flexibly select the mode for displaying the seventh image, which improves the user experience.
With reference to the first aspect, in another possible design, the first interface includes a format conversion button, and the format conversion button is used to trigger the electronic device to convert the file format of the seventh image. The method further includes: the electronic device receives a third operation acting on the format conversion button and generates a first file, wherein the first file is a video file in which the seventh image is automatically zoomed around the third area image; or, the first file is a Graphics Interchange Format (GIF) image in which the seventh image is automatically zoomed around the third area image.
It can be understood that the first file can automatically zoom the seventh image in any electronic device. In this way, after receiving the first file, other electronic devices can display the automatically zoomed seventh image, which improves the user experience.
With reference to the first aspect, in another possible design, the first camera is a main camera, the second camera is a telephoto camera, and the third camera is a wide-angle camera.
When the first camera is the main camera, the first image is a main image. When the second camera is the telephoto camera, the second image is a telephoto image. When the third camera is the wide-angle camera, the third image is a wide-angle image. That is to say, the seventh image obtained by the electronic device has multiple characteristics: a large viewfinder range, high overall image clarity, and high local image clarity. In other words, the seventh image combines the characteristics of the main image, the telephoto image, and the wide-angle image. In this way, the quality of images captured by the electronic device can be improved. Moreover, the user only needs one operation to obtain, through the electronic device, an image that combines the characteristics of multiple images, which simplifies the shooting process and improves the user experience.
In a second aspect, the present application provides an image fusion method, which can be applied to an electronic device including a first camera, a second camera, a third camera and a fourth camera, wherein the field of view of the first camera is larger than that of the second camera, the field of view of the third camera is larger than that of the first camera, and the field of view of the fourth camera is the same as that of the first camera.
In this method, the electronic device detects a first operation. In response to the first operation, the electronic device may collect a first image through the first camera, collect a second image through the second camera, collect a third image through the third camera, and collect an eighth image through the fourth camera. The viewfinder range in which the first camera collects the first image is a first viewfinder range, the viewfinder range in which the second camera collects the second image is a second viewfinder range, and the viewfinder range in which the third camera collects the third image is a third viewfinder range; the third viewfinder range is larger than the first viewfinder range, and the first viewfinder range is larger than the second viewfinder range; the viewfinder range in which the fourth camera collects the eighth image is the same as the first viewfinder range. The electronic device may perform image fusion on the first image and the eighth image to obtain a ninth image.
It can be understood that image fusion can improve image quality. The image quality of the ninth image obtained by fusing the first image and the eighth image is higher than that of the first image (or the eighth image). In this way, the quality of images captured by the electronic device can be improved.
The electronic device may process the ninth image to obtain a fourth image. The fourth image includes a first area image; the resolution of the first area image in the fourth image is the same as the resolution of the second image; the viewfinder range of the first area image relative to the first camera is a fourth viewfinder range; the first viewfinder range includes the fourth viewfinder range; and the fourth viewfinder range coincides with the second viewfinder range. Afterwards, the electronic device may perform image fusion on the second image and the fourth image to obtain a fifth image.
It can be understood that, since the ninth image is obtained through image fusion, its image quality is relatively high; therefore, the image quality of the fourth image is relatively high. Moreover, since the resolution of the first area image in the fourth image is the same as the resolution of the second image, and the viewfinder range of the first area image is the same as the second viewfinder range, the electronic device can perform image fusion on the second image and the fourth image. The second image has the characteristic of a clear local image (that is, an image of a distant object), and the fourth image has the characteristic of a relatively clear overall image. By fusing the second image and the fourth image, the electronic device can combine the characteristics of the two and obtain a fifth image with high overall image clarity and high local image clarity. That is to say, the fifth image combines the characteristics of the second image and the first image. In this way, the quality of images captured by the electronic device can be improved.
The electronic device may process the third image to obtain a sixth image. The sixth image includes a second area image; the resolution of the second area image in the sixth image is the same as the resolution of the first image; the viewfinder range of the second area image relative to the third camera is a fifth viewfinder range; the third viewfinder range includes the fifth viewfinder range; and the fifth viewfinder range coincides with the first viewfinder range. The electronic device may perform image fusion on the fifth image and the sixth image to obtain a seventh image.
It can be understood that, since the resolution of the second area image in the sixth image is the same as the resolution of the fifth image, and the viewfinder range of the second area image is the same as the viewfinder range of the fifth image, the electronic device can perform image fusion on the fifth image and the sixth image. Moreover, the fifth image has the characteristics of a relatively clear overall image and a clear local image (that is, an image of a distant object), and the sixth image has a large viewfinder range. By fusing the fifth image and the sixth image, the electronic device can combine the characteristics of the two and obtain a seventh image with a large viewfinder range, high overall image clarity, and high local image clarity. That is to say, the seventh image combines the characteristics of the first image, the second image, and the third image. In this way, the quality of images captured by the electronic device can be improved.
Moreover, it can be seen from the above that the user only needs one operation to obtain, through the electronic device, an image that combines the characteristics of multiple images, which simplifies the shooting process and improves the user experience.
With reference to the second aspect, in a possible design, the electronic device processing the ninth image to obtain the fourth image includes: the electronic device may perform super-resolution reconstruction on the ninth image to obtain the fourth image.
It can be understood that performing super-resolution reconstruction on the ninth image can enlarge the resolution of the ninth image, that is, the resolution of the fourth image is greater than the resolution of the ninth image. In this way, it can be ensured that the resolution of the first area image in the obtained fourth image is the same as the resolution of the second image, which in turn ensures that the electronic device can fuse the fourth image and the second image to obtain an image with the characteristics of different images.
With reference to the second aspect, in another possible design, the electronic device processing the third image to obtain the sixth image includes: the electronic device may perform super-resolution reconstruction on the third image to obtain the sixth image.
It can be understood that performing super-resolution reconstruction on the third image can enlarge the resolution of the third image, that is, the resolution of the sixth image is greater than the resolution of the third image. In this way, it can be ensured that the resolution of the second area image in the obtained sixth image is the same as the resolution of the first image, which in turn ensures that the electronic device can fuse the sixth image and the fifth image to obtain an image with the characteristics of different images.
With reference to the second aspect, in another possible design, after the electronic device performs image fusion on the fifth image and the sixth image to obtain the seventh image, the method further includes: the electronic device may receive a second operation, and the second operation is used to trigger the electronic device to display the seventh image. In response to the second operation, the electronic device may display a first interface, and the first interface is used to play a dynamic picture in which the seventh image is automatically zoomed around the third area image. The seventh image includes the third area image; the viewfinder range of the third area image relative to the third camera is a sixth viewfinder range; the third viewfinder range includes the sixth viewfinder range; and the sixth viewfinder range coincides with the second viewfinder range.
Exemplarily, the seventh image may be centered on the third area image and automatically zoomed from a large zoom magnification to a small one. The maximum zoom magnification of the seventh image is the zoom magnification of the second image, and the minimum zoom magnification of the seventh image is the zoom magnification of the third image.
It can be understood that displaying the seventh image in a dynamic mode allows the user to watch a dynamic image, which increases the fun of viewing images.
With reference to the second aspect, in another possible design, the first interface further includes a first speed option and a first magnification option, wherein the first speed option is used to adjust the playback speed of the dynamic picture, and the first magnification option is used to adjust the maximum zoom magnification of the seventh image in the first interface. The method further includes: the electronic device may adjust the playback speed of the dynamic picture in response to the user's adjustment operation on the first speed option, and adjust the maximum zoom magnification of the seventh image in response to the user's adjustment operation on the first magnification option.
Exemplarily, the first speed option may instruct the electronic device to automatically zoom the seventh image at 100 pixel/s or 20% FOV/s. When the zoom magnification of the second image is 4X, the maximum zoom magnification of the seventh image may be 4X.
It can be understood that the first speed option and the first magnification option allow the user to adjust the zooming speed and the maximum zoom magnification of the image, which improves the user experience.
With reference to the second aspect, in another possible design, the first interface further includes a manual play button, and the manual play button is used to trigger the electronic device to display a second interface. The second interface includes the seventh image and a preset magnification option, and the preset magnification option is used to set the zoom magnification of the seventh image.
That is to say, the electronic device can switch from the mode of dynamically playing the seventh image to the mode of manually playing the seventh image. In this way, the user can flexibly select the mode for displaying the seventh image, which improves the user experience.
With reference to the second aspect, in another possible design, after the electronic device performs image fusion on the fifth image and the sixth image to obtain the seventh image, the method further includes: the electronic device may receive a second operation, and the second operation is used to trigger the electronic device to display the seventh image. In response to the second operation, the electronic device may display a second interface, which includes the seventh image and a preset magnification option, and the preset magnification option is used to set the zoom magnification of the seventh image. The method further includes: the electronic device may receive the zoom magnification set by the user in the preset magnification option, and display the seventh image according to the zoom magnification set by the user.
It can be understood that, after the electronic device receives an operation acting on the preset magnification option, the electronic device can display the image zoomed at the preset magnification. That is to say, the seventh image can display areas with the characteristics of different images (for example, the first image, the second image, or the third image). In this way, the user can see the characteristics of different images in one image, which improves the user experience.
With reference to the second aspect, in another possible design, the second interface further includes an automatic play button, and the automatic play button is used to trigger the electronic device to display the first interface. The first interface is used to play a dynamic picture in which the seventh image is automatically zoomed around the third area image; the seventh image includes the third area image; the viewfinder range of the third area image relative to the third camera is the sixth viewfinder range; the third viewfinder range includes the sixth viewfinder range; and the sixth viewfinder range coincides with the second viewfinder range.
That is to say, the electronic device can switch from the mode of manually playing the seventh image to the mode of dynamically playing the seventh image. In this way, the user can flexibly select the mode for displaying the seventh image, which improves the user experience.
With reference to the second aspect, in another possible design, the first interface includes a format conversion button, and the format conversion button is used to trigger the electronic device to convert the file format of the seventh image. The method further includes: the electronic device receives a third operation acting on the format conversion button and generates a first file, wherein the first file is a video file in which the seventh image is automatically zoomed around the third area image; or, the first file is a Graphics Interchange Format (GIF) image in which the seventh image is automatically zoomed around the third area image.
It can be understood that the first file can automatically zoom the seventh image in any electronic device. In this way, after receiving the first file, other electronic devices can display the automatically zoomed seventh image, which improves the user experience.
With reference to the second aspect, in another possible design, the first camera is a main camera, the second camera is a telephoto camera, and the third camera is a wide-angle camera.
When the first camera is the main camera, the first image is a main image. When the second camera is the telephoto camera, the second image is a telephoto image. When the third camera is the wide-angle camera, the third image is a wide-angle image. That is to say, the seventh image obtained by the electronic device has multiple characteristics: a large viewfinder range, high overall image clarity, and high local image clarity. In other words, the seventh image combines the characteristics of the main image, the telephoto image, and the wide-angle image. In this way, the quality of images captured by the electronic device can be improved. Moreover, the user only needs one operation to obtain, through the electronic device, an image that combines the characteristics of multiple images, which simplifies the shooting process and improves the user experience.
In a third aspect, the present application provides an electronic device, including a memory, a display screen and a processor, the memory and the display screen being coupled to the processor; the memory is used to store computer program code, and the computer program code includes computer instructions. When the computer instructions are executed by the processor, the processor is configured to detect a first operation. The processor is further configured to: in response to the first operation, collect a first image through a first camera, collect a second image through a second camera, and collect a third image through a third camera, wherein the viewfinder range in which the first camera collects the first image is a first viewfinder range, the viewfinder range in which the second camera collects the second image is a second viewfinder range, the viewfinder range in which the third camera collects the third image is a third viewfinder range, the third viewfinder range is larger than the first viewfinder range, and the first viewfinder range is larger than the second viewfinder range. The processor is further configured to process the first image to obtain a fourth image, wherein the fourth image includes a first area image, the resolution of the first area image in the fourth image is the same as the resolution of the second image, the viewfinder range of the first area image relative to the first camera is a fourth viewfinder range, the first viewfinder range includes the fourth viewfinder range, and the fourth viewfinder range coincides with the second viewfinder range. The processor is further configured to perform image fusion on the second image and the fourth image to obtain a fifth image. The processor is further configured to process the third image to obtain a sixth image, wherein the sixth image includes a second area image, the resolution of the second area image in the sixth image is the same as the resolution of the first image, the viewfinder range of the second area image relative to the third camera is a fifth viewfinder range, the third viewfinder range includes the fifth viewfinder range, and the fifth viewfinder range coincides with the first viewfinder range. The processor is further configured to perform image fusion on the fifth image and the sixth image to obtain a seventh image.
With reference to the third aspect, in a possible design, when the computer instructions are executed by the processor, the processor is specifically configured to perform super-resolution reconstruction on the first image to obtain the fourth image.
With reference to the third aspect, in another possible design, when the computer instructions are executed by the processor, the processor is further specifically configured to perform super-resolution reconstruction on the third image to obtain the sixth image.
With reference to the third aspect, in another possible design, when the computer instructions are executed by the processor, the processor is further configured to receive a second operation, and the second operation is used to trigger the display screen to display the seventh image. In response to the second operation, the display screen displays a first interface, and the first interface is used to play a dynamic picture in which the seventh image is automatically zoomed around the third area image; the seventh image includes the third area image, the viewfinder range of the third area image relative to the third camera is a sixth viewfinder range, the third viewfinder range includes the sixth viewfinder range, and the sixth viewfinder range coincides with the second viewfinder range.
With reference to the third aspect, in another possible design, the first interface further includes a first speed option and a first magnification option, wherein the first speed option is used to adjust the playback speed of the dynamic picture, and the first magnification option is used to adjust the maximum zoom magnification of the seventh image in the first interface. When the computer instructions are executed by the processor, the processor is further configured to adjust the playback speed of the dynamic picture in response to the user's adjustment operation on the first speed option, and to adjust the maximum zoom magnification of the seventh image in response to the user's adjustment operation on the first magnification option.
With reference to the third aspect, in another possible design, the first interface further includes a manual play button, and the manual play button is used to trigger the display screen to display a second interface, wherein the second interface includes the seventh image and a preset magnification option, and the preset magnification option is used to set the zoom magnification of the seventh image.
With reference to the third aspect, in another possible design, when the computer instructions are executed by the processor, the processor is further configured to receive a second operation, and the second operation is used to trigger the display screen to display the seventh image. The display screen is further configured to display a second interface in response to the second operation, wherein the second interface includes the seventh image and a preset magnification option, and the preset magnification option is used to set the zoom magnification of the seventh image. The display screen is further configured to receive the zoom magnification set by the user in the preset magnification option, and display the seventh image according to the zoom magnification set by the user.
With reference to the third aspect, in another possible design, the second interface further includes an automatic play button, and the automatic play button is used to trigger the display screen to display the first interface. The first interface is used to play a dynamic picture in which the seventh image is automatically zoomed around the third area image; the seventh image includes the third area image, the viewfinder range of the third area image relative to the third camera is the sixth viewfinder range, the third viewfinder range includes the sixth viewfinder range, and the sixth viewfinder range coincides with the second viewfinder range.
With reference to the third aspect, in another possible design, the first interface includes a format conversion button, and the format conversion button is used to trigger the processor to convert the file format of the seventh image. When the computer instructions are executed by the processor, the processor is further configured to receive a third operation acting on the format conversion button and generate a first file, wherein the first file is a video file in which the seventh image is automatically zoomed around the third area image; or, the first file is a Graphics Interchange Format (GIF) image in which the seventh image is automatically zoomed around the third area image.
With reference to the third aspect, in another possible design, the first camera is a main camera, the second camera is a telephoto camera, and the third camera is a wide-angle camera.
第四方面,本申请提供一种电子设备,该电子设备包括:存储器、显示屏和处理器,上述存储器、显示屏与上述处理器耦合;存储器用于存储计算机程序代码,计算机程序代码包括计算机指令;当计算机指令被所述处理器执行时,上述处理器,用于检测到第一操作。上述处理器,还用于响应于第一操作,通过第一摄像头采集第一图像,通过第二摄像头采集第二图像,通过第三摄像头采集第三图像,通过第四摄像头采集第八图像;其中,第一摄像头采集第一图像的取景范围是第一取景范围,第二摄像头采集第二图像的取景范围是第二取景范围,第三摄像头采集第三图像的取景范围是第三取景范围,第三取景范围大于第一取景范围,第一取景范围大于第二取景范围,第四摄像头采集第八图像的取景范围与第一取景范围相同。上述处理器,还用于对第一图像和第八图像进行图像融合,得到第九图像。上述处理器,还用于对第九图像进行处理,得到第四图像;其中,第四图像包括第一区域图像,第四图像中第一区域图像的分辨率和第二图像的分辨率相同,第一区域图像相对于第一摄像头的取景范围是第四取景范围,第一取景范围包括第四取景范围,第四取景范围与第二取景范围重合。上述处理器,还用于对第二图像和第四图像进行图像融合,得到第五图像。上述处理器,还用于对第三图像进行处理,得到第六图像;其中,第六图像包括第二区域图像,第六图像中第二区域图像的分辨率和第一图像的分辨率相同,第二区域图像相对于第三摄像头的取景范围是第五取景范围,第三取景范围包括第五取景范围,第五取景范围与第一取景范围重合。上述处理器,还用于对第五图像和第六图像进行图像融合,得到第七图像。
结合第四方面,在一种可能的设计方式中,当计算机指令被所述处理器执行时,上述处理器,具体用于对第九图像进行超分辨率重建,得到第四图像。
结合第四方面,在另一种可能的设计方式中,当计算机指令被所述处理器执行时,上述处理器,还具体用于对第三图像进行超分辨率重建,得到第六图像。
结合第四方面,在另一种可能的设计方式中,当计算机指令被所述处理器执行时,上述处理器,还用于接收第二操作,第二操作用于触发显示屏显示第七图像。响应于第二操作,显示屏显示第一界面,第一界面用于播放第七图像以第三区域图像为中心,自动缩放的动态画面;其中,第七图像包括第三区域图像,第三区域图像相对于第三摄像头的取景范围是第六取景范围,第三取景范围包括第六取景范围,第六取景范围与第二取景范围重合。
结合第四方面,在另一种可能的设计方式中,第一界面还包括第一速度选项和第一倍率选项;其中,第一速度选项用于调整动态画面中的播放速度,第一倍率选项用于调整第一界面中第七图像的最大缩放倍率。当计算机指令被所述处理器执行时,上述处理器,还用于响应于用户对第一速度选项的调整操作,调整动态画面的播放速度。上述处理器,还用于响应于用户对第一倍率选项的调整操作,调整第七图像的最大缩放倍率。
结合第四方面,在另一种可能的设计方式中,第一界面还包括手动播放按钮,手动播放按钮用于触发显示屏显示第二界面。其中,第二界面包括第七图像和预设倍率选项,预设倍率选项用于设置第七图像的缩放倍率。
结合第四方面,在另一种可能的设计方式中,当计算机指令被所述处理器执行时,上述处理器,还用于接收第二操作,第二操作用于触发显示屏显示第七图像。显示屏,还用于响应于第二操作,显示第二界面;其中,第二界面包括第七图像和预设倍率选项,预设倍率选项用于设置第七图像的缩放倍率。上述显示屏,还用于接收用户在预设倍率选项设置的缩放倍率,按照用户设置的缩放倍率显示第七图像。
结合第四方面,在另一种可能的设计方式中,第二界面还包括自动播放按钮,自动播放按钮用于触发显示屏显示第一界面。其中,第一界面用于播放第七图像以第三区域图像为中心,自动缩放的动态画面;第七图像包括第三区域图像,第三区域图像相对于第三摄像头的取景范围是第六取景范围,第三取景范围包括第六取景范围,第六取景范围与第二取景范围重合。
结合第四方面,在另一种可能的设计方式中,第一界面包括格式转换按钮,格式转换按钮用于触发处理器转换第七图像的文件格式。当计算机指令被所述处理器执行时,上述处理器,还用于接收作用于格式转换按钮的第三操作,生成第一文件;其中,第一文件为第七图像以第三区域图像为中心、自动缩放的视频文件;或者,第一文件为以第三区域图像为中心、自动缩放第七图像的图形交换格式GIF图像。
结合第四方面,在另一种可能的设计方式中,第一摄像头为主摄像头,第二摄像头为长焦摄像头,第三摄像头为广角摄像头。
第五方面,本申请提供一种电子设备,该电子设备包括:存储器和处理器,上述存储器与上述处理器耦合;存储器用于存储计算机程序代码,计算机程序代码包括计算机指令;当计算机指令被所述处理器执行时,使得电子设备执行如第一方面及其任一种可能的设计方式所述的方法。
第六方面,本申请提供一种电子设备,该电子设备包括:存储器和处理器,上述存储器与上述处理器耦合;存储器用于存储计算机程序代码,计算机程序代码包括计算机指令;当计算机指令被所述处理器执行时,使得电子设备执行如第二方面及其任一种可能的设计方式所述的方法。
第七方面,本申请提供一种芯片系统,该芯片系统应用于电子设备。该芯片系统包括一个或多个接口电路和一个或多个处理器。该接口电路和处理器通过线路互联。该接口电路用于从电子设备的存储器接收信号,并向处理器发送该信号,该信号包括存储器中存储的计算机指令。当处理器执行所述计算机指令时,电子设备执行如第一方面或者第二方面及其任一种可能的设计方式所述的方法。
第八方面,本申请提供一种计算机可读存储介质,该计算机可读存储介质包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如第一方面或者第二方面及其任一种可能的设计方式所述的方法。
第九方面,本申请提供一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如第一方面或者第二方面及其任一种可能的设计方式所述的方法。
可以理解地,上述提供的第三方面及其任一种可能的设计方式所述的电子设备,第四方面及其任一种可能的设计方式所述的电子设备,第五方面所述的电子设备,第六方面所述的电子设备,第七方面所述的芯片系统,第八方面所述的计算机可读存储介质,第九方面所述的计算机程序产品所能达到的有益效果,可参考如第一方面或者第二方面及其任一种可能的设计方式中的有益效果,此处不再赘述。
附图说明
图1为本申请实施例提供的一种图像预览界面的实例示意图;
图2A为本申请实施例提供的一种电子设备的硬件结构示意图;
图2B为本申请实施例提供的一种取景范围的实例示意图;
图3为本申请实施例提供的一种图像的融合方法流程图;
图4为本申请实施例提供的一种图像的实例示意图;
图5为本申请实施例提供的另一种图像的实例示意图;
图6为本申请实施例提供的另一种图像的实例示意图;
图7为本申请实施例提供的一种图像的融合方法流程图;
图8为本申请实施例提供的另一种图像的实例示意图;
图9为本申请实施例提供的另一种图像预览界面的实例示意图;
图10为本申请实施例提供的一种倍率设置界面的实例示意图;
图11为本申请实施例提供的一种图像显示界面的实例示意图;
图12为本申请实施例提供的另一种图像显示界面的实例示意图;
图13为本申请实施例提供的另一种图像显示界面的实例示意图;
图14为本申请实施例提供的一种芯片***的结构组成示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其它实施例,都属于本申请保护的范围。
本申请中字符“/”,一般表示前后关联对象是一种“或者”的关系。例如,A/B可以理解为A或者B。
术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
此外,本申请的描述中所提到的术语“包括”和“具有”以及它们的任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或模块的过程、方法、系统、产品或设备没有限定于已列出的步骤或模块,而是可选地还包括其他没有列出的步骤或模块,或可选地还包括对于这些过程、方法、产品或设备固有的其它步骤或模块。
另外,在本申请实施例中,“示例性的”、或者“例如”等词用于表示作例子、例证或说明。本申请中被描述为“示例性的”或“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”、或者“例如”等词旨在以具体方式呈现概念。
为了便于理解本申请的技术方案,在对本申请实施例的图像的融合方法进行详细介绍之前,先对本申请实施例中所提到的专业名词进行介绍。
1、超分辨率重建。
超分辨率重建指的是利用一幅或者一组低质量、低分辨率图像,生成一幅高质量、高分辨率图像。其中,超分辨率重建可以包括基于重建的方法或者基于学习的方法。
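本申请对超分辨率重建的具体算法不作限定。下面以最基础的双线性插值放大为例给出一个纯Python示意(仅为“提升分辨率”这一步的基线写法,并非基于学习的方法;图像用嵌套列表表示灰度值,函数名为示例假设):

```python
def bilinear_upscale(img, scale):
    """用双线性插值将灰度图像(嵌套列表)放大 scale 倍。
    仅为超分辨率重建中提升分辨率一步的基线示意。"""
    h, w = len(img), len(img[0])
    nh, nw = h * scale, w * scale
    out = [[0.0] * nw for _ in range(nh)]
    for y in range(nh):
        # 把目标像素坐标映射回原图坐标
        fy = y * (h - 1) / (nh - 1) if nh > 1 else 0
        y0, ty = int(fy), fy - int(fy)
        y1 = min(y0 + 1, h - 1)
        for x in range(nw):
            fx = x * (w - 1) / (nw - 1) if nw > 1 else 0
            x0, tx = int(fx), fx - int(fx)
            x1 = min(x0 + 1, w - 1)
            # 先在水平方向插值,再在垂直方向插值
            top = img[y0][x0] * (1 - tx) + img[y0][x1] * tx
            bot = img[y1][x0] * (1 - tx) + img[y1][x1] * tx
            out[y][x] = top * (1 - ty) + bot * ty
    return out

low = [[0, 10], [10, 20]]        # 2×2 低分辨率图像
high = bilinear_upscale(low, 2)  # 放大为 4×4
```

四个角点的像素值在放大前后保持不变,中间像素由插值得到。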
2、图像信号处理(Image Signal Processing,ISP)模块。
在摄像头采集到原始图像(即RAW格式的图像)之后,电子设备可以将原始图像传送到ISP模块。其中,RAW格式是未经处理、也未经压缩的格式。之后,ISP模块可以对原始图像进行分析,检查图像中相邻像素之间的密度差距。接着,ISP模块可以使用ISP模块中的预设调节算法对该原始图像进行适当处理,以提高摄像头采集的图像质量。
对本申请实施例中所提到的专业名词进行介绍之后,下面对常规技术进行介绍。
随着电子技术的发展,电子设备(如手机、平板电脑或智能手表等)的功能越来越多。以手机为例,手机中可以安装多个摄像头,如主摄像头、长焦摄像头和广角摄像头等。手机可以在同一个拍摄场景下,采用不同的摄像头拍摄图像,以得到不同特点的图像。
常规技术中,当用户通过电子设备拍摄同一场景下的不同图像(例如主图像、长焦图像和广角图像)时,用户需要切换电子设备的拍摄模式,才可以得到同一场景下的不同图像。其中,主图像是电子设备通过主摄像头采集的图像,长焦图像是电子设备通过长焦摄像头采集的图像,广角图像是电子设备通过广角摄像头采集的图像。
示例性的,如图1中的(a)所示,电子设备的拍摄模式为普通拍摄模式时,电子设备可以通过主摄像头采集主图像101。之后,响应于用户的切换操作,电子设备可以将拍摄模式切换为如图1中的(b)所示长焦拍摄模式。之后,电子设备可以通过长焦摄像头采集长焦图像102。然后,响应于用户的切换操作,电子设备可以将拍摄模式切换为如图1中的(c)所示广角拍摄模式。之后,电子设备可以通过广角摄像头采集广角图像103。
然而,上述方案中,电子设备需要响应于用户的多次操作,才可以拍摄得到多张具备不同特点的图像。电子设备的拍摄过程较为繁琐,影响用户的拍摄体验。
为此,本申请实施例提供一种图像的融合方法。该方法中,电子设备可以响应于用户的拍照操作,在同一时刻通过主摄像头、长焦摄像头和广角摄像头分别采集主图像、长焦图像和广角图像。之后,电子设备可以对主图像和广角图像进行超分辨率重建,并对长焦图像、超分辨率重建后的主图像和超分辨率重建后的广角图像进行图像融合,得到目标图像。
需要说明的是,本申请实施例中电子设备通过摄像头采集的图像可以为:ISP模块对摄像头采集到的原始图像进行处理后的图像。也就是说,电子设备通过主摄像头采集到的图像为:由ISP模块对主摄像头采集到的原始图像进行处理后的图像;电子设备通过长焦摄像头采集到的图像为:由ISP模块对长焦摄像头采集到的原始图像进行处理后的图像;电子设备通过广角摄像头采集到的图像为:由ISP模块对广角摄像头采集到的原始图像进行处理后的图像。可选的,本申请实施例中电子设备通过摄像头采集的图像可以为原始图像(即RAW格式的图像),本申请实施例对此不作限定。其中,RAW格式的图像是一种记录了摄像头传感器的原始信息,同时记录了由摄像头拍摄图像所产生的一些元数据(ISO的设置、快门速度、光圈值、白平衡等)的图像,且该图像未被ISP模块处理。其中,ISO是国际标准化组织(International Organization for Standardization)的缩写。
需要说明的是,上述电子设备在同一时刻通过主摄像头、长焦摄像头和广角摄像头分别采集主图像、长焦图像和广角图像是指:主摄像头采集主图像的时刻(如第一时刻)、长焦摄像头采集长焦图像的时刻(如第二时刻)、广角摄像头采集广角图像时的时刻(如第三时刻)相同;或者,第一时刻与第二时刻之间的时间差、第一时刻与第三时刻之间的时间差、第二时刻与第三时刻之间的时间差均较小(例如时间差均小于1毫秒、0.5毫秒或者2毫秒等)。
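上述“同一时刻”的判定(各摄像头采集时刻两两之差不超过阈值)可以用如下纯Python示意,其中阈值1毫秒与函数名均为示例假设:

```python
from itertools import combinations

def captured_simultaneously(timestamps_ms, max_diff_ms=1.0):
    """判断多路摄像头的采集时刻是否可视为"同一时刻":
    任意两个采集时刻之差均不超过 max_diff_ms(毫秒)。"""
    return all(abs(a - b) <= max_diff_ms
               for a, b in combinations(timestamps_ms, 2))

# 主摄像头、长焦摄像头、广角摄像头的采集时刻(毫秒)
assert captured_simultaneously([100.0, 100.4, 100.9])
assert not captured_simultaneously([100.0, 100.4, 101.2])
```

两两比较而非只比较首尾,可保证任意两路之间的时间差都满足约束。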
可以理解的是,上述目标图像是基于主图像、长焦图像和广角图像融合得到的。因此,融合得到的目标图像具备主图像、长焦图像和广角图像的特点。例如,目标图像具备主图像中图像整体较为清晰的特点,也具备长焦图像中局部清晰的特点,还具备广角照片中视角较大的特点。并且,电子设备只需要响应用户的一次拍照操作,便可以拍摄得到目标照片。如此,可以简化拍摄多张具备不同特点图像的过程,提高用户的拍摄体验。
示例性的,本申请实施例中的电子设备可以是平板电脑、手机、桌面型、膝上型、手持计算机、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本,以及蜂窝电话、个人数字助理(personal digital assistant,PDA)、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、车载设备等设备,本申请实施例对该电子设备的具体形态不作特殊限制。
本申请提供的图像的融合方法的执行主体可以为图像的融合装置,该执行装置可以为图2A所示的电子设备。同时,该执行装置还可以为该电子设备的中央处理器(Central Processing Unit,CPU),或者该电子设备中的用于融合图像的控制模块。本申请实施例中以电子设备执行图像的融合方法为例,说明本申请实施例提供的图像的融合方法。
请参考图2A,本申请这里以电子设备为图2A所示的手机200为例,对本申请提供的电子设备进行介绍。其中,图2A所示的手机200仅仅是电子设备的一个范例,并且手机200可以具有比图中所示出的更多的或者更少的部件,可以组合两个或更多的部件,或者可以具有不同的部件配置。图2A中所示出的各种部件可以在包括一个或多个信号处理和/或专用集成电路在内的硬件、软件、或硬件和软件的组合中实现。
如图2A所示,手机200可以包括:处理器210,外部存储器接口220,内部存储器221,通用串行总线(universal serial bus,USB)接口230,充电管理模块240,电源管理模块241,电池242,天线1,天线2,移动通信模块250,无线通信模块260,音频模块270,扬声器270A,受话器270B,麦克风270C,耳机接口270D,传感器模块280,按键290,马达291,指示器292,摄像头293,显示屏294,以及用户标识模块(subscriber identification module,SIM)卡接口295等。
其中,上述传感器模块280可以包括压力传感器,陀螺仪传感器,气压传感器,磁传感器,加速度传感器,距离传感器,接近光传感器,指纹传感器,温度传感器,触摸传感器,环境光传感器和骨传导传感器等传感器。
处理器210可以包括一个或多个处理单元,例如:处理器210可以包括存储器,视频编解码器,基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以是手机200的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器210中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器210中的存储器为高速缓冲存储器。
在一些实施例中,处理器210可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
可以理解的是,本实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对手机200的结构限定。在另一些实施例中,手机200也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块240用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。充电管理模块240为电池242充电的同时,还可以通过电源管理模块241为电子设备供电。
电源管理模块241用于连接电池242,充电管理模块240与处理器210。电源管理模块241接收电池242和/或充电管理模块240的输入,为处理器210,内部存储器221,外部存储器,显示屏294,摄像头293,和无线通信模块260等供电。在一些实施例中,电源管理模块241和充电管理模块240也可以设置于同一个器件中。
手机200的无线通信功能可以通过天线1,天线2,移动通信模块250,无线通信模块260,调制解调处理器以及基带处理器等实现。在一些实施例中,手机200的天线1和移动通信模块250耦合,天线2和无线通信模块260耦合,使得手机200可以通过无线通信技术与网络以及其他设备通信。
天线1和天线2用于发射和接收电磁波信号。手机200中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。移动通信模块250可以提供应用在手机200上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块250可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块250还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块250的至少部分功能模块可以被设置于处理器210中。
无线通信模块260可以提供应用在手机200上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),调频(frequency modulation,FM),红外技术(infrared,IR)等无线通信的解决方案。例如,本申请实施例中,手机200可以通过无线通信模块260接入Wi-Fi网络。无线通信模块260可以是集成至少一个通信处理模块的一个或多个器件。
手机200通过GPU,显示屏294,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏294和应用处理器。
显示屏294用于显示图像,视频等。该显示屏294包括显示面板。例如,本申请实施例中,显示屏294可以用于显示图库界面和拍摄界面等。
手机200可以通过ISP,摄像头293,视频编解码器,GPU,显示屏294以及应用处理器等实现拍摄功能。ISP用于处理摄像头293反馈的数据。摄像头293用于捕获静态图像或视频。在一些实施例中,手机200可以包括1个或N个摄像头293,N为大于1的正整数。
在本申请实施例中,N个摄像头293可以包括:主摄像头、长焦摄像头和广角摄像头。可选的,N个摄像头293还可以包括:红外摄像头、深度摄像头或者黑白摄像头等至少一种摄像头。下面简单介绍上述各个摄像头的特点(即优势和劣势)以及适用场景。
(1)主摄像头。主摄像头具有进光量大、分辨率高,以及视场角适中的特点。主摄像头一般作为电子设备(如手机)的默认摄像头。也就是说,电子设备(如手机)响应于用户启动“相机”应用的操作,可以默认启动主摄像头,在预览界面显示主摄像头采集的图像。
(2)长焦摄像头。长焦摄像头的焦距较长,可适用于拍摄距离手机较远的拍摄对象(即远处的物体)。但是,长焦摄像头的进光量较小。在暗光场景下使用长焦摄像头拍摄图像,可能会因为进光量不足而影响图像质量。并且,长焦摄像头的视场角较小,不适用于拍摄较大场景的图像,即不适用于拍摄较大的拍摄对象(如建筑或风景等)。
(3)广角摄像头。广角摄像头的视场角较大,可适用于拍摄较大的拍摄对象(例如风景)。但是,广角摄像头的焦距较短,在广角摄像头拍摄距离较近的物体时,拍摄得到的广角图像中物体容易产生畸变(例如图像中的物体相较于原物体变得宽扁)。
(4)黑白摄像头。由于黑白摄像头没有滤光片;因此,相比于彩色摄像头而言,黑白摄像头的进光量较大;并且,黑白摄像头的对焦速度比彩色摄像头的对焦速度快。但是,黑白摄像头采集到的图像只能呈现出不同等级的灰度,不能呈现出拍摄对象的真实色彩。需要说明的是,上述主摄像头、长焦摄像头等均为彩色摄像头。
需要说明的是,本申请实施例中的视场角包括水平视场角和垂直视场角。
外部存储器接口220可以用于连接外部存储卡,例如Micro SD卡,实现扩展手机200的存储能力。外部存储卡通过外部存储器接口220与处理器210通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器221可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器210通过运行存储在内部存储器221的指令,从而执行手机200的各种功能应用以及数据处理。例如,在本申请实施例中,处理器210可以通过执行存储在内部存储器221中的指令,执行本申请实施例提供的图像的融合方法。内部存储器221可以包括存储程序区和存储数据区。
其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储手机200使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器221可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
手机200可以通过音频模块270,扬声器270A,受话器270B,麦克风270C,耳机接口270D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
按键290包括开机键,音量键等。按键290可以是机械按键。也可以是触摸式按键。马达291可以产生振动提示。马达291可以用于来电振动提示,也可以用于触摸振动反馈。指示器292可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。SIM卡接口295用于连接SIM卡。SIM卡可以通过插入SIM卡接口295,或从SIM卡接口295拔出,实现和手机200的接触和分离。手机200可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口295可以支持Nano SIM卡,Micro SIM卡,SIM卡等。
尽管图2A未示出,手机200还可以包括闪光灯、微型投影装置、近场通信(Near Field Communication,NFC)装置等,在此不再赘述。
可以理解的是,本实施例示意的结构并不构成对手机200的具体限定。在另一些实施例中,手机200可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
以下实施例中的方法均可以在具有上述硬件结构的电子设备中实现。以下实施例中以上述具有上述硬件结构的电子设备为例,对本申请实施例的方法进行说明。
在本申请实施例中,响应于用户的拍照操作,电子设备可以通过第一摄像头采集第一图像,通过第二摄像头采集第二图像,通过第三摄像头采集第三图像。其中,第三摄像头的视场角大于第一摄像头的视场角,第一摄像头的视场角大于第二摄像头的视场角。
需要说明的是,第三摄像头的视场角大于第一摄像头的视场角是指,第三摄像头的水平视场角大于第一摄像头的水平视场角,和/或第三摄像头的垂直视场角大于第一摄像头的垂直视场角。第一摄像头的视场角大于第二摄像头的视场角是指,第一摄像头的水平视场角大于第二摄像头的水平视场角,和/或第一摄像头的垂直视场角大于第二摄像头的垂直视场角。例如,第三摄像头是广角摄像头,第一摄像头是主摄像头,第二摄像头是长焦摄像头。
第一摄像头采集第一图像的取景范围是第一取景范围,第二摄像头采集第二图像的取景范围是第二取景范围,第三摄像头采集第三图像的取景范围是第三取景范围。第三取景范围大于第一取景范围,第一取景范围大于第二取景范围。之后,电子设备可以对第一图像和第三图像进行处理,并对第二图像、进行处理后的第一图像和进行处理后的第三图像进行图像融合,得到目标图像。如此,该目标图像可以具备第一图像、第二图像和第三图像的特点。
需要说明的是,本申请实施例中摄像头采集图像的取景范围是指,摄像头可以拍摄的区域范围。示例性的,如图2B所示,主摄像头可以采集到区域202中区域203对应的图像。也就是说,区域202中除区域203以外的区域不在主摄像头采集图像的取景范围内。并且,本申请实施例中,图像的取景范围与摄像头采集图像的取景范围相对应。例如,第一图像的取景范围可以指示第一摄像头采集第一图像的取景范围(即第一取景范围)。又例如,第二图像的取景范围可以指示第二摄像头采集第二图像的取景范围(即第二取景范围)。
以下实施例中,以第一摄像头为主摄像头,第二摄像头为长焦摄像头,第三摄像头为广角摄像头为例,对本申请实施例进行介绍。其中,在第一摄像头为主摄像头时,第一图像为主图像。在第二摄像头为长焦摄像头时,第二图像为长焦图像。在第三摄像头为广角摄像头时,第三图像为广角图像。
如图3所示,本申请实施例提供的图像的融合方法可以包括S301-S306。
S301、电子设备检测到第一操作。
其中,该第一操作为用户的一次操作,例如一次点击操作。可选的,第一操作可以为用户点击拍照按钮的操作,用于触发电子设备启动拍照功能。也就是说,电子设备接收用户的一次操作,即可拍摄图片。
S302、响应于第一操作,电子设备通过第一摄像头采集第一图像,通过第二摄像头采集第二图像,通过第三摄像头采集第三图像。
其中,第三取景范围大于第一取景范围,第一取景范围大于第二取景范围。可选的,第一摄像头是主摄像头,第二摄像头是长焦摄像头,第三摄像头是广角摄像头。
在本申请实施例中,第二图像的缩放倍率(也可以称为ZOOM倍率)大于第一图像的缩放倍率,第一图像的缩放倍率大于第三图像的缩放倍率。也就是说,在第一图像、第二图像和第三图像中,第二图像的缩放倍率最大,第一图像的缩放倍率适中,第三图像的缩放倍率最小。
需要说明的是,图像的缩放倍率越大,图像的取景范围的范围越小。例如,结合图1,假如主图像101的缩放倍率为1X,长焦图像102的缩放倍率为4X,广角图像103的缩放倍率为0.5X。如图1所示,图1中的(a)所示的主图像101的取景范围大于图1中的(b)所示的长焦图像102的取景范围,图1中的(a)所示的主图像101的取景范围小于图1中的(c)所示的广角图像103的取景范围。
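缩放倍率与取景范围的反比关系可以示意如下:在同一光学中心下,倍率为p的图像的取景范围,近似对应倍率为q(q&lt;p)的图像中央q/p边长比例的区域。下面的纯Python示意按该线性比例计算中央区域坐标;线性关系与函数名均为简化假设,未考虑畸变和光心偏移:

```python
def crop_for_zoom(width, height, base_ratio, target_ratio):
    """在倍率为 base_ratio、尺寸为 width×height 的图像中,
    计算与倍率 target_ratio 的取景范围重合的中央区域 (left, top, right, bottom)。"""
    assert target_ratio >= base_ratio > 0
    frac = base_ratio / target_ratio      # 边长缩放比例
    cw, ch = width * frac, height * frac  # 中央区域的宽和高
    left = (width - cw) / 2
    top = (height - ch) / 2
    return (left, top, left + cw, top + ch)

# 1X 主图像(4000×3000)中,与 4X 长焦图像取景范围重合的中央区域
region = crop_for_zoom(4000, 3000, 1, 4)
```

例如上式给出 (1500.0, 1125.0, 2500.0, 1875.0),即中央四分之一边长的区域。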
其中,第一图像包括第四区域图像,该第四区域图像的取景范围与第二取景范围相同。第三图像包括第五区域图像,该第五区域图像的取景范围与第一取景范围相同。示例性的,如图4所示,图4中的(a)所示主图像101包括的第四区域图像401,与图4中的(b)所示的长焦图像102的取景范围相同;图4中的(c)所示的广角图像103包括的第五区域图像402,与主图像101的取景范围相同。
在一些实施例中,响应于第一操作,电子设备可以在同一时刻通过主摄像头采集第一图像,通过长焦摄像头采集第二图像,通过广角摄像头采集第三图像。
需要说明的是,上述电子设备在同一时刻通过主摄像头采集第一图像,通过长焦摄像头采集第二图像,通过广角摄像头采集第三图像是指:主摄像头采集第一图像的时刻(可以称为第一时刻)、长焦摄像头采集第二图像的时刻(可以称为第二时刻)、广角摄像头采集第三图像时的时刻(如第三时刻)相同;或者,第一时刻与第二时刻之间的时间差、第一时刻与第三时刻之间的时间差、第二时刻与第三时刻之间的时间差均较小(例如时间差均小于1毫秒)。在第一时刻、第二时刻与第三时刻之间存在时间差时,本申请实施例对主摄像头采集第一图像、长焦摄像头采集第二图像和广角摄像头采集第三图像的顺序不作限定。
S303、电子设备对第一图像进行超分辨率重建,得到第四图像。
其中,第四图像包括第一区域图像,该第一区域图像相对于第一摄像头的取景范围是第四取景范围,第一取景范围包括第四取景范围,该第四取景范围与第二取景范围重合。也就是说,第一摄像头采用第四取景范围可以得到与第一区域图像取景范围重合的图像。示例性的,第一取景范围为图5中的(a)所示的图像501的取景范围,第四取景范围即为图5中的(a)所示的第一区域图像502的取景范围。以下实施例中,对于区域图像的取景范围的介绍,可以参考对于第一区域图像的取景范围的说明。
第一区域图像的分辨率与第二图像的分辨率相同。示例性的,如图5中的(a)所示,图像501包括第一区域图像502,该第一区域图像502的分辨率与图5中的(b)所示的第二图像102的分辨率相同。例如,第一区域图像的分辨率为2000万像素,第二图像的分辨率为2000万像素。
需要说明的是,本申请实施例中,两幅图像(例如图像A和图像B)的分辨率相同是指:图像A中水平方向的像素数量与图像B中水平方向的像素数量相同,图像A中垂直方向的像素数量与图像B中垂直方向的像素数量相同。例如,第一区域图像的分辨率为5000×4000,第二图像的分辨率为5000×4000。
需要说明的是,具体对于电子设备对第一图像进行超分辨率重建的方式,可以参考常规技术中对图像进行超分辨率重建的方法,本申请实施例对此不予赘述。例如,电子设备可以通过双线性插值法调整第一图像的分辨率,得到第四图像。
在一些实施例中,为了保障第一区域图像的分辨率与第二图像的分辨率相同,电子设备可以通过第一图像的缩放倍率和第二图像的缩放倍率之间的关系,对第一图像进行超分辨率重建,得到第四图像。
一种可能的实现方式,电子设备可以对第一图像进行第一倍率的超分辨率重建,得到第四图像。其中,第一倍率为第一图像的缩放倍率与第二图像的缩放倍率之间的比值。其中,第四图像的分辨率大于第一图像的分辨率。
一种可能的设计中,电子设备可以根据第一图像的缩放倍率、第二图像的缩放倍率和第二图像的分辨率,计算得到第四图像的分辨率。示例性的,电子设备可以通过公式一计算得到第四图像的分辨率。
M=(p÷q)²×b              公式一。
其中,M为第四图像的分辨率,p为第二图像的缩放倍率,q为第一图像的缩放倍率,b为第二图像的分辨率。(p÷q)为第一倍率。
例如,结合图1和图5,假如长焦图像102的缩放倍率为4X,主图像101的缩放倍率为1X,长焦图像102的分辨率为2000万像素;电子设备则可以结合公式一确定图像501的分辨率为:
M=(4÷1)²×2000
=32000。
也就是说,图像501的分辨率为3.2亿像素。
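上述公式一与算例可以直接写成代码验证(分辨率单位:万像素;函数名为示例假设):

```python
def target_resolution(p, q, b):
    """公式一:M = (p ÷ q)² × b。
    p 为第二图像(长焦图像)的缩放倍率,q 为第一图像(主图像)的缩放倍率,
    b 为第二图像的分辨率(万像素),返回第四图像的分辨率 M(万像素)。"""
    return (p / q) ** 2 * b

# 长焦 4X、主图 1X、长焦分辨率 2000 万像素 → 第四图像 32000 万像素(3.2 亿)
m = target_resolution(4, 1, 2000)
```

即放大倍率作用在长和宽两个方向上,因此总像素数按倍率的平方增长。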
S304、电子设备对第二图像和第四图像进行图像融合,得到第五图像。
其中,第五图像包括第七区域图像,该第七区域图像的取景范围与第二取景范围(或者第四区域图像的取景范围)相同。
在本申请实施例中,电子设备可以通过图像融合算法,将第二图像和第四图像进行融合,得到第五图像。其中,本申请实施例对图像融合算法不作限定。例如,该图像融合算法可以为高低频信息融合算法。又例如,该图像融合算法可以为多尺度融合算法。又例如,电子设备可以通过预设模型将第二图像和第四图像进行融合,得到第五图像。该预设模型可以为视觉几何组网络(Visual Geometry Group Network,VGG)模型、inception模型和ResNET模型等,本申请实施例对此不作限定。
可以理解的是,由于第四图像中的第一区域图像的分辨率与第二图像的分辨率相同,且第一区域图像的取景范围与第二取景范围相同。因此,电子设备可以对第二图像和第四图像进行图像融合。
可以理解的是,第二图像具备局部图像(即远处物体的图像)清晰的特点,第四图像具备整体图像较为清晰的特点。电子设备将第二图像和第四图像进行图像融合,可以综合第二图像和第四图像的特点,得到整体图像清晰度较高和局部图像清晰度较高的第五图像。也就是说,该第五图像综合了主图像和长焦图像的特点。如此,可以提高电子设备拍摄的图像质量。
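本申请对图像融合算法不作限定;下面仅以最简单的“区域加权平均”作示意,把取景范围重合、分辨率相同的局部清晰图按固定权重融合进整体图像。权重0.5与函数名均为示例假设,实际实现通常采用高低频信息融合或多尺度融合等算法:

```python
def fuse_region(base, detail, top, left, weight=0.5):
    """把 detail(局部清晰图)按权重 weight 融合进 base 中
    以 (top, left) 为左上角、与 detail 同尺寸的区域,返回新图像。"""
    out = [row[:] for row in base]  # 先复制整体图像
    for dy, drow in enumerate(detail):
        for dx, v in enumerate(drow):
            y, x = top + dy, left + dx
            out[y][x] = (1 - weight) * out[y][x] + weight * v
    return out

base = [[10] * 4 for _ in range(4)]   # 整体图像(4×4)
detail = [[30, 30], [30, 30]]         # 局部清晰图(2×2)
fused = fuse_region(base, detail, 1, 1)
```

重合区域内的像素取两幅图像的加权平均,区域外保持整体图像不变。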
S305、电子设备对第三图像进行超分辨率重建,得到第六图像。
其中,第六图像包括第二区域图像,第二区域图像相对于第三摄像头的取景范围是第五取景范围,第三取景范围包括第五取景范围,第二区域图像的取景范围(即第五取景范围)与第一图像(或者第五图像)的取景范围重合。示例性的,如图6中的(a)所示,第五取景范围即为第二区域图像602的取景范围。
在本申请实施例中,第二区域图像的分辨率与第五图像的分辨率相同。示例性的,如图6中的(a)所示的第六图像601包括第二区域图像602,该第二区域图像602的分辨率与图6中的(b)所示的第五图像501的分辨率相同。例如,第二区域图像的分辨率为3.2亿像素,第五图像的分辨率为3.2亿像素。
需要说明的是,对于电子设备对第三图像进行超分辨率重建,得到第六图像的过程的介绍,可以参考S303中对电子设备对第一图像进行超分辨率重建得到第四图像的说明,此处不予赘述。例如,电子设备可以通过双线性插值法调整第三图像的分辨率,得到第六图像。
在一些实施例中,为了保障第二区域图像的分辨率与第五图像的分辨率相同,电子设备可以通过第一图像的缩放倍率和第三图像的缩放倍率之间的关系,对第三图像进行超分辨率重建,得到第六图像。
一种可能的实现方式,电子设备可以对第三图像进行第二倍率的超分辨率重建,得到第六图像。其中,第二倍率为第一图像的缩放倍率与第三图像的缩放倍率之间的比值。第六图像的分辨率大于第三图像的分辨率。
一种可能的设计中,电子设备可以根据第一图像的缩放倍率、第三图像的缩放倍率和第五图像的分辨率,计算得到第六图像的分辨率。示例性的,电子设备可以通过公式二计算得到第六图像的分辨率。
N=(k÷j)²×c              公式二。
其中,N为第六图像的分辨率,k为第一图像的缩放倍率,j为第三图像的缩放倍率,c为第五图像的分辨率。(k÷j)为第二倍率。
例如,结合图1和图6,假如主图像101的缩放倍率为1X,广角图像103的缩放倍率为0.5X,第五图像的分辨率为32000万像素;电子设备则可以结合公式二确定第六图像601的分辨率为:
N=(1÷0.5)²×32000
=128000。
也就是说,第六图像601的分辨率为12.8亿像素。
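公式二与公式一形式相同,下面的示意把两步串联起来验证:先由公式一得到第四图像(亦即第五图像)的分辨率,再由公式二得到第六图像的分辨率(单位:万像素;函数名为示例假设):

```python
def scaled_resolution(ratio_hi, ratio_lo, base):
    """(ratio_hi ÷ ratio_lo)² × base,公式一与公式二的通用形式(单位:万像素)。"""
    return (ratio_hi / ratio_lo) ** 2 * base

m = scaled_resolution(4, 1, 2000)   # 公式一:第四图像 32000 万像素(3.2 亿)
n = scaled_resolution(1, 0.5, m)    # 公式二:第六图像 128000 万像素(12.8 亿)
```

两步都把倍率之比的平方乘到基准分辨率上,与上文算例一致。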
需要说明的是,本申请实施例中对电子设备得到第五图像和第六图像的顺序不作限定。示例性的,电子设备可以先执行S305,再执行S304。又例如,电子设备可以同时执行S304和S305。
S306、电子设备对第五图像和第六图像进行图像融合,得到第七图像。
其中,第七图像包括第八区域图像,该第八区域图像的取景范围与第一取景范围(或者第三图像中第五区域图像的取景范围)相同。
需要说明的是,对于电子设备对第五图像和第六图像进行图像融合得到第七图像的过程的介绍,可以参考S304中对于电子设备对第二图像和第四图像进行图像融合得到第五图像的说明,此处不予赘述。
需要说明的是,由于第六图像中的第二区域图像的分辨率与第五图像的分辨率相同,且第二区域图像的取景范围与第五图像的取景范围相同。因此,电子设备可以对第五图像和第六图像进行图像融合。
可以理解的是,第五图像具备整体图像较为清晰和局部图像(即远处物体的图像)清晰的特点,第六图像的取景范围较大。电子设备将第五图像和第六图像进行图像融合,可以综合第五图像和第六图像的特点,得到取景范围较大、整体图像清晰度较高和局部图像清晰度较高的第七图像。也就是说,该第七图像综合了主图像、长焦图像和广角图像的特点。如此,可以提高电子设备拍摄的图像质量。
基于上述技术方案,电子设备接收到用户的拍照操作,可以通过主摄像头、长焦摄像头和广角摄像头分别采集第一图像、第二图像和第三图像。并且,电子设备可以对第一图像、第二图像和第三图像进行图像处理,得到第七图像,该第七图像具备不同摄像头拍摄得到的图像的特点。也就是说,用户只需要一次操作,便可以通过电子设备得到一张具备多张图像特点的图像,简化了拍摄过程,提高了用户的使用体验。
需要说明的是,电子设备在拍摄图像时,可能会受到一些因素的影响,导致拍摄得到的图像质量较差。例如,当用户拍照时,若手部有抖动,电子设备拍摄出的图片可能会比较模糊。又例如,当光线较差时,电子设备拍摄出的图片的噪点可能比较严重。
在另一些实施例中,为了提高电子设备拍摄得到的图像质量,电子设备可以通过辅助摄像(即第四摄像头)协助拍摄图像。其中,本申请实施例对辅助摄像头不作限定。例如,该辅助摄像头可以为红外摄像头。又例如,该辅助摄像头可以为黑白摄像头。
示例性的,以辅助摄像头为红外摄像头为例,即电子设备通过红外摄像头协助拍摄图像。其中,上述红外摄像头不仅可以感知可见光,还可以感知红外光。例如,上述红外光可以为890纳米(nm)-990nm的红外光。即红外摄像头可以感知波长为890nm-990nm的红外光。当然,不同的红外摄像头能够感知的红外光(即红外光的波长)可以不同。其中,上述可见光摄像头也可以称为普通波段的摄像头,该普通波段是可见光的波长所在的波段。
在暗光场景(如傍晚、深夜或者暗室内)下,可见光的强度较低。主摄像头无法感知到光线或者感知到的光线较弱,因此无法采集到清晰图像。而红外摄像头可以感知取景范围内有温度的人或动物发出的红外光,因此可以采集到人或动物的图像。
针对红外摄像头的上述特点,电子设备在暗光场景下,采用主摄像头采集图像时,为了避免由于可见光较弱而影响图像质量,可以借助于红外摄像头能够感知红外光的优势,将红外摄像头作为辅助摄像头协助主摄像头工作,以提升主摄像头拍摄得到的图像质量。
如图7所示,该图像的融合方法可以包括S701-S707。
S701、电子设备检测到第一操作。
需要说明的是,对S701的介绍,可以参考上述实施例中S301的说明,此处不予赘述。
S702、响应于第一操作,电子设备通过第一摄像头采集第一图像,通过第二摄像头采集第二图像,通过第三摄像头采集第三图像,通过第四摄像头采集第八图像。
其中,该第八图像的取景范围与第一取景范围相同。
需要说明的是,本申请实施例中,第八图像的取景范围与第一取景范围相同是指:第八图像的取景范围与第一取景范围的相似程度为100%,或者,第八图像的取景范围与第一取景范围的相似程度大于预设相似阈值(例如99%、95%或90%等)。
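取景范围“相似程度大于预设相似阈值”的判定,可以用两个取景范围矩形的交并比(IoU)作一种示意实现;以IoU作为相似程度度量、阈值0.95与函数名均为示例假设:

```python
def iou(a, b):
    """计算两个取景范围矩形 (left, top, right, bottom) 的交并比。"""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))  # 交集宽
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))  # 交集高
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union

def same_viewfinder(a, b, threshold=0.95):
    """相似程度(IoU)不低于阈值时,视为取景范围相同。"""
    return iou(a, b) >= threshold

assert same_viewfinder((0, 0, 100, 100), (0, 0, 100, 100))       # 完全重合
assert not same_viewfinder((0, 0, 100, 100), (20, 0, 120, 100))  # 偏移过大
```

完全重合时IoU为1;水平偏移20%时IoU约为0.67,低于示例阈值。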
示例性的,如图8中的(a)所示,区域801为第一取景范围,区域802为第八图像的取景范围,区域801与区域802相同。又例如,如图8中的(b)所示,区域803为第一取景范围,区域804为第八图像的取景范围,区域803与区域804不完全相同。
在本申请实施例中,电子设备可以在同一时刻通过辅助摄像头采集第八图像,通过主摄像头采集第一图像,通过长焦摄像头采集第二图像,以及通过广角摄像头采集第三图像。
需要说明的是,对于同一时刻的说明,可以参考S302中对电子设备在同一时刻通过主摄像头采集第一图像,通过长焦摄像头采集第二图像,通过广角摄像头采集第三图像的描述,此处不予赘述。
S703、电子设备对第一图像和第八图像进行图像融合,得到第九图像。
其中,第九图像的取景范围与第一取景范围相同。第九图像包括第六区域图像,第六区域图像的取景范围与第二取景范围相同。
需要说明的是,本申请实施例对第一图像和第八图像的分辨率不作限定。例如,第一图像的分辨率可以大于第八图像的分辨率。又例如,第一图像的分辨率可以等于第八图像的分辨率。又例如,第一图像的分辨率可以小于第八图像的分辨率。
在一些实施例中,若第一图像的分辨率大于(或者等于)第八图像的分辨率,电子设备可以直接将第一图像和第八图像进行图像融合,得到第九图像。
需要说明的是,在电子设备将第一图像和第八图像进行图像融合的过程中,电子设备可以将第八图像放大至与第一图像的分辨率相同。
在另一些实施例中,若第一图像的分辨率小于第八图像的分辨率,电子设备则可以对第一图像进行放大,放大后的第一图像的分辨率与第八图像的分辨率相同。之后,电子设备可以将放大后的第一图像与第八图像进行图像融合,得到第九图像。
需要说明的是,对于电子设备对第一图像和第八图像进行图像融合得到第九图像的过程的介绍,可以参考S304中对于电子设备对第二图像和第四图像进行图像融合得到第五图像的说明,此处不予赘述。
可以理解的是,图像融合能够提高图像质量。电子设备将第一图像和第八图像进行图像融合,得到的第九图像的图像质量将高于第一图像(或者第八图像)的图像质量。如此,在暗光或者拍摄物体在移动的场景下,能够进一步提高电子设备拍摄的图像质量。
S704、电子设备对第九图像进行超分辨率重建,得到第四图像。
需要说明的是,对于电子设备对第九图像进行超分辨率重建得到第四图像的过程的介绍,可以参考S303中电子设备对第一图像进行超分辨率重建得到第四图像的过程的说明,此处不予赘述。
可以理解的是,电子设备融合得到的第九图像的图像质量高于第一图像(或者第八图像)的图像质量。电子设备对第九图像进行超分辨率重建,得到的第四图像的图像质量也会高于第一图像(或者第八图像)的图像质量。如此,能够提高电子设备拍摄的图像质量。
S705、电子设备对第四图像和第二图像进行图像融合,得到第五图像。
需要说明的是,对S705的介绍,可以参考上述实施例中S304的说明,此处不予赘述。
S706、电子设备对第三图像进行超分辨率重建,得到第六图像。
需要说明的是,对S706的介绍,可以参考上述实施例中S305的说明,此处不予赘述。
S707、电子设备对第六图像和第五图像进行图像融合,得到第七图像。
需要说明的是,对S707的介绍,可以参考上述实施例中S306的说明,此处不予赘述。
基于上述技术方案,电子设备接收到用户的拍照操作,可以通过主摄像头、长焦摄像头、广角摄像头和辅助摄像头分别采集第一图像、第二图像、第三图像和第八图像。并且,电子设备可以对第一图像、第二图像、第三图像和第八图像进行图像处理,得到第七图像,该第七图像具备不同摄像头拍摄得到的图像的特点。也就是说,用户只需要一次操作,便可以通过电子设备得到一张具备多张图像特点的图像,简化了拍摄过程,提高了用户的使用体验。
需要说明的是,为了便于用户通过电子设备拍摄得到具备主图像、长焦图像和广角图像特点的图像(即第七图像),电子设备可以设置预设拍摄模式。当电子设备的拍摄模式处于预设拍摄模式时,电子设备可以通过主摄像头、长焦摄像头和广角摄像头等摄像头拍摄得到第七图像。
在一些实施例中,在电子设备接收用户的拍照操作之前,电子设备可以启动拍摄应用,显示图像预览界面。之后,电子设备可以接收用户的拍摄模式切换操作,该拍摄模式切换操作用于触发电子设备改变拍摄模式。响应于拍摄模式切换操作,电子设备可以将拍摄模式切换为预设拍摄模式。示例性的,如图9中的(a)所示,在电子设备启动拍摄应用之后,电子设备可以显示图像预览界面901。该图像预览界面901包括取景框902、摄像头转化键903、拍摄键904、相册键905、预览图像906、闪光灯选项907、“录像”选项、“拍照”选项、“更多”选项等。之后,响应于用户作用于“更多”选项的选择操作(例如操作A),电子设备可以在图像预览界面901的上层显示如图9中的(b)所示的功能选项框908,该功能选项框908包括:“专业”模式的标识、“延时拍摄”模式的标识、“全景”模式的标识和“聚光灯”模式的标识等。其中,该“聚光灯”模式为用于拍摄得到第七图像的模式。也就是说,电子设备的拍摄模式为“聚光灯”模式时,电子设备可以通过多个摄像头拍摄得到第七图像。之后,响应于用户作用于“聚光灯”模式的标识的操作(例如操作B),电子设备可以将电子设备的拍摄模式切换为“聚光灯”模式(如图9中的(c)所示)。
这样一来,在电子设备的拍摄模式处于预设拍摄模式时,电子设备可以接收用户的拍照操作,拍摄得到第七图像。例如,电子设备可以接收如图9中的(c)所示的作用于拍摄键904的操作C(即电子设备执行S301),拍摄得到第七图像(即电子设备执行S302-S306)。
需要说明的是,在电子设备的拍摄模式处于预设拍摄模式时,电子设备可以启动至少一个摄像头获取预览图像。
在一些实施例中,在电子设备的拍摄模式处于预设拍摄模式时,电子设备可以启动主摄像头、长焦摄像头和广角摄像头获取预览图像。可选的,电子设备也可以启动主摄像头、辅助摄像头、长焦摄像头和广角摄像头获取预览图像。其中,该预览图像为第七图像。也就是说,该预览图像为电子设备对多个摄像头采集到的图像进行处理(即S303-S306)之后的图像。
可以理解的是,电子设备启动主摄像头、长焦摄像头和广角摄像头获取预览图像,可以在图像预览界面显示已经处理后的图像(即第七图像)。如此,可以使用户直观地了解在该预设拍摄模式下得到的图像效果,提高了用户的拍摄体验。
在另一些实施例中,在电子设备的拍摄模式处于预设拍摄模式时,电子设备可以启动任一摄像头(例如主摄像头、长焦摄像头或者广角摄像头)采集预览图像。其中,该预览图像为电子设备启动的摄像头采集的图像。例如,电子设备可以启动主摄像头采集预览图像(如图9中的(a)所示的预览图像906)。又例如,电子设备可以启动广角摄像头采集预览图像(如图9中的(c)所示的预览图像906)。通常情况下,电子设备可以启动广角摄像头采集预览图像。如此,可以使用户了解电子设备拍摄的最大区域。
可以理解的是,电子设备启动任一摄像头采集预览图像时,电子设备无需对采集到的图像进行处理,降低了电子设备的功耗。
在一些实施例中,电子设备可以提高用户指定区域的图像质量。具体的,电子设备可以接收用户的倍率设置操作,该倍率设置操作用于触发电子设备设置第一图像、第二图像和第三图像的缩放倍率。响应于用户的倍率设置操作,电子设备可以设置第一预设倍率、第二预设倍率和第三预设倍率,其中,第一预设倍率为第一图像的缩放倍率,第二预设倍率为第二图像的缩放倍率,第三预设倍率为第三图像的缩放倍率。示例性的,如图10中的(a)所示,响应于用户作用于“设置倍率”1001的操作D,电子设备可以显示图10中的(b)所示的倍率设置窗口1002;或者,电子设备可以响应于作用于图9中的(b)所示的“聚光灯”模式的标识的操作,显示倍率设置窗口1002。该倍率设置窗口1002包括:主图像的倍率输入框、长焦图像的倍率输入框和广角图像的倍率输入框;其中,用户可以在倍率输入框输入图像的缩放倍率。例如,主图像的缩放倍率(即第一预设倍率)为1X,长焦图像的缩放倍率(即第二预设倍率)为4X,广角图像的缩放倍率(即第三预设倍率)为0.5X。可选的,该倍率设置窗口1002还可以包括:“保存”按钮、“重置按钮”和“取消”按钮。“保存”按钮用于触发电子设备保存用户输入的图像的缩放倍率,“重置”按钮用于触发电子设备将图像的缩放倍率设置为默认缩放倍率,本申请实施例对默认缩放倍率不作限定(例如主图像的默认缩放倍率为1X,长焦图像的默认缩放倍率为3.5X,广角图像的默认缩放倍率为0.6X)。“取消”按钮用于触发电子设备不显示该倍率设置窗口1002。
可以理解的是,通过倍率设置操作,电子设备可以调整第一图像、第二图像和第三图像的显示区域。如此,电子设备可以根据用户的意向,设置第七图像中具备不同特点的显示区域,提高了用户的使用体验。
需要说明的是,在电子设备拍摄得到第七图像(即电子设备执行S306或者电子设备执行S707)之后,电子设备可以接收第二操作,该第二操作用于触发电子设备显示第七图像。响应于第二操作,电子设备可以按照预设模式显示第七图像。其中,预设模式包括动态模式和静态模式。动态模式用于指示电子设备以第三区域图像为中心自动缩放第七图像,第七图像包括第三区域图像,第三区域图像相对于第三摄像头的取景范围是第六取景范围,第三取景范围包括第六取景范围,第六取景范围与第二取景范围重合。静态模式用于指示电子设备在接收到用户输入的缩放操作后缩放第七图像。
在一些实施例中,电子设备可以按照动态模式显示第七图像。具体的,电子设备接收第二操作。响应于用户的第二操作,电子设备可以显示动态图像界面(即第一界面),该动态图像界面用于播放第七图像以第三区域图像为中心自动缩放的动态画面。具体的,第七图像可以第三区域图像为中心,按照缩放倍率从大到小自动缩放该第七图像。其中,第七图像的最大缩放倍率为第二图像的缩放倍率,第七图像的最小缩放倍率为第三图像的缩放倍率。
示例性的,如图11所示,响应于用户的第二操作,电子设备可以显示动态图像界面1101,该动态图像界面1101包括动态图像1102,该动态图像1102可以以第三区域图像1105为中心,按照缩放倍率从大到小自动缩放该第七图像。示例性的,用户在用户界面看到的动态画面可以为:界面先呈现图4中的(c)对应的图像,缩放到呈现图4中的(a)对应的图像,再缩放到呈现图4中的(b)对应的图像。可选的,该动态画面的缩放过程可以反复重放,例如,界面呈现图4中的(b)对应的图像后,跳转回图4中的(c)对应的图像,重放该动态变化过程;也可以不重复,当缩放到图4中的(b)对应的图像后静止,显示最后一帧图像。该动态图像可以响应于用户的手指按压该动态图像的任意位置进行动态显示,当用户的手指离开屏幕、结束长按时,结束该动态图像的动态显示;也可以响应于用户的手指长按该动态图像的任意位置2秒进行动态显示,当用户的手指离开屏幕、结束长按后,该动态图像继续进行动态显示,并不会停止;还可以设置播放图标,用户通过点击该图标,动态显示该动态图像。对于该动态图像的显示方式以及响应于何种操作进行动态显示,本申请实施例不作限定。可选的,该动态图像界面还可以包括第一速度选项,该第一速度选项用于调整动态画面中的播放速度。电子设备响应于用户对第一速度选项的调整操作,可以调整动态画面的播放速度。示例性的,结合图11,动态图像界面1101还包括第一速度选项1103。例如,第一速度选项1103可以指示电子设备以100pixel/s或20%FOV/s自动缩放第七图像。
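自动缩放的动态画面本质上是一串缩放倍率递减的帧序列;下面按固定的倍率变化速度生成每帧的缩放倍率作示意,起止倍率、速度与帧率均为示例假设:

```python
def zoom_sequence(start, stop, speed_per_s, fps=30):
    """从 start 倍率匀速缩放到 stop 倍率。
    speed_per_s 为每秒变化的倍率数,返回每一帧的缩放倍率列表(含首尾)。"""
    step = speed_per_s / fps
    if start >= stop:
        step = -step  # 从大倍率缩小到小倍率
    ratios, cur = [], start
    while (step < 0 and cur > stop) or (step > 0 and cur < stop):
        ratios.append(round(cur, 6))
        cur += step
    ratios.append(stop)
    return ratios

# 从 4X(长焦取景)缩放到 0.5X(广角取景),每秒变化 3.5 倍率、每秒 7 帧
frames = zoom_sequence(4.0, 0.5, speed_per_s=3.5, fps=7)
```

调整速度选项即改变 speed_per_s,调整最大缩放倍率即改变 start。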
可选的,该动态图像界面还可以包括第一倍率选项,该第一倍率选项用于调整第一界面中第七图像的最大缩放倍率。其中,第七图像的最大放大倍率小于或者等于第二预设倍率(即第二图像的缩放倍率)。电子设备响应于用户对第一倍率选项的调整操作,可以调整第七图像的最大缩放倍率。例如,结合图11,动态图像界面1101还包括第一倍率选项1104。例如,第二图像的缩放倍率为4X时,第七图像的最大放大倍率可以为4X,也可以为3X。
可以理解的是,电子设备以动态模式显示第七图像,可以使用户观看动态图像,增加了图像查看的趣味性。并且,通过第一速度选项和第一倍率选项,可以使用户调整图像的缩放速度和最大放大倍率,提高了用户的使用体验。
在一些实施例中,电子设备可以按照静态模式显示第七图像。具体的,响应于用户的第二操作,电子设备可以显示静态图像界面(即第二界面),该静态图像界面包括不能自动缩放的第七图像。可选的,该图像显示界面还可以包括预设倍率选项,该预设倍率选项用于设置第七图像的缩放倍率。电子设备可以接收用户在预设倍率选项设置的缩放倍率,按照用户设置的缩放倍率显示第七图像。示例性的,如图12所示,电子设备显示的图像显示界面1201可以包括第七图像1202和预设倍率选项1203。例如,该预设倍率选项1203可以包括第一预设倍率(如1X)、第二预设倍率(如4X)和第三预设倍率(如0.5X)。也就是说,响应于用户对预设倍率选项1203的操作,电子设备可以将第七图像的缩放倍率设置为4X、1X或者0.5X。
可以理解的是,在电子设备接收到作用于预设倍率选项的操作后,电子设备可以显示以预设倍率缩放的图像。也就是说,第七图像可以显示具备不同图像(例如第一图像、第二图像或第三图像)特点的区域。如此,可以使用户在一幅图像中就可以看到不同图像的特点,提高了用户的使用体验。
在一些实施例中,电子设备可以切换第七图像的显示模式。一种可能的设计中,当电子设备显示的界面为第一界面,即电子设备播放第七图像以第三区域图像为中心自动缩放的动态画面;第一界面还包括手动播放按钮,该手动播放按钮用于触发电子设备显示第二界面。电子设备可以接收作用于手动播放按钮的操作,将显示的界面从第一界面切换为第二界面。示例性的,如图13中的(a)所示,电子设备显示的动态图像界面1301可以包括手动播放按钮1302,例如,该手动播放按钮1302可以为“动态模式”。响应于用户作用于手动播放按钮1302的操作,电子设备显示的界面可以从动态图像界面1301切换为如图13中的(b)所示静态图像界面1303。
另一种可能的设计中,当电子设备显示的界面为第二界面,第二界面还可以包括自动播放按钮,该自动播放按钮用于触发电子设备显示第一界面。例如,如图13中的(b)所示,静态图像界面1303包括自动播放按钮1304,例如,该自动播放按钮1304可以为“静态模式”。电子设备可以接收作用于自动播放按钮的操作,将显示的界面从第二界面切换为第一界面。
可以理解的是,电子设备接收用户的操作之后,可以切换第七图像的显示模式。如此,可以使用户灵活选择显示第七图像的模式,提高了用户的使用体验。
需要说明的是,电子设备中存储的图像类型较多。其中,电子设备可以通过常规的显示方式显示常规类型的图像。例如,电子设备可以在屏幕显示长焦图像、主图像等。电子设备也可以通过预设模式显示上述第七图像。
在一些实施例中,为了便于电子设备确定图像的显示方式,电子设备在保存第七图像时,可以在第七图像的图像信息中添加第一标识,该第一标识用于指示电子设备按照预设模式显示第七图像。响应于第二操作,电子设备可以根据图像的图像信息,确定是否以预设模式显示图像。示例性的,电子设备在接收到用户的第二操作之后,可以检测图像的图像信息是否存在第一标识。若图像信息存在第一标识,电子设备可以按照预设模式显示该图像。若图像信息不存在第一标识,电子设备可以按照常规的显示方式显示该图像。
可以理解的是,电子设备为第七图像添加第一标识,可以使电子设备按照预设模式显示第七图像。如此,增加了用户查看图像的趣味性,提高了用户的使用体验。
需要说明的是,在电子设备拍摄得到第七图像之后,电子设备可以向其他的电子设备分享该第七图像。其他的电子设备可以根据该电子设备是否具备识别第一标识的能力,按照不同的显示方式显示第七图像。
在一些实施例中,接收到第七图像的电子设备(例如第一设备)可以按照预设模式显示第七图像,该第一设备具备识别第一标识的能力。具体的,电子设备可以向第一设备分享第七图像。第一设备接收到第七图像之后,可以检测并识别第七图像的第一标识。然后,第一设备可以按照预设模式显示第七图像。例如,响应于第二操作,第一设备可以按照动态模式显示第七图像。
在另一些实施例中,接收到第七图像的电子设备(例如第二设备)可以按照常规的显示方式显示第七图像,该第二设备不具备识别第一标识的能力。具体的,电子设备可以向第二设备分享第七图像。第二设备接收到第七图像之后,无法识别第七图像的第一标识。然后,第二设备可以按照常规的显示方式显示第七图像。例如,响应于第二操作,第二设备可以在第二设备的屏幕上显示第七图像。
需要说明的是,若其他的电子设备不具备识别第一标识的能力,该电子设备则无法显示自动缩放的第七图像,影响用户的使用体验。
在另一些实施例中,为了使无法识别第一标识的电子设备能够查看自动缩放的第七图像,电子设备可以转换第七图像的文件格式。具体的,当电子设备显示的界面为第一界面,即电子设备播放第七图像以第三区域图像为中心自动缩放的动态画面;第一界面还可以包括格式转换按钮,该格式转换按钮用于触发电子设备转换第七图像的文件格式。电子设备可以接收作用于格式转换按钮的第三操作,生成第一文件,该第三操作用于触发电子设备转换第七图像的文件格式,第一文件为第七图像以第三区域图像为中心自动缩放的文件。示例性的,电子设备可以生成视频文件,或者,电子设备可以生成动态图像文件。之后,电子设备可以向其他的电子设备分享第一文件。例如,第一文件为图形交换格式(Graphics Interchange Format,GIF)图像时,电子设备可以向其他的电子设备分享GIF图。
需要说明的是,第一文件中第七图像自动缩放的速度和第七图像能够放大的最大倍率可以由第一速度选项和第一倍率选项设定。本申请实施例对此不作限定。
可以理解的是,第一文件可以在任一电子设备中自动缩放第七图像。如此,其他的电子设备接收到第一文件之后,可以显示自动缩放的第七图像,提高了用户的使用体验。
上述主要从电子设备的角度对本申请实施例提供的方案进行了介绍。可以理解的是,电子设备为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本申请所公开的实施例描述的各示例的一种图像的融合方法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是电子设备软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本申请实施例可以根据上述方法示例对图像的融合装置进行功能模块或者功能单元的划分,例如,可以对应各个功能划分各个功能模块或者功能单元,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块或者功能单元的形式实现。其中,本申请实施例中对模块或者单元的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
本申请另一些实施例提供了一种电子设备(如图2A所示的手机200),该电子设备可以包括:存储器和一个或多个处理器。该存储器和处理器耦合。该电子设备还可以包括摄像头。或者,该电子设备可以外接摄像头。该存储器用于存储计算机程序代码,该计算机程序代码包括计算机指令。当处理器执行计算机指令时,电子设备可执行上述方法实施例中手机执行的各个功能或者步骤。该电子设备的结构可以参考图2A所示的手机200的结构。
本申请实施例还提供一种芯片系统,如图14所示,该芯片系统包括至少一个处理器1401和至少一个接口电路1402。处理器1401和接口电路1402可通过线路互联。例如,接口电路1402可用于从其它装置(例如电子设备的存储器)接收信号。又例如,接口电路1402可用于向其它装置(例如处理器1401)发送信号。示例性的,接口电路1402可读取存储器中存储的指令,并将该指令发送给处理器1401。当所述指令被处理器1401执行时,可使得电子设备(如图2A所示的手机200)执行上述实施例中的各个步骤。当然,该芯片系统还可以包含其他分立器件,本申请实施例对此不作具体限定。
本申请实施例还提供一种计算机可读存储介质,该计算机可读存储介质包括计算机指令,当所述计算机指令在上述电子设备(如图2A所示的手机200)上运行时,使得该电子设备执行上述方法实施例中手机执行的各个功能或者步骤。
本申请实施例还提供一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行上述方法实施例中手机执行的各个功能或者步骤。
通过以上实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上内容,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (22)

  1. 一种图像的融合方法,其特征在于,应用于电子设备,所述电子设备包括第一摄像头、第二摄像头和第三摄像头,所述第一摄像头的视场角大于所述第二摄像头的视场角,所述第三摄像头的视场角大于所述第一摄像头的视场角,所述方法包括:
    所述电子设备检测到第一操作;
    响应于所述第一操作,所述电子设备通过所述第一摄像头采集第一图像,通过所述第二摄像头采集第二图像,通过所述第三摄像头采集第三图像;其中,所述第一摄像头采集所述第一图像的取景范围是第一取景范围,所述第二摄像头采集所述第二图像的取景范围是第二取景范围,所述第三摄像头采集所述第三图像的取景范围是第三取景范围,所述第三取景范围大于所述第一取景范围,所述第一取景范围大于所述第二取景范围;
    所述电子设备对所述第一图像进行处理,得到第四图像;其中,所述第四图像包括第一区域图像,所述第四图像中所述第一区域图像的分辨率和所述第二图像的分辨率相同,所述第一区域图像相对于所述第一摄像头的取景范围是第四取景范围,所述第一取景范围包括所述第四取景范围,所述第四取景范围与所述第二取景范围重合;
    所述电子设备对所述第二图像和所述第四图像进行图像融合,得到第五图像;
    所述电子设备对所述第三图像进行处理,得到第六图像;其中,所述第六图像包括第二区域图像,所述第六图像中所述第二区域图像的分辨率和所述第一图像的分辨率相同,所述第二区域图像相对于所述第三摄像头的取景范围是第五取景范围,所述第三取景范围包括所述第五取景范围,所述第五取景范围与所述第一取景范围重合;
    所述电子设备对所述第五图像和所述第六图像进行图像融合,得到第七图像。
  2. 根据权利要求1所述的方法,其特征在于,所述电子设备对所述第一图像进行处理,得到第四图像,包括:
    所述电子设备对所述第一图像进行超分辨率重建,得到所述第四图像。
  3. 根据权利要求1或2所述的方法,其特征在于,所述电子设备对所述第三图像进行处理,得到第六图像,包括:
    所述电子设备对所述第三图像进行超分辨率重建,得到所述第六图像。
  4. 根据权利要求1-3中任一项所述的方法,其特征在于,在所述电子设备对所述第五图像和所述第六图像进行图像融合,得到第七图像之后,所述方法还包括:
    所述电子设备接收第二操作,所述第二操作用于触发所述电子设备显示所述第七图像;
    响应于第二操作,所述电子设备显示第一界面,所述第一界面用于播放所述第七图像以第三区域图像为中心,自动缩放的动态画面;其中,所述第七图像包括所述第三区域图像,所述第三区域图像相对于所述第三摄像头的取景范围是第六取景范围,所述第三取景范围包括所述第六取景范围,所述第六取景范围与所述第二取景范围重合。
  5. 根据权利要求4所述的方法,其特征在于,所述第一界面还包括第一速度选项和第一倍率选项;其中,所述第一速度选项用于调整所述动态画面中的播放速度,所述第一倍率选项用于调整所述第一界面中所述第七图像的最大缩放倍率;
    所述方法还包括:
    所述电子设备响应于所述用户对所述第一速度选项的调整操作,调整所述动态画面的播放速度;
    所述电子设备响应于所述用户对所述第一倍率选项的调整操作,调整所述第七图像的最大缩放倍率。
  6. 根据权利要求4或5所述的方法,其特征在于,所述第一界面还包括手动播放按钮,所述手动播放按钮用于触发所述电子设备显示第二界面;
    其中,所述第二界面包括所述第七图像和预设倍率选项,所述预设倍率选项用于设置所述第七图像的缩放倍率。
  7. 根据权利要求1-3中任一项所述的方法,其特征在于,在所述电子设备对所述第五图像和所述第六图像进行图像融合,得到第七图像之后,所述方法还包括:
    所述电子设备接收第二操作,所述第二操作用于触发所述电子设备显示所述第七图像;
    响应于所述第二操作,所述电子设备显示第二界面;其中,所述第二界面包括所述第七图像和预设倍率选项,所述预设倍率选项用于设置所述第七图像的缩放倍率;
    所述方法还包括:
    所述电子设备接收所述用户在所述预设倍率选项设置的缩放倍率,按照所述用户设置的缩放倍率显示所述第七图像。
  8. 根据权利要求7所述的方法,其特征在于,所述第二界面还包括自动播放按钮,所述自动播放按钮用于触发所述电子设备显示第一界面;
    其中,所述第一界面用于播放所述第七图像以第三区域图像为中心,自动缩放的动态画面;所述第七图像包括所述第三区域图像,所述第三区域图像相对于所述第三摄像头的取景范围是第六取景范围,所述第三取景范围包括所述第六取景范围,所述第六取景范围与所述第二取景范围重合。
  9. 根据权利要求4-6中任一项所述的方法,其特征在于,所述第一界面包括格式转换按钮,所述格式转换按钮用于触发所述电子设备转换所述第七图像的文件格式;
    所述方法还包括:
    所述电子设备接收作用于所述格式转换按钮的第三操作,生成第一文件;其中,所述第一文件为所述第七图像以所述第三区域图像为中心、自动缩放的视频文件;或者,所述第一文件为以所述第三区域图像为中心、自动缩放所述第七图像的图形交换格式GIF图像。
  10. 根据权利要求1-9中任一项所述的方法,其特征在于,所述第一摄像头为主摄像头,所述第二摄像头为长焦摄像头,所述第三摄像头为广角摄像头。
  11. 一种图像的融合方法,其特征在于,应用于电子设备,所述电子设备包括第一摄像头、第二摄像头、第三摄像头和第四摄像头,所述第一摄像头的视场角大于所述第二摄像头的视场角,所述第三摄像头的视场角大于所述第一摄像头的视场角,所述第四摄像头的视场角与所述第一摄像头的视场角相同,所述方法包括:
    所述电子设备检测到第一操作;
    响应于所述第一操作,所述电子设备通过所述第一摄像头采集第一图像,通过所述第二摄像头采集第二图像,通过所述第三摄像头采集第三图像,通过所述第四摄像头采集第八图像;其中,所述第一摄像头采集所述第一图像的取景范围是第一取景范围,所述第二摄像头采集所述第二图像的取景范围是第二取景范围,所述第三摄像头采集所述第三图像的取景范围是第三取景范围,所述第三取景范围大于所述第一取景范围,所述第一取景范围大于所述第二取景范围,所述第四摄像头采集所述第八图像的取景范围与所述第一取景范围相同;
    所述电子设备对所述第一图像和所述第八图像进行图像融合,得到第九图像;
    所述电子设备对所述第九图像进行处理,得到第四图像;其中,所述第四图像包括第一区域图像,所述第四图像中所述第一区域图像的分辨率和所述第二图像的分辨率相同,所述第一区域图像相对于所述第一摄像头的取景范围是第四取景范围,所述第一取景范围包括所述第四取景范围,所述第四取景范围与所述第二取景范围重合;
    所述电子设备对所述第二图像和所述第四图像进行图像融合,得到第五图像;
    所述电子设备对所述第三图像进行处理,得到第六图像;其中,所述第六图像包括第二区域图像,所述第六图像中所述第二区域图像的分辨率和所述第一图像的分辨率相同,所述第二区域图像相对于所述第三摄像头的取景范围是第五取景范围,所述第三取景范围包括所述第五取景范围,所述第五取景范围与所述第一取景范围重合;
    所述电子设备对所述第五图像和所述第六图像进行图像融合,得到第七图像。
  12. 根据权利要求11所述的方法,其特征在于,所述电子设备对所述第九图像进行处理,得到第四图像,包括:
    所述电子设备对所述第九图像进行超分辨率重建,得到所述第四图像。
  13. 根据权利要求11或12所述的方法,其特征在于,所述电子设备对所述第三图像进行处理,得到第六图像,包括:
    所述电子设备对所述第三图像进行超分辨率重建,得到所述第六图像。
  14. 根据权利要求11-13中任一项所述的方法,其特征在于,在所述电子设备对所述第五图像和所述第六图像进行图像融合,得到第七图像之后,所述方法还包括:
    所述电子设备接收第二操作,所述第二操作用于触发所述电子设备显示所述第七图像;
    响应于第二操作,所述电子设备显示第一界面,所述第一界面用于播放所述第七图像以第三区域图像为中心,自动缩放的动态画面;其中,所述第七图像包括所述第三区域图像,所述第三区域图像相对于所述第三摄像头的取景范围是第六取景范围,所述第三取景范围包括所述第六取景范围,所述第六取景范围与所述第二取景范围重合。
  15. 根据权利要求14所述的方法,其特征在于,所述第一界面还包括第一速度选项和第一倍率选项;其中,所述第一速度选项用于调整所述动态画面中的播放速度,所述第一倍率选项用于调整所述第一界面中所述第七图像的最大缩放倍率;
    所述方法还包括:
    所述电子设备响应于所述用户对所述第一速度选项的调整操作,调整所述动态画面的播放速度;
    所述电子设备响应于所述用户对所述第一倍率选项的调整操作,调整所述第七图像的最大缩放倍率。
  16. 根据权利要求14或15所述的方法,其特征在于,所述第一界面还包括手动播放按钮,所述手动播放按钮用于触发所述电子设备显示第二界面;
    其中,所述第二界面包括所述第七图像和预设倍率选项,所述预设倍率选项用于设置所述第七图像的缩放倍率。
  17. 根据权利要求11-13中任一项所述的方法,其特征在于,在所述电子设备对所述第五图像和所述第六图像进行图像融合,得到第七图像之后,所述方法还包括:
    所述电子设备接收第二操作,所述第二操作用于触发所述电子设备显示所述第七图像;
    响应于所述第二操作,所述电子设备显示第二界面;其中,所述第二界面包括所述第七图像和预设倍率选项,所述预设倍率选项用于设置所述第七图像的缩放倍率;
    所述方法还包括:
    所述电子设备接收所述用户在所述预设倍率选项设置的缩放倍率,按照所述用户设置的缩放倍率显示所述第七图像。
  18. 根据权利要求17所述的方法,其特征在于,所述第二界面还包括自动播放按钮,所述自动播放按钮用于触发所述电子设备显示第一界面;
    其中,所述第一界面用于播放所述第七图像以第三区域图像为中心,自动缩放的动态画面;所述第七图像包括所述第三区域图像,所述第三区域图像相对于所述第三摄像头的取景范围是第六取景范围,所述第三取景范围包括所述第六取景范围,所述第六取景范围与所述第二取景范围重合。
  19. 一种电子设备,其特征在于,所述电子设备包括:存储器、显示屏和一个或多个处理器;所述存储器、所述显示屏与所述处理器耦合,所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,当所述计算机指令被所述一个或多个处理器执行时,使得所述电子设备执行如权利要求1-10中任一项所述的方法。
  20. 一种电子设备,其特征在于,所述电子设备包括:存储器、显示屏和一个或多个处理器;所述存储器、所述显示屏与所述处理器耦合,所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,当所述计算机指令被所述一个或多个处理器执行时,使得所述电子设备执行如权利要求11-18中任一项所述的方法。
  21. 一种计算机可读存储介质,其特征在于,包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如权利要求1-18中任一项所述的方法。
  22. 一种计算机程序产品,其特征在于,所述计算机程序产品在计算机上运行时,使得所述计算机执行如权利要求1-18中任一项所述的方法。
PCT/CN2022/079139 2021-05-10 2022-03-03 一种图像的融合方法及电子设备 WO2022237286A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US17/915,580 US20240212100A1 (en) 2021-05-10 2022-03-03 Image fusion method and electronic device
MX2022011895A MX2022011895A (es) 2021-05-10 2022-03-03 Metodo de fusion de imagenes y dispositivo electronico.
EP22743697.9A EP4117275A4 (en) 2021-05-10 2022-03-03 IMAGE FUSION METHOD AND ELECTRONIC DEVICE

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110506754.4A CN113364975B (zh) 2021-05-10 2021-05-10 一种图像的融合方法及电子设备
CN202110506754.4 2021-05-10

Publications (1)

Publication Number Publication Date
WO2022237286A1 true WO2022237286A1 (zh) 2022-11-17

Family

ID=77526208

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/079139 WO2022237286A1 (zh) 2021-05-10 2022-03-03 一种图像的融合方法及电子设备

Country Status (5)

Country Link
US (1) US20240212100A1 (zh)
EP (1) EP4117275A4 (zh)
CN (1) CN113364975B (zh)
MX (1) MX2022011895A (zh)
WO (1) WO2022237286A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113364975B (zh) * 2021-05-10 2022-05-20 Honor Device Co., Ltd. Image fusion method and electronic device
CN115550570B (zh) * 2022-01-10 2023-09-01 Honor Device Co., Ltd. Image processing method and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN204392356U (zh) * 2015-03-12 2015-06-10 王欣东 Mobile phone with multiple cameras of different fixed focal lengths
JP2018113683A (ja) * 2017-01-06 2018-07-19 Canon Inc. Image processing apparatus, image processing method, and program
CN108712608A (zh) * 2018-05-16 2018-10-26 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Terminal device photographing method and apparatus
CN110248081A (zh) * 2018-10-12 2019-09-17 Huawei Technologies Co., Ltd. Image capture method and electronic device
CN111062881A (zh) * 2019-11-20 2020-04-24 Realme Chongqing Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, storage medium, and electronic device
CN113364975A (zh) * 2021-05-10 2021-09-07 Honor Device Co., Ltd. Image fusion method and electronic device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7469064B2 (en) * 2003-07-11 2008-12-23 Panasonic Corporation Image display apparatus
WO2020019356A1 (zh) * 2018-07-27 2020-01-30 Huawei Technologies Co., Ltd. Method for a terminal to switch cameras, and terminal
CN111294517B (zh) * 2020-03-03 2021-12-17 Honor Device Co., Ltd. Image processing method and mobile terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4117275A4

Also Published As

Publication number Publication date
MX2022011895A (es) 2022-12-02
EP4117275A4 (en) 2023-11-29
CN113364975A (zh) 2021-09-07
US20240212100A1 (en) 2024-06-27
CN113364975B (zh) 2022-05-20
EP4117275A1 (en) 2023-01-11

Similar Documents

Publication Publication Date Title
WO2020186969A1 (zh) Multi-channel video recording method and device
WO2021093793A1 (zh) Photographing method and electronic device
WO2022262260A1 (zh) Photographing method and electronic device
EP2215843B1 (en) System and method for generating a photograph with variable image quality
WO2022237286A1 (zh) Image fusion method and electronic device
US20220321797A1 (en) Photographing method in long-focus scenario and terminal
WO2022237287A1 (zh) Image display method and electronic device
WO2020078273A1 (zh) Photographing method and electronic device
WO2022252660A1 (zh) Video shooting method and electronic device
CN113596316B (zh) Photographing method and electronic device
CN115484375A (zh) Shooting method and electronic device
WO2022257687A1 (zh) Video shooting method and electronic device
WO2022252649A1 (zh) Video processing method and electronic device
CN114531539B (zh) Shooting method and electronic device
CN117278850A (zh) Photographing method and electronic device
CN114466101B (zh) Display method and electronic device
RU2807091C1 (ru) Image fusion method and electronic device
CN114979458A (zh) Image capturing method and electronic device
CN116703701B (zh) Picture cropping method and electronic device
RU2809660C1 (ru) Framing method for multi-channel video recording, graphical user interface and electronic device
RU2789447C1 (ru) Method and device for multi-channel video recording
RU2822535C2 (ru) Method and device for multi-channel video recording
CN118354200A (zh) Video shooting method and electronic device
CN115802144A (zh) Video shooting method and related device

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2022743697

Country of ref document: EP

Effective date: 20220802

WWE Wipo information: entry into national phase

Ref document number: 17915580

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE