WO2022228274A1 - Preview image display method and electronic device in a zoom shooting scene - Google Patents


Info

Publication number: WO2022228274A1
Authority: WO (WIPO, PCT)
Prior art keywords: image, target, preview image, electronic device, displayed
Application number: PCT/CN2022/088235
Other languages: English (en), French (fr)
Inventors: 唐锦华, 祝炎明, 张亚运
Original assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to US18/557,205 (published as US20240244311A1)
Priority to EP22794745.4A (published as EP4311224A1)
Publication of WO2022228274A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632: GUIs specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/633: Electronic viewfinders displaying additional information relating to control or operation of the camera
    • H04N23/635: Region indicators; Field of view indicators
    • H04N23/667: Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N23/69: Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H04N23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Definitions

  • the present application relates to the field of electronic technologies, and in particular, to a preview image display method and electronic device in a zoom shooting scene.
  • Zoom photography enables electronic devices to capture distant scenes. Taking a mobile phone as an example, when the zoom ratio is increased, the object to be shot in the shooting interface is "zoomed in". This zoom shooting provides great convenience for users who like to shoot distant scenes.
  • the current zoom shooting method enlarges the object in the center area of the image when the zoom magnification is increased, but the object in the center area may not be the object the user wants to shoot. The user therefore has to move the mobile phone to find the desired object, which is time-consuming and laborious; if the phone moves a little too fast, the target object is easily missed. The operation is inconvenient and the experience is poor.
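The conventional center-area zoom described above can be sketched as follows: at zoom ratio N, only the centered 1/N-width by 1/N-height block of the captured frame is enlarged to fill the preview. This is an illustrative sketch, not code from the patent; the function name and integer-pixel conventions are assumptions.

```python
def center_crop_for_zoom(width, height, zoom_ratio):
    """Centered region (left, top, crop_w, crop_h) that a conventional
    digital zoom enlarges to fill the preview at the given ratio."""
    if zoom_ratio < 1:
        raise ValueError("zoom ratio must be >= 1")
    crop_w = int(width / zoom_ratio)   # visible width shrinks with the ratio
    crop_h = int(height / zoom_ratio)
    left = (width - crop_w) // 2       # crop is anchored at the frame center
    top = (height - crop_h) // 2
    return left, top, crop_w, crop_h

# e.g. a 4000x3000 frame at 10x keeps only the central 400x300 block
```

Whatever lies outside this fixed central block never reaches the preview, which is why the user must physically re-aim the phone.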
  • the purpose of the present application is to provide a preview image display method and electronic device in a zoom shooting scene, so as to improve the shooting experience when shooting distant objects.
  • a method for displaying a preview image in a zoom shooting scene is provided. The method can be applied to electronic devices such as mobile phones and tablet computers. The method includes: starting a camera application in the electronic device, where a camera on the electronic device captures a first image; in a zoom shooting mode, identifying a target shooting object on the first image; and displaying a first preview image, where the first preview image is a preview image corresponding to at least one target shooting object on the first image.
  • in the conventional zoom shooting mode, the displayed preview image corresponds to the image block in the center area of the first image, and the user has to move the electronic device to find the object to be photographed. In the technical solution of this application, by contrast, the first preview image corresponds to at least one target shooting object identified on the first image, so no such searching is needed.
  • for example, if the at least one target shooting object is a person, the first preview image is the preview image corresponding to that person; if it is an animal, the first preview image is the preview image corresponding to that animal. Unlike the previous zoom shooting mode, there is no need to move the electronic device to find the target shooting object, so the operation is convenient.
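The subject-centered preview described above amounts to re-anchoring the zoom crop on the detected target's bounding box instead of the frame center. The sketch below is illustrative only; the function name, the (x, y, w, h) box convention, and the edge-clamping behavior are assumptions, not taken from the patent.

```python
def subject_crop(frame_w, frame_h, bbox, zoom_ratio):
    """Zoom crop centered on a detected subject's bounding box,
    clamped so the crop stays inside the frame.
    bbox = (x, y, w, h) of the detected target."""
    crop_w = int(frame_w / zoom_ratio)
    crop_h = int(frame_h / zoom_ratio)
    cx = bbox[0] + bbox[2] / 2          # subject center, not frame center
    cy = bbox[1] + bbox[3] / 2
    left = int(min(max(cx - crop_w / 2, 0), frame_w - crop_w))
    top = int(min(max(cy - crop_h / 2, 0), frame_h - crop_h))
    return left, top, crop_w, crop_h
```

With this anchoring, a subject near the frame edge still fills the preview, and the displayed size of the target is larger than its size on the first image, as the method requires.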
  • the size of the at least one target photographing object on the first preview image is larger than the size of the at least one target photographing object on the first image.
  • for example, the magnification of the first image captured by the camera is 1×, while the magnification of the first preview image may be 15×, 20×, 25×, and so on. In other words, the first preview image is the preview image obtained by magnifying the first image, and what is enlarged is the at least one target shooting object on the first image.
  • the method further includes: while displaying the first preview image, also displaying a first window, where the first image and a first mark are displayed in the first window, and the first mark is used to mark the at least one target shooting object on the first image. That is to say, through the first mark in the first window, the user can determine which target shooting object on the first image the current preview image (i.e., the first preview image) corresponds to, which improves the user experience.
  • the method further includes: a second mark is also displayed in the first window, where the second mark is used to mark target shooting objects on the first image other than the at least one target shooting object, and the first mark is different from the second mark. That is to say, through the first window the user can determine not only which target shooting objects are on the first image, but also which of them the current preview image (e.g., the first preview image) corresponds to, which further improves the experience.
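One way the first window's marks could be computed is to scale each detected bounding box into the small overview window and style the currently previewed target differently from the other candidates. The dict layout and style names below are purely illustrative assumptions.

```python
def window_marks(bboxes, current_index, scale):
    """Scale detected-object boxes into the small overview window and tag
    the currently previewed target (first mark) differently from the other
    candidates (second mark)."""
    marks = []
    for i, (x, y, w, h) in enumerate(bboxes):
        # current target gets the distinct "first mark" styling
        style = "solid" if i == current_index else "dashed"
        marks.append({
            "rect": (int(x * scale), int(y * scale),
                     int(w * scale), int(h * scale)),
            "style": style,
        })
    return marks
```

When the user switches targets, only `current_index` changes, which naturally cancels the old first mark and restyles the new one.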
  • the first image includes a first target shooting object and a second target shooting object, and the first preview image is the preview image corresponding to the first target shooting object; when an operation for switching the target shooting object is detected, a second preview image is displayed, where the second preview image is the preview image corresponding to the second target shooting object.
  • the size of the second target shooting object on the second preview image is larger than the size of the second target shooting object on the first image.
  • that is, the first target shooting object is first photographed zoomed in, and through the target-switching operation the second target shooting object is then photographed zoomed in; the user does not need to move the electronic device to find it, so the operation is convenient.
  • the method further includes: displaying a first window while displaying the second preview image, where the first image and a second mark are displayed in the first window, and the second mark is used to mark the second target shooting object on the first image. When the second mark is displayed in the first window, the display of the first mark is canceled, or the second mark is displayed differently from the first mark, where the first mark is used to mark the first target shooting object on the first image. That is to say, when the first target shooting object is originally photographed zoomed in, the first mark is displayed in the first window to mark it; after the switch, the second mark marks the second target shooting object, and at the same time the first mark is either canceled or displayed differently from the second mark. In this way, the user can more easily tell which target shooting object on the first image the current preview image (e.g., the second preview image) corresponds to.
  • the first image includes a first target shooting object and a second target shooting object, and the first preview image is the preview image corresponding to the first target shooting object; when an operation for increasing the number of target shooting objects in the preview image is detected, a third preview image is displayed, where the third preview image is the preview image corresponding to both the first target shooting object and the second target shooting object, and the sizes of the first and second target shooting objects on the third preview image are larger than their sizes on the first image. That is, through this operation the first and second target shooting objects are photographed zoomed in together, and the user does not need to move the electronic device to find them, so the operation is convenient.
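Keeping both targets in a single zoomed preview amounts to cropping around the smallest box that covers their bounding boxes. A hedged sketch; the (x, y, w, h) convention and function name are assumptions:

```python
def union_bbox(bboxes):
    """Smallest axis-aligned box covering every selected target.
    The zoom crop can then be centered on this union so all targets
    remain visible in the third preview image."""
    xs = [b[0] for b in bboxes]
    ys = [b[1] for b in bboxes]
    x2s = [b[0] + b[2] for b in bboxes]  # right edges
    y2s = [b[1] + b[3] for b in bboxes]  # bottom edges
    x, y = min(xs), min(ys)
    return x, y, max(x2s) - x, max(y2s) - y
```

The achievable zoom ratio is then bounded by how small this union is relative to the full frame: widely separated targets permit less magnification than adjacent ones.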
  • the first image includes a first target shooting object and a second target shooting object, and the first preview image is the preview image corresponding to the first target shooting object; when an operation for increasing the number of target shooting objects in the shooting interface is detected, the first preview image is displayed in a first area on the display screen of the electronic device and a fourth preview image is displayed in a second area, where the fourth preview image is the preview image corresponding to the second target shooting object. That is, through this operation the first and second target shooting objects are photographed zoomed in and displayed in a split screen, and the user does not need to move the electronic device to find them, so the operation is convenient.
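The split-screen arrangement could be computed as equal partitions of the display, one area per zoomed target. This is an illustrative sketch only; the side-by-side/stacked choice and the integer division are assumptions:

```python
def split_screen_areas(screen_w, screen_h, n_targets, vertical=True):
    """Partition the display into equal areas (x, y, w, h),
    one zoomed preview per target."""
    areas = []
    for i in range(n_targets):
        if vertical:                      # side-by-side columns
            w = screen_w // n_targets
            areas.append((i * w, 0, w, screen_h))
        else:                             # stacked rows
            h = screen_h // n_targets
            areas.append((0, i * h, screen_w, h))
    return areas
```

Each area then receives its own subject-centered crop, so both targets are shown enlarged at once.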
  • the method further includes: in a case where no target shooting object is recognized on the first image, displaying a fifth preview image, where the fifth preview image is the preview image corresponding to the image block in the center area of the first image. That is, if no target shooting object is identified, the object in the center area of the first image is photographed zoomed in.
  • the method further includes: detecting a photographing instruction; and in response to the photographing instruction, photographing to obtain the first image and a second image, where the second image is the captured image corresponding to the first preview image. That is to say, during zoom shooting, if the shooting button is tapped, both a complete image (i.e., the first image) and an enlarged image of the at least one target object (i.e., the second image) are captured, which is convenient for the user and provides a better experience.
  • the at least one target shooting object may be the shooting object occupying the largest or the smallest area on the first image; alternatively, it may be the shooting object in the area close to the center, or close to the edge, of the first image; alternatively, it may be a shooting object that the user is interested in; or it may be a target shooting object specified by the user.
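The largest-area/smallest-area selection rules listed above reduce to comparing bounding-box areas of the detected objects. A sketch with assumed names; the patent does not prescribe this code:

```python
def pick_target(bboxes, strategy="largest"):
    """Auto-select a target among detected objects by the area its
    bounding box (x, y, w, h) occupies on the first image."""
    areas = [w * h for (_, _, w, h) in bboxes]
    if strategy == "largest":
        return areas.index(max(areas))
    return areas.index(min(areas))        # "smallest" strategy
```

The center/edge and user-specified alternatives would replace the area comparison with a distance-to-center test or an explicit index from a tap event.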
  • the method further includes: when a window hiding operation is detected, hiding the first window; and when a window call-out operation is detected, displaying the first window. That is to say, the first window can be called up or hidden. For example, when the user wants to check which target shooting object on the first image the current preview image (e.g., the first preview image) corresponds to, the first window can be called up; when the user does not want the first window to block the first preview image, it can be hidden.
  • an electronic device comprising:
  • a processor, a memory, and one or more programs;
  • the one or more programs are stored in the memory, and the one or more programs include instructions that, when executed by the processor, cause the electronic device to perform the following steps:
  • a first preview image is displayed, where the first preview image is a preview image corresponding to at least one target shooting object on the first image.
  • the size of the at least one target photographing object on the first preview image is larger than the size of the at least one target photographing object on the first image.
  • when the instructions are executed by the processor, the electronic device is caused to perform the following step: while displaying the first preview image, also displaying a first window, where the first image and a first mark are displayed in the first window, and the first mark is used to mark the at least one target shooting object on the first image.
  • the first image includes a first target shooting object and a second target shooting object
  • the first preview image is a preview image corresponding to the first target shooting object
  • a second preview image is displayed, and the second preview image is a preview image corresponding to the second target shooting object.
  • the size of the second target photographing object on the second preview image is larger than the size of the second target photographing object on the first image.
  • when the instructions are executed by the processor, the electronic device is caused to perform the following steps: while displaying the second preview image, also displaying a first window, where the first image and a second mark are displayed in the first window, and the second mark is used to mark the second target shooting object on the first image; and when the second mark is displayed in the first window, canceling the display of the first mark in the first window, where the first mark is used to mark the first target shooting object on the first image.
  • the first image includes a first target shooting object and a second target shooting object
  • the first preview image is a preview image corresponding to the first target shooting object
  • a third preview image is displayed, and the third preview image is a preview image corresponding to the first target photographing object and the second target photographing object;
  • the size of the first target shooting object and the second target shooting object on the third preview image is larger than the size of the first target shooting object and the second target shooting object on the first image.
  • the first image includes a first target shooting object and a second target shooting object
  • the first preview image is a preview image corresponding to the first target shooting object
  • the first preview image is displayed in the first area on the display screen of the electronic device, and a fourth preview image is displayed in the second area, where the fourth preview image is the preview image corresponding to the second target shooting object;
  • the size of the second target shooting object on the fourth preview image is larger than the size of the second target shooting object on the first image.
  • when the instructions are executed by the processor, the electronic device is caused to perform the following step: in a case where no target shooting object is recognized on the first image, displaying a fifth preview image, where the fifth preview image is the preview image corresponding to the image block in the center area of the first image.
  • when the instructions are executed by the processor, the electronic device is caused to perform the following steps: detecting a photographing instruction; and in response to the photographing instruction, photographing to obtain the first image and a second image, where the second image is the captured image corresponding to the first preview image.
  • when the instructions are executed by the processor, the electronic device is caused to perform the following steps: when a window hiding operation is detected, hiding the first window; and when a window call-out operation is detected, displaying the first window.
  • a computer-readable storage medium is also provided, where the computer-readable storage medium is used to store a computer program, and when the computer program is run on a computer, the computer is caused to execute the method provided in the above-mentioned first aspect.
  • a computer program product comprising a computer program, which, when the computer program is run on a computer, causes the computer to execute the method provided in the above-mentioned first aspect.
  • a graphical user interface on an electronic device, the electronic device having a display screen, a memory, and a processor for executing one or more computer programs stored in the memory,
  • the graphical user interface includes a graphical user interface displayed when the electronic device executes the method described in the first aspect.
  • an embodiment of the present application further provides a chip, where the chip is coupled to a memory in an electronic device and is used to call a computer program stored in the memory and execute the technical solution of the first aspect of the embodiments of the present application.
  • “coupled” means that two components are directly or indirectly joined to each other.
  • FIG. 1 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a method for displaying a preview image in a zoom shooting scene according to an embodiment of the present application
  • FIGS. 3 to 4 are schematic diagrams of a photographing interface of an electronic device according to an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a subject identification algorithm provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an electronic device prompting a user to select a target photographing object provided by an embodiment of the present application
  • FIGS. 7A to 7B are schematic diagrams of matching a marker frame and a preview image according to an embodiment of the application.
  • FIGS. 8 to 12 are some schematic diagrams of a photographing interface of an electronic device according to an embodiment of the application.
  • FIGS. 13A to 13B are other schematic diagrams of a photographing interface of an electronic device according to an embodiment of the present application.
  • FIG. 14 is another schematic diagram of a photographing interface of an electronic device provided by an embodiment of the application.
  • FIG. 15 is another schematic flowchart of a method for displaying a preview image in a zoom shooting scene according to an embodiment of the present application;
  • FIG. 16 is a schematic diagram of a zoom shooting manual mode provided by an embodiment of the application;
  • FIG. 17 is a schematic diagram of a software structure of an electronic device according to an embodiment of the application;
  • FIG. 18 is a schematic diagram of information interaction between different modules in an electronic device according to an embodiment of the application;
  • FIG. 19 is another schematic flowchart of a method for displaying a preview image in a zoom shooting scene provided by an embodiment of the present application.
  • FIG. 20 is another schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the preview image involved in the embodiments of the present application refers to an image displayed in a shooting interface (or called a viewfinder interface) of an electronic device.
  • the following description uses a mobile phone as an example of the electronic device.
  • the mobile phone starts the camera application, turns on the camera, and displays a shooting interface.
  • the shooting interface displays a preview image, and the preview image is an image collected by the camera.
  • the field of view involved in the embodiments of the present application is an important performance parameter of the camera.
  • Field of view may also be referred to as "angle of view", "viewing angle", "field angle", or other terms; the terminology is not limited in this document.
  • the field of view is used to indicate the maximum angle range that the camera can capture. If the object is within this angular range, the object will be captured by the camera and presented in the preview image. If the object is outside this angular range, the object will not be captured by the camera and will not appear in the preview image.
  • the larger the field of view of the camera, the larger the shooting range and the shorter the focal length; the smaller the field of view, the smaller the shooting range and the longer the focal length.
  • cameras can be divided into ordinary cameras, wide-angle cameras, ultra-wide-angle cameras, and so on, according to their fields of view.
  • for example, the focal length of an ordinary camera may be 45 to 40 mm, and the viewing angle may be 40 to 60 degrees; the focal length of a wide-angle camera may be 38 to 24 mm, and the viewing angle may be 60 to 84 degrees; the focal length of an ultra-wide-angle camera may be 20 mm or shorter, and the viewing angle may be 94 to 118 degrees.
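The inverse relation between focal length and viewing angle follows from the pinhole model: angle = 2·arctan(d / 2f) for sensor width d and focal length f. A small numeric check; the 36 mm full-frame sensor width used below is an assumed example, not a value from this document:

```python
import math

def field_of_view_deg(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view: 2 * atan(sensor / (2 * focal))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# shorter focal length -> wider viewing angle, matching the camera
# classes above (ordinary < wide-angle < ultra-wide-angle)
```

On a 36 mm-wide sensor, a 50 mm lens sees roughly 40 degrees while a 24 mm lens sees roughly 74 degrees, consistent with the ranges quoted for ordinary and wide-angle cameras.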
  • the method for displaying a preview image in a zoom shooting scene can be applied to an electronic device.
  • the electronic device includes a camera, and the camera may be a wide-angle camera or an ultra-wide-angle camera; of course, it may also be an ordinary camera.
  • this application does not limit the number of cameras: there may be one or more, and if there are multiple, at least one of them may be a wide-angle or ultra-wide-angle camera.
  • the electronic device may be a portable electronic device, such as a mobile phone, a tablet computer, a portable computer, a wearable device with wireless communication function (such as a smart watch, smart glasses, smart bracelet, or smart helmet, etc.), or a vehicle-mounted device.
  • Exemplary embodiments of portable electronic devices include, but are not limited to, portable electronic devices running various operating systems.
  • FIG. 1 shows a schematic structural diagram of an electronic device.
  • the electronic device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, Antenna 1, Antenna 2, Mobile Communication Module 150, Wireless Communication Module 160, Audio Module 170, Speaker 170A, Receiver 170B, Microphone 170C, Headphone Interface 170D, Sensor Module 180, Key 190, Motor 191, Indicator 192, Camera 193, Display screen 194, and subscriber identification module (subscriber identification module, SIM) card interface 195 and so on.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, and ambient light. Sensor 180L, bone conduction sensor 180M, etc.
  • the processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), and so on.
  • different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller can be the nerve center and command center of the electronic device. The controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby increasing the efficiency of the system.
  • the USB interface 130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device, and can also be used to transmit data between the electronic device and peripheral devices.
  • the charging management module 140 is used to receive charging input from the charger.
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
  • the wireless communication function of the electronic device can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in an electronic device can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G etc. applied on the electronic device.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
  • the wireless communication module 160 can provide applications on electronic devices including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication technology (near field communication, NFC), infrared technology (infrared, IR) and other wireless communication solutions.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2.
  • the antenna 1 of the electronic device is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the display screen 194 is used to display the display interface of the application and the like.
  • Display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), and so on.
  • the electronic device may include 1 or N display screens 194 , where N is a positive integer greater than 1.
  • the electronic device can realize the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194 and the application processor.
  • the ISP is used to process the data fed back by the camera 193 .
  • when shooting, the shutter is opened, light is transmitted through the lens to the camera photosensitive element, which converts the optical signal into an electrical signal and transmits the electrical signal to the ISP for processing into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object is projected through the lens to generate an optical image onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
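As an illustration of the YUV-to-RGB conversion the DSP performs, the following is a minimal per-pixel sketch using the BT.601 full-range coefficients; the function name and floating-point arithmetic are illustrative assumptions, not the device's actual DSP implementation:

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range BT.601 YUV pixel (0-255 per channel) to RGB."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    # Clamp each channel back into the valid 8-bit range.
    clamp = lambda c: max(0, min(255, round(c)))
    return clamp(r), clamp(g), clamp(b)
```

A neutral gray (Y with U = V = 128) maps to equal R, G, and B values, as expected for a colorless pixel.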
  • the electronic device may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device by executing the instructions stored in the internal memory 121 .
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area may store the operating system, and the software code of at least one application (eg, iQIYI application, WeChat application, etc.).
  • the storage data area can store data (such as images, videos, etc.) generated during the use of the electronic device.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function, such as saving pictures, videos, and other files in the external memory card.
  • the electronic device can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone jack 170D, and the application processor.
  • the pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • the pressure sensor 180A may be provided on the display screen 194 .
  • the gyro sensor 180B can be used to determine the motion attitude of the electronic device.
  • the angular velocity of the electronic device about three axes (i.e., the x, y, and z axes) can be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device calculates the altitude from the air pressure value measured by the air pressure sensor 180C to assist in positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device can use the magnetic sensor 180D to detect the opening and closing of the flip holster.
  • when the electronic device is a flip phone, the electronic device can detect the opening and closing of the flip cover according to the magnetic sensor 180D. Further, according to the detected opening and closing state of the leather case or of the flip cover, features such as automatic unlocking upon flip opening can be set.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device in various directions (generally three axes).
  • when the electronic device is stationary, the magnitude and direction of gravity can be detected. The sensor can also be used to identify the posture of the electronic device, in applications such as switching between landscape and portrait modes, pedometers, etc.
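The posture identification mentioned above can be sketched as a comparison of the gravity components reported by the acceleration sensor. In this minimal Python sketch, the function name and the axis convention (x along the short edge, y along the long edge) are illustrative assumptions:

```python
def screen_orientation(ax, ay):
    """Classify portrait vs. landscape from the gravity components (m/s^2)
    measured along the device's x (short edge) and y (long edge) axes."""
    # Upright device: gravity acts mostly along the y axis.
    # Device on its side: gravity acts mostly along the x axis.
    return "portrait" if abs(ay) >= abs(ax) else "landscape"
```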
  • the distance sensor 180F is used for measuring distance.
  • Electronic devices can measure distances by infrared or laser. In some embodiments, when shooting a scene, the electronic device can use the distance sensor 180F to measure the distance to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • Electronic devices emit infrared light outward through light-emitting diodes.
  • Electronic devices use photodiodes to detect reflected infrared light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object in the vicinity of the electronic device.
  • when insufficient reflected light is detected, the electronic device can determine that there is no object in the vicinity of the electronic device.
  • the electronic device can use the proximity light sensor 180G to detect that the user holds the electronic device close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the electronic device can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device is in the pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints. Electronic devices can use the collected fingerprint characteristics to unlock fingerprints, access application locks, take photos with fingerprints, and answer incoming calls with fingerprints.
  • the temperature sensor 180J is used to detect the temperature.
  • the electronic device utilizes the temperature detected by the temperature sensor 180J to implement a temperature handling strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold value, the electronic device may reduce the performance of the processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection.
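The temperature handling strategy described here can be sketched as a simple threshold policy. The function name, action labels, and the threshold values below are illustrative assumptions, not the device's actual parameters:

```python
def thermal_action(temp_c, high=45.0, low=0.0):
    """Return the mitigation to apply for a reported sensor temperature (deg C)."""
    if temp_c > high:
        return "throttle_cpu"   # reduce performance of the nearby processor
    if temp_c < low:
        return "heat_battery"   # warm the battery to avoid abnormal shutdown
    return "none"
```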
  • the electronic device when the temperature is lower than another threshold, the electronic device heats the battery 142 to avoid abnormal shutdown of the electronic device caused by the low temperature.
  • the electronic device boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
  • the touch sensor 180K is also called a "touch panel".
  • the touch sensor 180K may be disposed on the display screen 194 , and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device, which is different from the location where the display screen 194 is located.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human voice.
  • the bone conduction sensor 180M can also contact the pulse of the human body and receive the blood pressure beating signal.
  • the keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys.
  • the electronic device may receive key input and generate key signal input related to user settings and function control of the electronic device.
  • Motor 191 can generate vibrating cues.
  • the motor 191 can be used for vibrating alerts for incoming calls, and can also be used for touch vibration feedback. For example, touch operations acting on different applications (such as taking pictures, playing audio, etc.) can correspond to different vibration feedback effects.
  • the indicator 192 can be an indicator light, which can be used to indicate the charging state, the change of the power, and can also be used to indicate a message, a missed call, a notification, and the like.
  • the SIM card interface 195 is used to connect a SIM card. The SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to achieve contact and separation with the electronic device.
  • the structure shown in FIG. 1 does not constitute a specific limitation on the electronic device.
  • Electronic devices in embodiments of the present invention may include more or fewer components than those in FIG. 1 .
  • the combination/connection relationship between the components in FIG. 1 can also be adjusted and modified.
  • a general zoom shooting process is that the user opens the camera application in the electronic device, starts the camera, the electronic device displays a shooting interface, and the shooting interface displays a preview image, that is, the image captured by the camera.
  • the user may want to zoom in on the object, and then performs an operation of increasing the zoom ratio (such as a sliding operation in which the index finger and the middle finger move apart).
  • the preview image displayed in the shooting interface is updated to the image block in the center area on the image captured by the camera, that is, the image block in the center area is enlarged and displayed.
  • however, the updated preview image may not contain the object the user wants to shoot (such as the flower).
  • in this case, the user needs to move the mobile phone to find the subject, which is time-consuming and laborious; once the mobile phone moves a little too fast, it is easy to miss the target object. The operation is inconvenient, and the experience is poor.
  • an embodiment of the present application provides a method for displaying a preview image in a zoom shooting scene.
  • a shooting interface is displayed, and a first preview image is displayed in the shooting interface; the first preview image is the first image collected by the camera.
  • the first image may be an image or an image stream collected by a camera.
  • the target photographic subject on the first image is determined in response to the operation.
  • the target photographing object may be at any position on the first image, but not necessarily in the central area on the first image.
  • a second preview image is displayed in the shooting interface of the camera application, where the second preview image is an image block in a first area on the first image, and the first area is an area on the first image where at least one target shooting object is located. That is, when the electronic device detects an operation for increasing the zoom magnification, in response to the operation, the first preview image in the shooting interface is updated to a second preview image, and the second preview image is the image block in the region where at least one target shooting object is located on the first image.
  • for example, if the at least one target photographing object is located in the upper left corner of the first image, the second preview image is the image block in the upper left corner area of the first image; if the at least one target photographing object is located in the lower right corner of the first image, the second preview image is the image block in the lower right corner area of the first image. This differs from the general zoom shooting process described above because, in the general zoom shooting process, when an operation for increasing the zoom magnification is detected, the displayed second preview image must be the image block in the center area of the first image; then, if there is no object that the user wants to photograph in the central area, the user needs to move the position of the electronic device to find the photographed object.
  • in the embodiment of the present application, the second preview image is an image block in the area where at least one target photographing object is located on the first image, and there is no need to move the position of the electronic device to search for the target shooting object, which is easy to operate.
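The difference between a plain center-crop zoom and the subject-aware zoom described above comes down to which point the crop window is centered on. A minimal Python sketch, in which the function name, tuple layout, and clamping strategy are illustrative assumptions:

```python
def zoom_crop(img_w, img_h, zoom, center=None):
    """Return the (left, top, width, height) crop for a given zoom ratio.
    A plain digital zoom crops around the image centre; the subject-aware
    variant passes the target object's centre instead."""
    cw, ch = img_w / zoom, img_h / zoom
    cx, cy = center if center else (img_w / 2, img_h / 2)
    # Clamp so the crop window stays inside the image bounds.
    left = min(max(cx - cw / 2, 0), img_w - cw)
    top = min(max(cy - ch / 2, 0), img_h - ch)
    return left, top, cw, ch
```

With a subject near the image corner, the clamped crop window hugs that corner instead of showing the (possibly empty) image center.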
  • FIG. 2 is a schematic flowchart of a method for displaying a preview image in a zoom shooting scene according to an embodiment of the present application.
  • the method can be applied to electronic devices having the hardware structure shown in FIG. 1 , such as mobile phones, tablet computers, etc.
  • the following description will mainly take the mobile phone as an example.
  • the process includes:
  • S201 start a camera application, turn on the camera, and display a shooting interface, where a first preview image is displayed, and the first preview image is the first image collected by the camera.
  • FIG. 3 shows a graphical user interface (graphical user interface, GUI) of the mobile phone, where the GUI is the desktop 301 of the mobile phone.
  • when the mobile phone detects that the user clicks the icon 302 of the camera application (application, APP) on the desktop 301, it can start the camera application and display another GUI as shown in (b) in FIG. 3, which may be called a shooting interface (or viewfinder interface) 303 .
  • a preview image can be displayed in real time in the shooting interface 303 .
  • a first preview image is displayed in the shooting interface 303
  • the first preview image is the first image collected by the camera.
  • the first image may refer to an image, or may refer to an image stream collected by a camera. If the camera is a wide-angle camera, then the first image is a wide field-of-view (FOV) image.
  • the photographing interface 303 may further include a control 305 for indicating a photographing mode, a control 306 for indicating a recording mode, and a photographing control 307 .
  • in the camera mode, when the mobile phone detects that the user clicks on the shooting control 307, the mobile phone performs a photo-taking operation; in the video recording mode, when the mobile phone detects that the user clicks on the shooting control 307, the mobile phone performs a video shooting operation.
  • the trigger condition may be detection of an operation for increasing the zoom magnification of the camera application, and/or detection of the increase of the zoom magnification reaching a preset zoom magnification (or a threshold value).
  • the preset zoom magnification may be any value between the minimum magnification and the maximum magnification.
  • the range of zoom magnification is 1X-10X, and the preset zoom magnification can be 5X, 6X, 7X, 9X, 10X and so on.
  • the specific value of the preset zoom magnification may be set by default after the electronic device leaves the factory, or may also be set by the user, which is not limited in this application.
  • the preset zoom magnification may also be a lossless zoom magnification.
  • zoom shooting includes physical zoom and electronic zoom
  • physical zoom refers to the use of physical changes in the camera (such as the position of the lens, etc.) to achieve the purpose of zooming
  • electronic zoom refers to the use of image processing algorithms (such as pixel interpolation algorithms) to process the image collected by the camera to achieve the purpose of zooming.
  • Lossless zoom magnification can be understood as the dividing point between physical zoom and electronic zoom. That is, in the process of increasing the zoom magnification, the physical zoom method can be used before the zoom magnification reaches the lossless zoom magnification. In this way, the image captured by the camera itself is physically zoomed, and the definition is relatively high.
  • after the zoom magnification exceeds the lossless zoom magnification, the electronic zoom can be used, because the physical zoom has reached its limit.
  • the electronic zoom needs to perform post-processing (such as pixel interpolation processing) on the image captured by the camera, the clarity of the image will be reduced. Therefore, if the threshold of the zoom magnification (ie, the preset zoom magnification) is set to a lossless zoom magnification, it can be ensured that the captured first image has undergone physical zooming without reducing the clarity.
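The split between physical and electronic zoom around the lossless threshold can be sketched as follows. The function name and the 5.0 default threshold are illustrative assumptions; the actual lossless zoom magnification depends on the device:

```python
def zoom_plan(requested, lossless=5.0):
    """Split a requested zoom ratio into a physical (optical) part and an
    electronic (interpolation) part around the lossless threshold."""
    physical = min(requested, lossless)   # optics cover zoom up to the threshold
    electronic = requested / physical     # remainder handled by interpolation
    return physical, electronic
```

Below the threshold the electronic factor stays at 1.0 (no interpolation, no loss of clarity); beyond it, only the excess is produced electronically.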
  • because the first image has a higher definition, the accuracy of target photographing object recognition in the subsequent processing flow can be improved.
  • if the lossless zoom ratio is a single value, the preset zoom ratio can be this value; if the lossless zoom ratio is a range, then the preset zoom ratio can be any value within the range, such as the maximum value.
  • the operation for increasing the zoom magnification of the camera application may be a sliding operation in which the user's index finger and thumb move apart, as shown in FIG.
  • the operation for increasing the zoom magnification of the camera application may also be a preset gesture operation.
  • the preset gesture operation can be a single/double tap, drawing a circle, a knuckle tap, a multi-finger tap, or a multi-finger slide at a certain position in the shooting interface (this position may be set in advance, or may be any position), etc., which is not limited in the embodiments of the present application.
  • the operation for increasing the zoom magnification of the camera application may also be an operation for a specific physical key or virtual key.
  • a virtual key may also be displayed in the shooting interface, and when an operation on the virtual key is detected, it is determined to increase the zoom ratio of the camera application.
  • the electronic device detects that a certain physical key is triggered, or multiple physical keys are triggered in combination, it determines to increase the zoom ratio of the camera application.
  • the operation for increasing the zoom magnification of the camera application may also be a voice instruction for instructing to increase the zoom magnification.
  • the voice instruction may be a voice instruction of "zoom in to shoot" or "zoom in on an image", or the like.
  • the target photographing object is one or more photographic subjects in the first image.
  • the target photographic object may be one or more objects on the first image.
  • the one or more objects may be one or more objects of the same type or one or more objects of different types, which is not limited.
  • the target photographing object may be a target object among all the objects.
  • the target object may be a preset object.
  • the electronic device determines that a preset object exists on the first image, it determines that the preset object is the target photographing object.
  • the preset object may be an object set by default; or, it may be preset by a user, which is not limited in this embodiment of the present application.
  • the target object may also be an object of interest to the user. In other words, the electronic device determines the target photographing object on the first image according to the object of interest to the user.
  • the object of interest to the user may be an object recorded by the electronic device that is often photographed by the user, or an object that is frequently retouched.
  • the electronic device determines that the number of images of cats in the images stored in the gallery is large, and then determines that the object of interest to the user is a cat.
  • the electronic device records the objects that are retouched frequently when the user uses the modification software to retouch the image, and determines that the objects that are retouched frequently are objects of interest to the user.
  • the electronic device determines that an object of interest to the user exists on the first image, the object is determined to be a target photographing object.
  • the target photographing object may also be one or more object types on the first image.
  • one object type may correspond to one or more objects belonging to this type.
  • the target shooting object when the target shooting object is an object type, the target shooting object includes all objects belonging to the object type on the first image. For example, an image includes Person 1 and Person 2. If the target photographing object is an object type of "person", it is recognized that the target photographing object on the first image includes Person 1 and Person 2.
  • the target photographing object may be a target object type among the multiple object types.
  • the target object type may be any one or multiple object types among multiple object types, and if the target object type is multiple object types, multiple object types are identified simultaneously.
  • the target object type may be the object type with the highest priority among the multiple object types. For example, the priority relationship is: people > animals > text > food > flowers > green plants > buildings.
  • the electronic device can first identify whether the "person" type is included on the first image. If the "person" type is included, it determines that all objects belonging to the "person" type on the first image (that is, everyone on the first image) are target shooting objects. If the "person" type is not included, it continues to identify whether the first image includes the "animal" type; if the "animal" type is included, all objects belonging to the "animal" type on the first image are determined to be the target shooting objects. Of course, if the "animal" type is not included either, it continues to identify the next level of object type, and so on.
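The priority-driven recognition loop described above can be sketched as a walk down the priority list, returning every object of the first type found. The type labels, data layout, and function name are illustrative assumptions:

```python
# Priority relationship from the description: people > animals > text > ...
PRIORITY = ["person", "animal", "text", "food", "flower", "plant", "building"]

def pick_targets(detections, priority=PRIORITY):
    """detections: mapping of object type -> list of detected objects.
    Walk the priority list and return every object of the first type found."""
    for obj_type in priority:
        if detections.get(obj_type):
            return obj_type, detections[obj_type]
    return None, []
```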
  • the priority relationship may be set by factory default, or may also be set by a user, which is not limited in this application.
  • the target object type may also be a preset object type.
  • the preset object type may be an object type set by factory default or an object type set by a user, which is not limited in this embodiment of the present application.
  • the target object type may also be an object type that the user is interested in.
  • the electronic device determines the target photographing object on the first image according to the type of object that the user is interested in.
  • One possible implementation is that, taking the object as a cat as an example, the electronic device determines that there are more images of cats in the images stored in the gallery, and then determines that the type of object that the user is interested in is the "animal" type.
  • the electronic device records the objects that are retouched more frequently when the user uses the modification software to retouch the image, and determines that the object type to which the frequently retouched object belongs is the object type that the user is interested in.
  • the target shooting object may also have other determining methods, and the embodiments of the present application will not give examples one by one.
  • the algorithm for identifying the target photographing object on the first image is called a subject recognition algorithm, and the subject recognition algorithm may be an image semantic analysis algorithm.
  • FIG. 5 is a schematic flowchart of an image semantic analysis algorithm. In short, it includes three steps: image binarization, semantic segmentation, and detection and positioning.
  • the binarization process refers to setting the grayscale value of the pixel on the image to 0 or 255. A total of 256 grayscale values from 0 to 255 are reduced to only two grayscale values of 0 and 255.
  • the binarization process can simplify the gray value of the image, if the image after the binarization process is used as the input image for the subsequent processing (ie, semantic segmentation and detection and positioning), the speed of the subsequent processing can be improved.
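The binarization step can be sketched as a fixed-threshold mapping over grayscale values; the threshold value 128 and the function name are illustrative assumptions (practical pipelines often choose the threshold adaptively):

```python
def binarize(gray_pixels, threshold=128):
    """Map each grayscale value (0-255) to 0 or 255 around a fixed threshold,
    reducing 256 gray levels to just two."""
    return [255 if p >= threshold else 0 for p in gray_pixels]
```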
  • Semantic segmentation can be understood as dividing an image into different segmentation areas, and adding corresponding semantic labels to each segmentation area, for example, to represent the features in the segmentation area.
  • the texture and gray level of each segmented region obtained by semantic segmentation are similar; and the boundaries of different segmented regions are clear.
  • in the next processing flow, that is, detection and localization, detection is performed in each segmented area on the image; specifically, the target shooting object in each segmented area is detected and located.
  • the present application does not limit the execution order of S202 and S203. If S202 is performed first and then S203 is performed, an example is that when the electronic device detects a trigger condition (i.e., S202), it starts the subject recognition algorithm, and then uses the algorithm to recognize the target shooting object on the first image. In this way, the subject recognition algorithm can be turned off when not in use, and activated when a trigger condition is detected, helping to save power consumption. In other examples, the subject recognition algorithm is always in the activated state, and when a trigger condition is detected (ie, S202 ), the subject recognition algorithm starts to be used for recognition. This method saves the time required to start the algorithm and is more efficient.
  • if S203 is performed first and then S204 is performed, an example is that when the electronic device opens the camera application (ie, S201 ), the subject recognition algorithm is started, and the algorithm is used to identify the target shooting object on the first image (ie, S203 ). When a trigger condition is detected (ie, S202 ), the process goes to S204.
  • the subject recognition algorithm is always in the activated state, and when the electronic device opens the camera application (ie, S201 ), it starts to use the subject recognition algorithm for recognition. This method saves the time required to start the algorithm and is more efficient.
  • a second preview image is displayed on the shooting interface, where the second preview image is an image block in a first area on the first image, and the first area is an area where at least one target shooting object is located.
  • the at least one target shooting object is referred to as the first target shooting object, that is, the number of the first target shooting object may be one or more.
  • the following description mainly takes the case where the number of the first target shooting object is one as an example.
  • a step may be further included: determining a first target photographing subject among the plurality of target photographing subjects.
  • there are various ways to determine the first target shooting object among the multiple target shooting objects, for example:
  • Manner 1: Determine the first target photographing object according to the positions of the multiple target photographing objects.
  • the first target photographing subject is the target photographing subject closest to the center of the image among all the target photographing subjects, or the target photographing subject closest to the edge of the image.
  • Manner 2: Determine the first target shooting object according to the size of the area occupied by the multiple target shooting objects.
  • the first target photographing object is the target photographing object that occupies the largest or smallest area among all the target photographing objects.
  • Manner 3: Determine the first target shooting object according to the object types of the multiple target shooting objects.
  • the object types corresponding to multiple target shooting objects include human type and animal type.
  • according to the priority relationship, it can be determined that the priority of the human type is higher than that of the animal type, so the shooting object of the human type among the multiple target shooting objects is the first target shooting object.
  • the priority relationship may include characters>animals>text>food>flowers>green plants>buildings.
  • the priority relationship can be adjusted by the user.
  • Manner 4: According to the user's designated operation, determine the first target photographing object among the plurality of target photographing objects.
  • the electronic device detects an operation for enlarging the zoom ratio (such as a sliding operation of moving the index finger and the thumb away from each other), and recognizes that the first image includes two target shooting objects (such as two birds), and then the shooting interface shown in Figure 6 is displayed.
  • the shooting interface displays prompt information: please select the first target shooting object; the numbers of all identified target shooting objects are also displayed, for example, bird 1 is numbered 1 and bird 2 is numbered 2.
  • when the electronic device detects the user's operation of clicking the number 2, it is determined that the bird 2 is the first target photographing object designated by the user.
  • the first target photographing object can be determined according to the user's selection, which is in line with the user's preference.
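Manners 1 and 2 above can be sketched as simple selection rules over the detected targets. In this Python sketch, the function name, the mode labels, and the (cx, cy, area) tuple layout are illustrative assumptions:

```python
import math

def first_target(objects, img_w, img_h, mode="closest_to_center"):
    """objects: list of (cx, cy, area) tuples for each detected target.
    Select one target per Manner 1 (position) or Manner 2 (occupied area)."""
    if mode == "closest_to_center":
        icx, icy = img_w / 2, img_h / 2
        # Manner 1: the target whose centre is nearest the image centre.
        return min(objects, key=lambda o: math.hypot(o[0] - icx, o[1] - icy))
    if mode == "largest_area":
        # Manner 2: the target occupying the largest area.
        return max(objects, key=lambda o: o[2])
    raise ValueError(mode)
```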
  • the step may further include: determining the first area according to the first target shooting object. Specifically, it may include determining the first area according to the first target shooting object and the resolution of the preview image.
  • the way of determining the first area includes: setting a marker frame to surround the bird 2, please refer to FIG. 7A .
  • the present application does not limit the shape of the marker frame, which may be a rectangle, a square, a circle, an ellipse, etc., or the marker frame may be the smallest circumscribed polygon of the edge contour of the target photographing object.
  • a first area is determined, the first area includes an area surrounded by the mark frame, and the resolution of the first area matches that of the preview image.
  • the matching between the resolution of the first region and the resolution of the preview image may include the following two cases:
  • If the length of the preview image is greater than the width, that is, the aspect ratio is greater than 1, then the aspect ratio of the first area is also greater than 1 and may, for example, be equal to that of the preview image.
  • For example, if the aspect ratio of the preview image is 16:4, the first area is sized with an aspect ratio of 16:4. In this case, there is no need to adjust the aspect ratio of the image block in the first area to fit the size requirements of the preview image, which is more convenient.
  • If the length of the preview image is less than the width, that is, the aspect ratio is less than 1, then the aspect ratio of the first area is also less than 1 and may, for example, be equal to that of the preview image.
  • For example, if the aspect ratio of the preview image is 4:16, the first area is sized with an aspect ratio of 4:16; that is, there is no need to adjust the aspect ratio of the image block in the first area to fit the size requirements of the preview image.
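Both cases above amount to expanding the marker frame to the smallest enclosing region that shares the preview image's aspect ratio. A rough Python sketch (pixel-coordinate boxes assumed; clamping the region to the image boundary is omitted for brevity):

```python
def region_for_marker(marker, preview_w, preview_h):
    """Expand a marker frame (x, y, w, h) to the smallest enclosing
    region with the preview image's aspect ratio, centered on the marker."""
    x, y, w, h = marker
    target_ar = preview_w / preview_h  # >1 for landscape, <1 for portrait
    if w / h < target_ar:
        # Marker is narrower than the preview: widen to match the ratio.
        new_w, new_h = h * target_ar, h
    else:
        # Marker is flatter than the preview: heighten to match the ratio.
        new_w, new_h = w, w / target_ar
    cx, cy = x + w / 2, y + h / 2
    return (cx - new_w / 2, cy - new_h / 2, new_w, new_h)

# A 100x100 marker inside a 16:4 preview becomes a 400x100 region.
print(region_for_marker((200, 300, 100, 100), 1600, 400))
```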
  • the electronic device displays the image block in the first area as a second preview image on the shooting interface.
  • a second preview image is displayed in the shooting interface, and the second preview image is an image block in the first area where the bird 2 is located in the first image.
  • When the electronic device detects an operation for increasing the zoom magnification (such as a sliding operation in which the index finger and the thumb move apart),
  • the electronic device displays the first target shooting object on the first preview image (such as the bird 2) in an enlarged manner, as shown in Figure 8.
  • Because the first target shooting object is the target shooting object identified from the first preview image, it is the object the user probably wants to zoom in on. Compared with the general zoom shooting method (enlarging the image block in the central area of the image), the probability that the user needs to move the electronic device to find the object to be photographed is low, and the operation is convenient.
  • the shooting interface in Figure 8 is to zoom in on the bird 2 to shoot.
  • the user may also switch the first target shooting object (eg, bird 2 ) in the shooting interface to the second target shooting object (eg, bird 1 ).
  • When the electronic device displays the second preview image,
  • if a switching operation of the target shooting object is detected, a third preview image is displayed in the shooting interface;
  • the third preview image is the image block in the second area on the first image where the second target shooting object is located,
  • and the second target shooting object is another target shooting object on the first image that is different from the first target shooting object.
  • For example, when the bird 2 (the bird on the left) is enlarged and displayed in the shooting interface,
  • if the switching operation is detected, Fig. 9 is displayed, and the third preview image is displayed in the shooting interface;
  • that is, the bird 1 (the bird on the right) in the shooting interface is magnified.
  • In this way, in a zoom shooting scene, the target shooting object in the shooting interface can be switched while the electronic device remains stationary, giving a better user experience.
  • In the general zoom shooting mode (enlarged display of the image block in the central area of the image),
  • the user needs to manually move the electronic device to find the desired object, which is cumbersome.
  • displaying the third preview image may further include the step of: determining a second target shooting object from the remaining target shooting objects other than the first target shooting object among the multiple target shooting objects, and determining the second area according to the second target shooting object.
  • the method for determining the second target photographing object is the same as the foregoing method for determining the first target photographing object, and details are not repeated. For example, if the first target photographing object is bird 2 , and the remaining target photographing object is only bird 1 , then it is determined that the second target photographing object is bird 1 .
  • the process of determining the second area according to the second target photographing object is the same as the foregoing principle of determining the first area according to the first target photographing object, and will not be repeated.
  • the switching operation of the target photographing object may be an operation on a specific button in the photographing interface.
  • a second preview image is displayed in the shooting interface, and a button 801 is also displayed in the shooting interface.
  • When an operation on the button 801 is detected, the first target shooting object is switched to the second target shooting object.
  • When an operation on the button 801 is detected again, the second target shooting object is switched to the third target shooting object, where the third target shooting object is a target shooting object other than the first and second target shooting objects. That is, each of the multiple target shooting objects on the first image can be traversed by pressing the button 801.
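Traversing every detected target by repeatedly pressing a switch button can be modeled as a simple cyclic index (a sketch; the target names are taken from the bird example above):

```python
class TargetSwitcher:
    """Cycle through detected target shooting objects, wrapping around."""
    def __init__(self, targets):
        self.targets = targets
        self.index = 0  # start with the first target shooting object

    def current(self):
        return self.targets[self.index]

    def switch(self):
        # One press of the switch button advances to the next target.
        self.index = (self.index + 1) % len(self.targets)
        return self.current()

s = TargetSwitcher(["bird 2", "bird 1", "tree"])
print(s.switch())  # bird 1
print(s.switch())  # tree
print(s.switch())  # wraps back to bird 2
```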
  • the switching operation of the target photographing object may also be an operation on a physical button.
  • Taking the volume buttons as an example, if it is detected that the volume-up button is clicked twice in a row, the first target shooting object is switched to the second target shooting object.
  • the switching operation of the target photographic subject may also be the detection of a voice instruction for instructing the switching of the target photographic subject. For example, a voice command including "next".
  • A first window may also be displayed in the shooting interface; the first image is displayed in the first window, and the target shooting object on the first image that is displayed in the current preview image is highlighted.
  • the highlighted display may be understood as the target photographing object being marked, and the marking method is not limited, such as being circled.
  • A first window 1001 is displayed on the second preview image.
  • The first window 1001 includes the first image, and the bird 2 (the bird on the left) on the first image is circled to remind the user that the current preview image (i.e., the second preview image) is the circled image block.
  • the third preview image as shown in (b) in FIG. 10 is displayed, and the third preview image corresponds to the bird 1 (the bird on the right) on the first image.
  • the bird 1 is circled on the first image in the first window 1001 to remind the user that the current preview image (ie, the third preview image) is the circled image block.
  • the circled state of the bird 2 can be canceled.
  • The first window 1001 may appear automatically. For example, comparing FIG. 4 with (a) in FIG. 10, when the electronic device displays the first preview image in FIG. 4, if an operation for enlarging the zoom magnification is detected, the second preview image shown in (a) in FIG. 10 is displayed, and the first window automatically appears on the second preview image. Or, comparing FIG. 4, FIG. 8 and (a) in FIG. 10, when the electronic device displays the first preview image in FIG. 4, if an operation for enlarging the zoom ratio is detected, it displays the second preview image in FIG. 8, and the first window appears only when an operation for calling up the first window is detected.
  • the operation for calling up the first window may be, for example, an operation for a specific key in the shooting interface.
  • a specific key is displayed in the shooting interface, and when an operation for the specific key is detected, the first window is displayed.
  • the first window can also be hidden and displayed. For example, when an operation for hiding the first window is detected, the first window is hidden.
  • the operation for hiding the first window may be an operation on the specific button; that is, when an operation on the specific button is detected, the first window is displayed, and when an operation on the specific button is detected again, the first window is hidden.
  • The operation for hiding the first window may also be an operation of pressing and holding the first window and dragging it off the screen, or a click on the delete button that pops up when the first window is long-pressed. Or, as shown in (a) in Figure 11, when it is detected in the shooting interface that the user slides from left to right at the upper left corner, the first window is called up and the interface shown in (b) in Figure 11 is displayed; when it is detected that the user performs a sliding operation from right to left at the upper left corner, the first window is hidden and the interface shown in (c) of Figure 11 is displayed.
  • In the first window, the target shooting object displayed in the current preview image is marked, and the target shooting objects not displayed in the current preview image are not marked.
  • all target photographing objects on the first image in the first window 1001 can be marked.
  • both bird 1 and bird 2 are marked on the first image in the first window 1001 .
  • each corresponds to a marker frame (the bird 2 on the left corresponds to the marker frame 1002, and the bird 1 on the right corresponds to the marker frame 1003). Since the current preview image is the second preview image, which corresponds to the bird 2 on the left, the marker frame 1002 is highlighted relative to the marker frame 1003 . In this way, the user can determine that the current preview image corresponds to the bird 2.
  • When the switching operation is detected, the first target shooting object (i.e., the bird 2 on the left) is switched to the second target shooting object (i.e., the bird 1 on the right).
  • the marked frame 1003 of the bird 1 on the right side of the first image in the first window 1001 is highlighted relative to the marked frame 1002 of the bird 2 on the left side. In this way, the user can determine that the current preview image corresponds to the bird 1 .
  • the switching operation of the target photographing objects may also be an operation of clicking the marked frame in the first window 1001 .
  • the current preview corresponds to the bird 2 in the marker frame 1002.
  • the third preview image is displayed, as shown in (b) of FIG. 12; the third preview image corresponds to the bird 1 in the marked frame 1003. That is, the switching of the target shooting object is completed by clicking a marked frame in the first window 1001.
  • In the above examples, only one target shooting object (i.e., one bird) is displayed in the shooting interface at a time.
  • the number of target shooting objects on the shooting interface may also be increased.
  • the second preview image corresponds to bird 2 .
  • two target photographing objects namely bird 1 and bird 2 are displayed in the photographing interface.
  • One way is to determine area 1 according to bird 1 (area 1 includes bird 1), determine area 2 according to bird 2 (area 2 includes bird 2), and then divide the shooting interface into two areas (i.e., split-screen display): the first area displays the image block in area 1 and the second area displays the image block in area 2, as in FIG. 13A.
  • both of the two marked boxes in the first window 1001 in the photographing interface are highlighted.
  • the first area and the second area may be two areas divided up and down, or two areas divided left and right, which are not limited in this application. For example, it can be divided left and right when shooting in landscape, and can be divided up and down when shooting in portrait.
  • the third area is determined according to the two target shooting objects, and the image blocks in the third area are displayed in the shooting interface.
  • the third area is the smallest area that can enclose the two target photographing objects (bird 1 and bird 2), and the resolution of the third area matches that of the preview image.
  • the display effect after increasing the number of target shooting objects is shown in Figure 13B.
  • a first window 1001 is also displayed in the shooting interface
  • a marker frame 1004 is displayed in the first window 1001
  • the marker frame 1004 surrounds the bird 1 and the bird 2 .
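Determining the third area amounts to taking the union bounding box of the selected marker frames and then, as in the single-target case, expanding it to match the preview image's aspect ratio. A sketch of the union step (assuming (x, y, w, h) boxes; the coordinates below are made up for illustration):

```python
def union_box(boxes):
    """Smallest axis-aligned box enclosing all given (x, y, w, h) boxes."""
    x0 = min(b[0] for b in boxes)
    y0 = min(b[1] for b in boxes)
    x1 = max(b[0] + b[2] for b in boxes)
    y1 = max(b[1] + b[3] for b in boxes)
    return (x0, y0, x1 - x0, y1 - y0)

bird2 = (100, 200, 80, 60)   # e.g. marker frame 1002
bird1 = (400, 220, 90, 70)   # e.g. marker frame 1003
print(union_box([bird2, bird1]))  # (100, 200, 390, 90)
```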
  • The operation for increasing the number of target shooting objects in the shooting interface may include the following. Taking (a) in FIG. 12 as an example, with the marked frame 1002 already selected, when an operation of selecting the marked frame 1003 in the first window 1001 is detected (such as a long-press on the marked frame 1003), it is confirmed that both the marked frame 1002 and the marked frame 1003 are selected; then the bird 1 in the marked frame 1003 is added to the shooting interface, that is, FIG. 13A or FIG. 13B is displayed.
  • the operation for increasing the number of target photographing objects in the photographing interface may also be an operation of reducing the zoom magnification, such as a sliding operation in which the thumb and the index finger are relatively close.
  • the operation for increasing the number of target shooting objects in the shooting interface may also be an operation for a preset button in the shooting interface.
  • For example, a preset button is displayed in the shooting interface, and when an operation on the button is detected, it is determined to increase the number of target shooting objects in the shooting interface.
  • the operation for increasing the number of target photographing objects in the photographing interface may also be a voice instruction for instructing to increase the number of target photographing objects.
  • Continuing to take (a) in FIG. 12 as an example, when a photographing operation (such as clicking the photographing control 307) is detected, one or two images are obtained. If one image is obtained, it may be the image block in the marked frame 1002, that is, an image obtained by zooming in on the bird 2 on the left. If two images are obtained, one may be the image block within the marked frame 1002, and the other may be the first image, i.e., the complete image. Or, continuing to take (a) in FIG.
  • One of the images is the first image, that is, the complete image, and the N images correspond to the image blocks in the N marked frames.
  • the electronic device may perform processing according to the general zoom shooting process, that is, the image block in the central area on the first image is enlarged and displayed.
  • the photographing interface shown in (a) of FIG. 14 can be displayed, and the preview image displayed in the photographing interface is the image block in the central area on the first image.
  • A first window may also be displayed in the photographing interface; see (b) in FIG. 14.
  • The first window 1401 displays the first image, and a marker frame 1402 is displayed on the first image; the marker frame 1402 is used to indicate the location of the central area.
  • the position of the mark frame 1402 on the first window 1401 changes.
  • The user can determine the position of the current preview image on the first image through the position of the marker frame 1402 in the first window 1401, which gives guidance while searching for the object the user wants to photograph. For example, when the object the user wants to photograph is not in the central area, the user determines through the first window 1401 that the object is located to the left of the marker frame 1402 on the first image, so the user moves the electronic device to the left.
  • In this way, the desired object can be found quickly, instead of the user blindly searching without knowing where the desired object is.
  • the electronic device may use the first zoom shooting mode or the second zoom shooting mode by default. Alternatively, it may also be determined to use the first zoom shooting mode or the second zoom shooting mode according to a user's designated operation. For example, a switching control is displayed in the shooting interface of the electronic device, and the switching control is used to switch between the first zoom shooting mode and the second zoom shooting mode. For another example, when an operation for enlarging the zoom magnification is detected, prompt information is displayed, where the prompt information is used to prompt the user to select the first zoom shooting mode or the second zoom shooting mode.
  • the electronic device automatically recognizes the target photographing object on the first image, and zooms in at least one of the target photographing objects.
  • the target shooting object automatically recognized by the electronic device may not be the one the user wants to photograph at the moment. In this case, it is difficult to photograph the desired object using the method of the first embodiment.
  • the zoom shooting mode of the electronic device may include a manual mode and an automatic mode. When the automatic mode is selected, the electronic device uses the method of Embodiment 1, that is, automatically recognizes the target photographing object on the first image, and then zooms in at least one target photographing object for photographing.
  • the electronic device may not automatically identify the target photographing subject on the first image, but prompt the user to manually select the target photographing subject.
  • the target photographing subject is automatically recognized by the electronic device, and in the manual mode, the target photographing subject is manually selected by the user.
  • FIG. 15 is a schematic flowchart of a method for displaying a preview image in a zoom shooting scene provided by the second embodiment.
  • S202-1 is added between S202 and S203, that is, determining whether the manual mode or the automatic mode is used.
  • a shooting interface as shown in (b) in FIG. 16 is displayed.
  • When an operation of selecting the automatic mode is detected, S203 and S204 are executed.
  • For S203 and S204, please refer to the first embodiment.
  • When an operation of selecting the manual mode by the user is detected, S205 is performed, prompting the user to select a target shooting object. For example, continuing to take (b) in FIG. 16 as an example, after the user selects the manual mode, a shooting interface as shown in (c) in FIG. 16 is displayed, and a prompt message is displayed in the shooting interface: please select the target shooting object.
  • The electronic device then determines the target shooting object according to the user's operation. For example, when a circle-drawing operation is detected, the shooting object in the area enclosed by the circle-drawing operation is determined to be the target shooting object, and the shooting interface shown in (d) in FIG. 16 is displayed; the preview image in the shooting interface is the image block in the area circled by the user. That is, during zoom shooting, the target shooting object manually selected by the user is zoomed in for shooting.
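The manual-selection step can be sketched as a hit test between the user's drawn circle and the detected marker frames. This is an illustrative guess at one possible implementation; the description above only states that the object enclosed by the circle-drawing operation becomes the target:

```python
def objects_in_circle(objects, center, radius):
    """Return names of detected objects whose box center falls inside the
    circle drawn by the user (center=(cx, cy), radius in pixels)."""
    cx, cy = center
    hits = []
    for name, (x, y, w, h) in objects.items():
        bx, by = x + w / 2, y + h / 2  # center of the object's marker frame
        if (bx - cx) ** 2 + (by - cy) ** 2 <= radius ** 2:
            hits.append(name)
    return hits

objects = {"bird 1": (400, 220, 90, 70), "bird 2": (100, 200, 80, 60)}
print(objects_in_circle(objects, (140, 230), 80))  # ['bird 2']
```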
  • FIG. 17 shows a software structural block diagram of an electronic device provided by an embodiment of the present application.
  • the software structure of the electronic device can be a layered architecture, for example, the software can be divided into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system may include three layers, from top to bottom, they are an application layer, a hardware abstraction layer (HAL), and a hardware layer.
  • this article takes the three-layer Android system as an example.
  • the Android system may also include more or less layers.
  • a kernel layer may be included between the hardware layer and the HAL layer, or,
  • An application framework layer (framework, FWK) and the like may also be included between the application layer and the HAL layer, which is not limited in this application.
  • The application layer can include a series of application packages, such as the camera application, settings, the skin module, the user interface (UI), and third-party applications. Third-party applications can include Gallery, Calendar, Call, Map, Navigation, WLAN, Bluetooth, Music, Video, SMS, etc. Only the camera application is shown in FIG. 17.
  • the hardware abstraction layer is used to establish the interface layer with the hardware circuit. Its purpose is to abstract the hardware and provide a virtual hardware platform for the operating system, so that it is hardware independent and can be transplanted on various platforms.
  • the hardware abstraction layer includes the camera HAL, which is used to realize the information interaction between the camera application in the application layer and the hardware in the hardware layer.
  • the HAL layer also includes a subject recognition module, a preview cropping module, and a pre-display (preview) module.
  • The subject recognition module is used to recognize the target shooting object on the image collected by the camera (for example, the output image of the ISP). It can also set a marker frame to mark the recognized target shooting object, and determine the first area according to the marker frame; the process of determining the first area is described later.
  • the preview cropping module is used for cropping the image blocks in the first area from the image collected by the camera (for example, the output image of the ISP) to serve as the preview image.
  • the pre-display module is used to send the image blocks cropped by the preview cropping module to the display screen in the hardware layer for display.
  • the hardware layer may include various types of sensors, such as image sensors, image signal processors (image signal processors, ISPs), and the like.
  • The image sensor refers to the photosensitive element in the camera. Light is transmitted through the lens to the photosensitive element.
  • The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP.
  • ISP converts electrical signals into images visible to the naked eye (ie digital image signals). ISP can also perform algorithm optimization on image noise, brightness, and skin tone.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene. For example, ISP outputs digital image signals to DSP for processing. DSP converts digital image signals into standard RGB, YUV and other formats of image signals. In some embodiments, the ISP may be located in the camera.
  • the following describes a method for displaying a preview image in a zoom shooting scene provided by an embodiment of the present application with reference to the software architecture shown in FIG. 17 .
  • FIG. 18 is a schematic flowchart of a method for displaying a preview image in a zoom scene according to an embodiment of the present application. As shown in Figure 18, the process includes:
  • the camera application detects a startup instruction, where the startup instruction is used to instruct to start the camera application.
  • the camera application sends a startup command to the camera HAL.
  • the camera HAL sends a start command to the ISP.
  • the ISP obtains the first image.
  • the ISP sends the first image to the display screen.
  • a first preview image is displayed on the display screen, and the first preview image is the first image.
  • the camera application detects a trigger condition.
  • the camera application sends a capability query request to the camera HAL, where the capability query request is used to request to query whether the electronic device has the function of the first auxiliary preview mode.
  • camera HAL sends query feedback to the camera application.
  • S8-S9 are optional steps that may or may not be performed.
  • the camera application sends a start instruction to the subject recognition module to start the subject recognition algorithm.
  • the subject recognition module recognizes the target photographing object on the first image.
  • the subject identification module determines the first target photographing object.
  • the subject identification module sends the location of the first target photographing object to the preview cropping module.
  • Alternatively, the subject recognition module determines the first area according to the first target shooting object, and then sends the position of the first area to the preview cropping module.
  • the preview cropping module performs image cropping.
  • the preview cropping module sends the cropped image block to the display screen.
  • a second preview image is displayed on the display screen, and the second preview image is an image block in the first area on the first image.
  • the camera application detects a target shooting object switching operation.
  • the camera application sends a target photographing object switching instruction to the subject identification module.
  • the subject recognition module switches the first target shooting object to the second target shooting object, and sends the location of the second target shooting object to the preview cropping module.
  • Alternatively, the subject recognition module determines the second area according to the second target shooting object, and then sends the position of the second area to the preview cropping module.
  • the preview cropping module performs image cropping.
  • the preview cropping module sends the cropped image block to the display screen.
  • a third preview image is displayed on the display screen, and the third preview image is an image block in the second area on the first image.
  • the camera application detects a photographing instruction.
  • the camera application sends a photographing instruction and the position of the target photographing object corresponding to the current preview image to the ISP.
  • If the current preview image is the second preview image, the corresponding target shooting object is the first target shooting object, that is, the location of the first area may be sent.
  • If the current preview image is the third preview image,
  • the corresponding target shooting object is the second target shooting object, that is, the location of the second area is sent.
  • the ISP captures the image.
  • the ISP sends the captured image and the position to the preview cropping module.
  • the preview cropping module crops the captured image to obtain a cropped image.
  • If the position of the first area is received, the image block in the first area is cropped out of the captured image; if the position of the second area is received, the image block in the second area is cropped out of the captured image.
  • the preview cropping module sends the cropped image to the camera application.
  • the camera application stores the cropped image.
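The crop-and-return step performed by the preview cropping module in the flow above can be sketched as follows (a toy illustration, using nested lists in place of a real image buffer):

```python
def crop_image(image, region):
    """Crop an image (list of pixel rows) to the region (x, y, w, h).

    Models the preview-cropping step: the ISP hands over the full captured
    frame plus the area position, and the crop module returns only the
    image block inside that area.
    """
    x, y, w, h = region
    return [row[x:x + w] for row in image[y:y + h]]

# A toy 4x4 "frame" with distinct pixel values (row*10 + column).
frame = [[r * 10 + c for c in range(4)] for r in range(4)]
block = crop_image(frame, (1, 2, 2, 2))  # x=1, y=2, width=2, height=2
print(block)  # [[21, 22], [31, 32]]
```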
  • FIG. 19 is a schematic flowchart of a method for displaying a preview image in a zoom shooting scene provided by an embodiment of the present application.
  • This method is suitable for electronic devices, such as mobile phones, tablet computers, etc.
  • the process includes:
  • S1901: Start a camera application in an electronic device, where a first image is collected by a camera of the electronic device.
  • a possible implementation manner is that when the electronic device detects a trigger condition (for example, an operation for enlarging the zoom ratio), it identifies the target photographing object on the first image. This way helps save power consumption.
  • Another possible implementation is that after the camera of the electronic device captures the first image, it can identify the target shooting object on the first image, and when a trigger condition (for example, an operation for enlarging the zoom ratio) is detected, execute S1903 .
  • S1903 Display a first preview image, where the first preview image is a preview image corresponding to at least one target shooting object on the first image.
  • the at least one target photographing object may be the photographing object occupying the largest or the smallest area on the first image; or, the at least one target photographing object may be the photographing object near the center area or near the edge area in the first image; Alternatively, the at least one target photographing object may be a photographing object that the user is interested in in the first image (for the determination principle of the user's interest object, please refer to the foregoing); or, the at least one target photographing object may be a target specified by the user subject.
  • Several methods for determining the at least one target photographing object are exemplified above. The embodiments of the present application are not limited to the above several methods, and there may be other methods for determining the at least one target photographing object.
  • The size of the at least one target shooting object on the first preview image is larger than its size on the first image. It can be understood that the at least one target shooting object on the first image is displayed in an enlarged manner. Taking FIG. 8 as an example, the first image is an image including a tree and two birds; during zoom shooting, the electronic device displays a preview image corresponding to the bird 2 on the first image, that is, the enlarged bird 2 is displayed.
  • the bird 2 is marked in the first window.
  • the user can determine which target shooting object on the first image the current preview image (that is, the first preview image) corresponds to through the first mark in the first window, and the user experience is better.
  • S1904 is an optional step, which can be performed or not, so it is represented by a dotted line in the figure.
  • a second mark is also displayed in the first window, and the second mark is used to mark other target photographing objects other than the at least one target photographing object on the first image, and the first mark and the second Marks are different.
  • the first window 1001 displays a marker 1002 and a marker 1003, and the marker 1002 is used to mark the bird 2 on the first image.
  • the marker 1003 is used to mark the bird 1 on the first image.
  • the marker 1002 is different from the marker 1003 (for example, the marker 1002 is bold and the marker 1003 is not), so that the user can distinguish which target shooting object on the first image the current preview image (that is, the first preview image) corresponds to.
• the first image includes a first target shooting object and a second target shooting object.
• the first preview image is a preview image corresponding to the first target shooting object; when a target shooting object switching operation is detected, a second preview image is displayed, and the second preview image is a preview image corresponding to the second target photographing object.
  • the size of the second target shooting object on the second preview image is larger than the size of the second target shooting object on the first image.
• the electronic device displays the enlarged bird 2, and when the target photographing object switching operation is detected, the electronic device displays the enlarged bird 1, as shown in FIG. 9. That is to say, the first target photographing object is originally photographed enlarged, and through the switching operation the second target photographing object is photographed enlarged instead; the user does not need to move the electronic device to find the target photographing object, and the operation is convenient.
• a second mark is displayed in the first window, and the second mark is used to mark the second target shooting object on the first image.
• when the first window displays the second mark, the first mark, which is used to mark the first target shooting object on the first image, is canceled from display or displayed distinctly from the second mark.
• for example, when the enlarged bird 2 is displayed, the bird 2 is marked in the first window (for example, by a mark box), and when the enlarged bird 1 is displayed, the bird 1 in the first window is marked (for example, by another mark box) and the bird 2 is unmarked, as shown in (b) in FIG. 10.
• a third preview image is displayed, and the third preview image is the preview image corresponding to the first target photographing object and the second target photographing object; the sizes of the first target photographing object and the second target photographing object on the third preview image are larger than their sizes on the first image.
  • the electronic device displays the enlarged bird 1 and bird 2 .
• the first preview image is displayed in a first area on the display screen of the electronic device, and a fourth preview image is displayed in a second area.
• the fourth preview image is the preview image corresponding to the second target shooting object; the size of the second target shooting object on the fourth preview image is larger than its size on the first image.
• upon the operation for adding a target shooting object to the preview image, the electronic device displays a split screen: the first area displays the enlarged bird 2, and the second area displays the enlarged bird 1.
• the electronic device displays a fifth preview image, and the fifth preview image is the preview image corresponding to the image block in the central area of the first image.
  • the electronic device enlarges and displays the image block in the central area of the first image.
• a photographing instruction is detected; in response to the photographing instruction, the first image and a second image are obtained by photographing, and the second image is a photographed image corresponding to the first preview image. That is to say, during zoom shooting, if the shooting button is tapped, a complete image (the first image) and an enlarged image of at least one target object (the second image) are captured, which makes comparison convenient for the user and provides a better experience.
  • FIG. 20 shows an electronic device 2000 provided by the present application.
  • the electronic device 2000 may be the aforementioned mobile phone.
• the electronic device 2000 may include: one or more processors 2001; one or more memories 2002; a communication interface 2003; and one or more computer programs 2004, where the foregoing components may be connected through a bus 2005.
• the one or more computer programs 2004 are stored in the aforementioned memory 2002 and configured to be executed by the one or more processors 2001, and the one or more computer programs 2004 comprise instructions that can be used to perform the relevant steps of the mobile phone in the corresponding embodiments above.
  • the communication interface 2003 is used to implement communication with other devices, for example, the communication interface may be a transceiver.
  • the methods provided by the embodiments of the present application have been introduced from the perspective of an electronic device (such as a mobile phone) as an execution subject.
  • the electronic device may include a hardware structure and/or software modules, and implement the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether one of the above functions is performed in the form of a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
  • references in this specification to "one embodiment” or “some embodiments” and the like mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
• appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically emphasized otherwise.
• the terms "include", "comprise", "have" and their variants mean "including but not limited to" unless specifically emphasized otherwise.
• the above-mentioned embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
• when software is used, they may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present invention are generated.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
• the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired manner (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or a wireless manner (e.g., infrared, radio, microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that includes an integration of one or more available media.
  • the usable media may be magnetic media (eg, floppy disks, hard disks, magnetic tapes), optical media (eg, DVD), or semiconductor media (eg, Solid State Disk (SSD)), and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

一种变焦拍摄场景下的预览图像显示方法与电子设备。该方法包括:启动电子设备中的相机应用,所述电子设备上的摄像头采集第一图像;在变焦拍摄模式下,识别所述第一图像上的目标拍摄对象;显示第一预览图像,所述第一预览图像是所述第一图像上至少一个目标拍摄对象所对应的预览图像。通过这种方式可以提升变焦拍摄时的拍摄体验。

Description

Preview image display method in a zoom shooting scenario, and electronic device
Cross-reference to related application
This application claims priority to Chinese patent application No. 202110460565.8, filed with the China National Intellectual Property Administration on April 27, 2021 and entitled "Preview image display method in a zoom shooting scenario and electronic device", which is incorporated herein by reference in its entirety.
Technical field
This application relates to the field of electronic technologies, and in particular to a preview image display method in a zoom shooting scenario and an electronic device.
Background
Zoom shooting enables an electronic device to photograph distant scenery. Taking a mobile phone as an example, as the zoom ratio increases, the object to be photographed is "magnified" in the shooting interface. Such zoom shooting is very convenient for users who like to shoot distant views.
However, in the current zoom shooting approach, when the zoom ratio is increased, the object in the central area of the image is magnified; yet the object in the central area may not be the object the user wants to shoot. The user then has to move the phone to search for the desired object. The whole search is time-consuming and laborious, and once the phone moves a little too fast it is easy to miss the target object; the operation is inconvenient and the experience is poor.
Summary
An objective of this application is to provide a preview image display method in a zoom shooting scenario and an electronic device, so as to improve the shooting experience when photographing distant objects.
According to a first aspect, a preview image display method in a zoom shooting scenario is provided. The method may be applied to an electronic device such as a mobile phone or a tablet computer. The method includes: starting a camera application in the electronic device, where a camera on the electronic device captures a first image; in a zoom shooting mode, recognizing target shooting objects on the first image; and displaying a first preview image, where the first preview image is the preview image corresponding to at least one target shooting object on the first image.
It should be noted that, in a typical zoom shooting mode, the displayed preview image is the preview image corresponding to the image block in the central area of the first image; if the object the user wants to shoot is not in the central area, the user has to move the electronic device to find it. In the preview image display method provided in this application, in the zoom shooting mode the first preview image is the preview image corresponding to at least one target shooting object on the first image. For example, if the at least one target shooting object is a person, the first preview image is the preview image corresponding to that person; if it is an animal, the first preview image is the preview image corresponding to that animal. Unlike the typical zoom shooting mode above, there is no need to move the electronic device to find the target shooting object, so the operation is convenient.
In a possible design, the size of the at least one target shooting object on the first preview image is larger than its size on the first image.
Exemplarily, assuming the magnification of the first image captured by the camera is 1x, the magnification of the first preview image may be 15x, 20x, 25x, and so on. In short, the first preview image is the preview image corresponding to the magnified first image, and what is magnified is the at least one target shooting object on the first image.
In a possible design, the method further includes: displaying a first window while displaying the first preview image, where the first window displays the first image and a first mark, and the first mark is used to mark the at least one target shooting object on the first image. That is, through the first mark in the first window, the user can determine which target shooting object on the first image the current preview image (i.e., the first preview image) corresponds to, which improves the user experience.
In a possible design, the method further includes: the first window further displays a second mark, where the second mark is used to mark target shooting objects on the first image other than the at least one target shooting object, and the first mark is different from the second mark. That is, through the first window the user can determine not only which target shooting objects are on the first image, but also which of them the current preview image (e.g., the first preview image) corresponds to, which improves the user experience.
In a possible design, the first image includes a first target shooting object and a second target shooting object, and the first preview image is the preview image corresponding to the first target shooting object; when a target shooting object switching operation is detected, a second preview image is displayed, where the second preview image is the preview image corresponding to the second target shooting object, and the size of the second target shooting object on the second preview image is larger than its size on the first image.
That is, the first target shooting object was originally shot enlarged, and through the switching operation the second target shooting object is shot enlarged instead; the user does not need to move the electronic device to find the target shooting object, so the operation is convenient.
In a possible design, the method further includes: displaying the first window while displaying the second preview image, where the first window displays the first image and the second mark, and the second mark is used to mark the second target shooting object on the first image; when the first window displays the second mark, the first mark in the first window is canceled from display, or the second mark is displayed in a manner distinguished from the first mark, where the first mark is used to mark the first target shooting object on the first image. In other words, when the first target shooting object is shot enlarged, the first window displays the first mark to mark it out; when, through the switching operation, the second target shooting object is shot enlarged, the first window displays the second mark to mark it out, while the first mark is canceled from display or displayed distinctly from the second mark. This makes it easier for the user to tell which target shooting object on the first image the current preview image (e.g., the second preview image) corresponds to.
In a possible design, the first image includes a first target shooting object and a second target shooting object, and the first preview image is the preview image corresponding to the first target shooting object; when an operation for adding a target shooting object to the preview image is detected, a third preview image is displayed, where the third preview image is the preview image corresponding to the first target shooting object and the second target shooting object, and the sizes of the first target shooting object and the second target shooting object on the third preview image are larger than their sizes on the first image.
That is, while the first target shooting object is being shot enlarged, the operation for increasing the number of target shooting objects in the preview image extends the enlarged shot to cover both the first and the second target shooting object, without requiring the user to move the electronic device to find them; the operation is convenient.
In a possible design, the first image includes a first target shooting object and a second target shooting object, and the first preview image is the preview image corresponding to the first target shooting object; when an operation for increasing the number of target shooting objects in the preview image is detected, the first preview image is displayed in a first area on the display screen of the electronic device, and a fourth preview image is displayed in a second area, where the fourth preview image is the preview image corresponding to the second target shooting object, and the size of the second target shooting object on the fourth preview image is larger than its size on the first image.
That is, while the first target shooting object is being shot enlarged, the operation for increasing the number of target shooting objects in the shooting interface extends the enlarged shot to cover both the first and the second target shooting object, displayed in split screen, without requiring the user to move the electronic device to find them; the operation is convenient.
In a possible design, the method further includes: when no target shooting object is recognized on the first image, displaying a fifth preview image, where the fifth preview image is the preview image corresponding to the image block in the central area of the first image. That is, if no target shooting object is recognized, the object in the central area of the first image is shot enlarged.
In a possible design, the method further includes: detecting a shooting instruction; and in response to the shooting instruction, capturing the first image and a second image, where the second image is the captured image corresponding to the first preview image. That is, during zoom shooting, if the shutter button is tapped, a complete image (the first image) is captured together with an enlarged image of the at least one target shooting object (the second image), which makes comparison convenient for the user and provides a better experience.
In a possible design, the at least one target shooting object may be the shooting object occupying the largest or smallest area on the first image; or the shooting object near the central area or near the edge area of the first image; or a shooting object on the first image that the user is interested in; or a target shooting object specified by the user.
The foregoing lists several ways of determining the at least one target shooting object; the embodiments of this application are not limited to these ways, and the at least one target shooting object may be determined in other ways.
In a possible design, the method further includes: hiding the first window when a window hiding operation is detected, and displaying the first window when a window recall operation is detected. That is, the first window can be called up or hidden; for example, the user may call up the first window to check which target shooting object on the first image the current preview image (e.g., the first preview image) corresponds to, and may hide the first window when not wanting it to block the first preview image.
According to a second aspect, an electronic device is provided, including:
a processor, a memory, and one or more programs;
where the one or more programs are stored in the memory and include instructions which, when executed by the processor, cause the electronic device to perform the following steps:
starting a camera application in the electronic device, where a camera on the electronic device captures a first image;
in a zoom shooting mode, recognizing target shooting objects on the first image;
displaying a first preview image, where the first preview image is the preview image corresponding to at least one target shooting object on the first image.
In a possible design, the size of the at least one target shooting object on the first preview image is larger than its size on the first image.
In a possible design, when the instructions are executed by the processor, the electronic device is caused to perform the following step: displaying a first window while displaying the first preview image, where the first window displays the first image and a first mark, and the first mark is used to mark the at least one target shooting object on the first image.
In a possible design, the first image includes a first target shooting object and a second target shooting object, and the first preview image is the preview image corresponding to the first target shooting object; when the instructions are executed by the processor, the electronic device is caused to perform the following step:
when a target shooting object switching operation is detected, displaying a second preview image, where the second preview image is the preview image corresponding to the second target shooting object.
In a possible design, the size of the second target shooting object on the second preview image is larger than its size on the first image.
In a possible design, when the instructions are executed by the processor, the electronic device is caused to perform the following steps:
displaying the first window while displaying the second preview image, where the first window displays the first image and a second mark, and the second mark is used to mark the second target shooting object on the first image;
where, when the first window displays the second mark, the first mark in the first window is canceled from display, the first mark being used to mark the first target shooting object on the first image.
In a possible design, the first image includes a first target shooting object and a second target shooting object, and the first preview image is the preview image corresponding to the first target shooting object; when the instructions are executed by the processor, the electronic device is caused to perform the following steps:
when an operation for adding a target shooting object to the preview image is detected, displaying a third preview image, where the third preview image is the preview image corresponding to the first target shooting object and the second target shooting object;
where the sizes of the first target shooting object and the second target shooting object on the third preview image are larger than their sizes on the first image.
In a possible design, the first image includes a first target shooting object and a second target shooting object, and the first preview image is the preview image corresponding to the first target shooting object; when the instructions are executed by the processor, the electronic device is caused to perform the following steps:
when an operation for increasing the number of target shooting objects in the preview image is detected, displaying the first preview image in a first area on the display screen of the electronic device and displaying a fourth preview image in a second area, where the fourth preview image is the preview image corresponding to the second target shooting object;
where the size of the second target shooting object on the fourth preview image is larger than its size on the first image.
In a possible design, when the instructions are executed by the processor, the electronic device is caused to perform the following step: when no target shooting object is recognized on the first image, displaying a fifth preview image, where the fifth preview image is the preview image corresponding to the image block in the central area of the first image.
In a possible design, when the instructions are executed by the processor, the electronic device is caused to perform the following steps: detecting a shooting instruction; and in response to the shooting instruction, capturing the first image and a second image, where the second image is the captured image corresponding to the first preview image.
In a possible design, when the instructions are executed by the processor, the electronic device is caused to perform the following steps: hiding the first window when a window hiding operation is detected, and displaying the first window when a window recall operation is detected.
According to a third aspect, a computer-readable storage medium is further provided, configured to store a computer program which, when run on a computer, causes the computer to perform the method provided in the first aspect.
According to a fourth aspect, a computer program product is further provided, including a computer program which, when run on a computer, causes the computer to perform the method provided in the first aspect.
According to a fifth aspect, a graphical user interface on an electronic device is further provided, where the electronic device has a display screen, a memory, and a processor configured to execute one or more computer programs stored in the memory, and the graphical user interface includes the graphical user interface displayed when the electronic device performs the method of the first aspect.
According to a sixth aspect, an embodiment of this application further provides a chip, coupled to a memory in an electronic device and configured to invoke a computer program stored in the memory and execute the technical solution of the first aspect of the embodiments of this application; in the embodiments of this application, "coupled" means that two components are joined to each other directly or indirectly.
For the beneficial effects of the second to sixth aspects, refer to those of the first aspect; details are not repeated here.
Brief description of drawings
FIG. 1 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of this application;
FIG. 2 is a schematic flowchart of a preview image display method in a zoom shooting scenario according to an embodiment of this application;
FIG. 3 and FIG. 4 are schematic diagrams of the shooting interface of an electronic device according to an embodiment of this application;
FIG. 5 is a schematic flowchart of a subject recognition algorithm according to an embodiment of this application;
FIG. 6 is a schematic diagram of an electronic device prompting a user to select a target shooting object according to an embodiment of this application;
FIG. 7A and FIG. 7B are schematic diagrams of matching a mark box with a preview image according to an embodiment of this application;
FIG. 8 to FIG. 12 are some schematic diagrams of the shooting interface of an electronic device according to an embodiment of this application;
FIG. 13A and FIG. 13B are other schematic diagrams of the shooting interface of an electronic device according to an embodiment of this application;
FIG. 14 shows further schematic diagrams of the shooting interface of an electronic device according to an embodiment of this application;
FIG. 15 is another schematic flowchart of a preview image display method in a zoom shooting scenario according to an embodiment of this application;
FIG. 16 is a schematic diagram of a manual mode of zoom shooting according to an embodiment of this application;
FIG. 17 is a schematic diagram of the software structure of an electronic device according to an embodiment of this application;
FIG. 18 is a schematic diagram of information exchange between different modules in an electronic device according to an embodiment of this application;
FIG. 19 is another schematic flowchart of a preview image display method in a zoom shooting scenario according to an embodiment of this application;
FIG. 20 is another schematic structural diagram of an electronic device according to an embodiment of this application.
Detailed description of embodiments
Some terms used in the embodiments of this application are explained below to facilitate understanding by those skilled in the art.
The preview image in the embodiments of this application refers to the image displayed in the shooting interface (or viewfinder interface) of an electronic device. For example, taking a mobile phone as the electronic device: the phone starts the camera application, turns on the camera, and displays the shooting interface, in which a preview image is displayed; the preview image is the image captured by the camera.
The field of view in the embodiments of this application is an important performance parameter of a camera. "Field of view" may also be called "viewing angle", "field range", "visual range", and so on; the name is not limited herein. The field of view indicates the maximum angular range that the camera can capture. If an object is within this range, it is captured by the camera and appears in the preview image; if it is outside this range, it is not captured by the camera and does not appear in the preview image. Generally, the larger the field of view of a camera, the larger the shooting range and the shorter the focal length; the smaller the field of view, the smaller the shooting range and the longer the focal length. Cameras can therefore be divided by field of view into ordinary cameras, wide-angle cameras, ultra-wide-angle cameras, and so on. For example, an ordinary camera may have a focal length of 45 to 40 mm and a viewing angle of 40 to 60 degrees; a wide-angle camera may have a focal length of 38 to 24 mm and a viewing angle of 60 to 84 degrees; an ultra-wide-angle camera may have a focal length of 20 to 13 mm and a viewing angle of 94 to 118 degrees.
The preview image display method in a zoom shooting scenario provided in the embodiments of this application may be applied to an electronic device. The electronic device includes a camera, which may be a wide-angle camera or an ultra-wide-angle camera, or of course an ordinary camera. The number of cameras is not limited in this application; there may be one or more, and if there are more, they may include at least one wide-angle or ultra-wide-angle camera. The electronic device may be a portable electronic device, such as a mobile phone, a tablet computer, a portable computer, a wearable device with a wireless communication function (such as a smart watch, smart glasses, a smart band, or a smart helmet), or an in-vehicle device. Exemplary embodiments of portable electronic devices include, but are not limited to, portable electronic devices running the operating systems indicated by the trademark image Figure PCTCN2022088235-appb-000001 in the original, or other operating systems.
FIG. 1 shows a schematic structural diagram of the electronic device. As shown in FIG. 1, the electronic device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, antenna 1, antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The processor 110 may include one or more processing units; for example, it may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices or integrated in one or more processors. The controller may be the nerve center and command center of the electronic device; it can generate operation control signals according to instruction operation codes and timing signals to complete the control of instruction fetching and execution. A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache, which can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instruction or data again, it can be called directly from this memory, avoiding repeated access and reducing the waiting time of the processor 110, thereby improving system efficiency.
The USB interface 130 is an interface conforming to the USB standard specification, specifically a Mini USB interface, a Micro USB interface, a USB Type-C interface, etc. The USB interface 130 may be used to connect a charger to charge the electronic device, or to transfer data between the electronic device and peripheral devices. The charging management module 140 is configured to receive charging input from a charger. The power management module 141 is configured to connect the battery 142 and the charging management module 140 to the processor 110; it receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the electronic device may be implemented through antenna 1, antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like. Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device may be used to cover a single or multiple communication frequency bands, and different antennas may be multiplexed to improve antenna utilization; for example, antenna 1 may be multiplexed as a diversity antenna for a wireless local area network. In some other embodiments, an antenna may be used in combination with a tuning switch.
The mobile communication module 150 can provide solutions for wireless communication including 2G/3G/4G/5G applied to the electronic device. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like. It can receive electromagnetic waves via antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation; it can also amplify signals modulated by the modem processor and convert them into electromagnetic waves for radiation via antenna 1. In some embodiments, at least some functional modules of the mobile communication module 150 may be provided in the processor 110; in some embodiments, at least some functional modules of the mobile communication module 150 may be provided in the same device as at least some modules of the processor 110.
The wireless communication module 160 can provide solutions for wireless communication applied to the electronic device, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. It receives electromagnetic waves via antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110; it can also receive signals to be sent from the processor 110, frequency-modulate and amplify them, and convert them into electromagnetic waves for radiation via antenna 2.
In some embodiments, antenna 1 of the electronic device is coupled to the mobile communication module 150 and antenna 2 is coupled to the wireless communication module 160, so that the electronic device can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The display screen 194 is used to display the display interfaces of applications, and the like. The display screen 194 includes a display panel, which may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light emitting diodes (QLED), and the like. In some embodiments, the electronic device may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device can implement the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when taking a photo, the shutter opens, light is transmitted through the lens to the camera's photosensitive element, the light signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithmic optimization on image noise, brightness, and skin tone, and can optimize parameters such as exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the light signal into an electrical signal and passes it to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device may include 1 or N cameras 193, where N is a positive integer greater than 1.
The internal memory 121 may be used to store computer-executable program code, where the executable program code includes instructions. By running the instructions stored in the internal memory 121, the processor 110 executes various functional applications and data processing of the electronic device. The internal memory 121 may include a program storage area and a data storage area: the program storage area can store the operating system and the software code of at least one application (such as the iQIYI application or the WeChat application), and the data storage area can store data generated during use of the electronic device (such as images and videos). In addition, the internal memory 121 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example, saving files such as pictures and videos in the external memory card.
The electronic device can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and the like.
The pressure sensor 180A is used to sense pressure signals and can convert pressure signals into electrical signals. In some embodiments, the pressure sensor 180A may be provided on the display screen 194. The gyroscope sensor 180B may be used to determine the motion posture of the electronic device. In some embodiments, the angular velocity of the electronic device about three axes (i.e., the x, y, and z axes) may be determined through the gyroscope sensor 180B.
The gyroscope sensor 180B may be used for image stabilization during shooting. The barometric pressure sensor 180C is used to measure air pressure; in some embodiments, the electronic device calculates altitude from the pressure value measured by the barometric pressure sensor 180C to assist positioning and navigation. The magnetic sensor 180D includes a Hall sensor. The electronic device can use the magnetic sensor 180D to detect the opening and closing of a flip leather case. In some embodiments, when the electronic device is a flip phone, it can detect the opening and closing of the flip cover according to the magnetic sensor 180D and, based on the detected open or closed state of the case or cover, set features such as automatic unlocking upon flipping open. The acceleration sensor 180E can detect the magnitude of acceleration of the electronic device in all directions (generally three axes), and can detect the magnitude and direction of gravity when the electronic device is stationary; it can also be used to recognize the posture of the electronic device, applied to landscape/portrait switching, pedometers, and similar applications.
The distance sensor 180F is used to measure distance. The electronic device can measure distance by infrared or laser. In some embodiments, in a shooting scene, the electronic device can use the distance sensor 180F to measure distance for fast focusing. The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared LED. The electronic device emits infrared light outward through the LED and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device; when insufficient reflected light is detected, the electronic device can determine that there is no object nearby. The electronic device can use the proximity light sensor 180G to detect that the user is holding it close to the ear during a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G can also be used for automatic unlocking and screen locking in leather-case mode and pocket mode.
The ambient light sensor 180L is used to sense ambient light brightness. The electronic device can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness. The ambient light sensor 180L can also be used to automatically adjust white balance when taking photos, and can cooperate with the proximity light sensor 180G to detect whether the electronic device is in a pocket to prevent accidental touches. The fingerprint sensor 180H is used to capture fingerprints; the electronic device can use the captured fingerprint characteristics to implement fingerprint unlocking, access to application locks, fingerprint photographing, answering calls by fingerprint, and the like.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device executes a temperature processing policy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device reduces the performance of the processor located near the temperature sensor 180J so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is lower than another threshold, the electronic device heats the battery 142 to avoid an abnormal shutdown caused by low temperature. In still other embodiments, when the temperature is lower than yet another threshold, the electronic device boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
The touch sensor 180K is also called a "touch panel". The touch sensor 180K may be provided on the display screen 194; the touch sensor 180K and the display screen 194 form a touchscreen, also called a "touch screen". The touch sensor 180K is used to detect touch operations acting on or near it, and can pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation can be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be provided on the surface of the electronic device at a position different from that of the display screen 194.
The bone conduction sensor 180M can acquire vibration signals. In some embodiments, the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone of the human vocal part; it can also contact the human pulse and receive blood pressure beat signals.
The buttons 190 include a power button, volume buttons, and the like; they may be mechanical buttons or touch buttons. The electronic device can receive button input and generate key signal input related to user settings and function control of the electronic device. The motor 191 can generate vibration prompts; it can be used for incoming-call vibration prompts as well as touch vibration feedback. For example, touch operations acting on different applications (such as photographing and audio playback) may correspond to different vibration feedback effects. The indicator 192 may be an indicator light, which can be used to indicate charging status and battery level changes, and can also be used to indicate messages, missed calls, notifications, and the like. The SIM card interface 195 is used to connect a SIM card; a SIM card can be inserted into the SIM card interface 195 or pulled out of it to make contact with or be separated from the electronic device.
It can be understood that the components shown in FIG. 1 do not constitute a specific limitation on the electronic device. The electronic device in the embodiments of the present invention may include more or fewer components than in FIG. 1, and the combination/connection relationships between the components in FIG. 1 may also be adjusted and modified.
When the electronic device has a zoom shooting function and detects an operation for increasing the zoom ratio, it can magnify and shoot distant scenery. A general zoom shooting flow is as follows: the user opens the camera application on the electronic device and starts the camera; the electronic device displays the shooting interface, in which the preview image, i.e., the image captured by the camera, is displayed. When there is an object the user wants to shoot in the preview image (for example, a flower), the user may want to magnify that object and so performs an operation for increasing the zoom ratio (e.g., a slide gesture in which the index finger and middle finger move apart). When the electronic device detects the operation for increasing the zoom ratio, the preview image displayed in the shooting interface is updated to the image block in the central area of the image captured by the camera, i.e., the image block in the central area is displayed enlarged. But the object the user wants to shoot (e.g., that flower) may not be located in the central area; in that case the updated preview image no longer contains the desired object, and the user has to move the phone to search for it. The search is time-consuming and laborious, and once the phone moves a little too fast it is easy to miss the target object; the operation is inconvenient and the experience is poor.
In view of this, an embodiment of this application provides a preview image display method in a zoom shooting scenario. In this method, after starting the camera application, the electronic device displays the shooting interface, in which a first preview image is displayed; the first preview image is a first image captured by the camera. The first image may be one image or an image stream captured by the camera. When an operation for increasing the zoom ratio is detected, in response to that operation, target shooting objects on the first image are determined. A target shooting object may be located anywhere on the first image, not necessarily in its central area. Then a second preview image is displayed in the shooting interface of the camera application; the second preview image is the image block in a first region on the first image, and the first region is the region where at least one target shooting object is located. That is, when the electronic device detects the operation for increasing the zoom ratio, in response to that operation the first preview image in the shooting interface is updated to the second preview image, and the second preview image is the image block in the region where at least one target shooting object is located on the first image. For example, if the at least one target shooting object is in the upper-left corner of the first image, the second preview image is the image block in the upper-left corner region of the first image; if it is in the lower-right corner, the second preview image is the image block in the lower-right corner region. This differs from the foregoing general zoom shooting flow: in the general flow, when the operation for increasing the zoom ratio is detected, the displayed second preview image is always the image block in the central area of the first image, so that if the object the user wants to shoot is not in the central area, the user has to move the electronic device to find it. In the preview image display method in a zoom scenario provided in this application, when the zoom ratio is increased, the second preview image is the image block in the region where at least one target shooting object is located on the first image, so there is no need to move the electronic device to find the target shooting object; the operation is convenient.
For ease of understanding, the following embodiments of this application take a mobile phone as the electronic device and, with reference to the accompanying drawings, describe in detail the preview image display method in a zoom shooting scenario provided in the embodiments of this application.
Embodiment 1
Refer to FIG. 2, which is a schematic flowchart of the preview image display method in a zoom shooting scenario provided in this embodiment of the application. The method is applicable to an electronic device with the hardware structure shown in FIG. 1, such as a mobile phone or tablet computer; the following description mainly takes a mobile phone as an example. As shown in FIG. 2, the flow includes:
S201: Start the camera application, turn on the camera, and display the shooting interface, in which a first preview image is displayed; the first preview image is the first image captured by the camera.
Exemplarily, (a) in FIG. 3 shows a graphical user interface (GUI) of the mobile phone, which is the desktop 301 of the phone. When the phone detects that the user taps the icon 302 of the camera application (APP) on the desktop 301, it can start the camera application and display another GUI as shown in (b) of FIG. 3, which may be called the shooting interface (or viewfinder interface) 303. In the preview state, the preview image can be displayed in real time in the shooting interface 303. Exemplarily, referring to (b) in FIG. 3, the first preview image is displayed in the shooting interface 303, and the first preview image is the first image captured by the camera. It should be noted that the first image may refer to one image or to the image stream captured by the camera. If the camera is a wide-angle camera, the first image is a large field of vision (FOV) image. Still referring to (b) in FIG. 3, the shooting interface 303 may also include a control 305 for indicating the photo mode, a control 306 for indicating the video mode, and a shooting control 307. In photo mode, when the phone detects that the user taps the shooting control 307, the phone performs the photographing operation; in video mode, when the phone detects that the user taps the shooting control 307, the phone performs the video-recording operation.
S202: Determine whether a trigger condition is detected.
The trigger condition may be detecting an operation for increasing the zoom ratio of the camera application, and/or detecting that the increased zoom ratio reaches a preset zoom ratio (or threshold). The preset zoom ratio may be any value between the minimum and maximum ratios; for example, if the zoom range is 1X-10X, the preset zoom ratio may be 5X, 6X, 7X, 9X, 10X, etc. The specific value of the preset zoom ratio may be set by default at the factory or set by the user; this application does not limit this. Alternatively, the preset zoom ratio may be the lossless zoom ratio. Zoom shooting includes physical zoom and electronic zoom: physical zoom achieves zooming through physical changes of the camera (such as movement of the lens position), while electronic zoom achieves zooming by processing the image captured by the camera with image processing algorithms (such as pixel interpolation). The lossless zoom ratio can be understood as the dividing point between physical zoom and electronic zoom. That is, as the zoom ratio increases, physical zoom can be used until the lossless zoom ratio is reached, in which case the image captured by the camera has itself been physically zoomed and its definition is relatively high; once the zoom ratio exceeds the lossless zoom ratio, physical zoom has reached its limit and electronic zoom can be used. However, electronic zoom requires post-processing (such as pixel interpolation) of the captured image, which reduces image definition. Therefore, setting the zoom threshold (i.e., the preset zoom ratio) to the lossless zoom ratio ensures that the captured first image has been physically zoomed without loss of definition. A high-definition first image, on the one hand, improves the accuracy of target-shooting-object recognition in the subsequent processing flow and, on the other hand, ensures that when an image block is cropped from the first image and enlarged as the preview image, the user's shooting experience is not degraded by low definition. It can be understood that if the lossless zoom ratio is a single value, the preset zoom ratio may be that value; if the lossless zoom ratio is a range, the preset zoom ratio may be any value in the range, such as the maximum value.
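The trigger condition described above (a zoom-in operation combined with the requested ratio reaching a preset, possibly lossless, threshold) can be sketched as follows. This is an illustrative sketch only: the function name, the 5x threshold, and the and-combination of the two conditions are assumptions for illustration, not the patent's actual implementation.

```python
# Hypothetical sketch of the trigger condition check in S202: the
# subject-aware preview is entered only when a zoom-in operation is
# detected AND the requested zoom ratio reaches the preset threshold
# (here assumed to be the lossless zoom boundary, e.g. 5x).
LOSSLESS_ZOOM = 5.0  # assumed boundary between physical and electronic zoom

def trigger_condition(zoom_gesture_detected: bool, requested_zoom: float,
                      preset_zoom: float = LOSSLESS_ZOOM) -> bool:
    """Return True when the S202 trigger condition is met."""
    return zoom_gesture_detected and requested_zoom >= preset_zoom
```

The patent also allows either condition alone ("and/or"), in which case the conjunction above would be relaxed accordingly.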
The operation for increasing the zoom ratio of the camera application may be the slide gesture in FIG. 4 in which the user's index finger and thumb move apart (the two black dots in FIG. 4 represent the contact points of different fingers with the screen, and the arrow represents the slide direction). Alternatively, the operation may be a preset gesture operation, such as a single/double tap at a position in the shooting interface (a preset position, or any position), drawing a circle, a knuckle tap, a multi-finger tap, a multi-finger slide up, etc.; the embodiments of this application do not limit this. Alternatively, the operation may be an operation on a specific physical or virtual button; for example, a virtual button may be displayed in the shooting interface, and when an operation on it is detected, the zoom ratio of the camera application is increased; or, when the electronic device detects that a certain physical button is triggered, or that multiple physical buttons are triggered in combination, it determines to increase the zoom ratio of the camera application. Alternatively, the operation may be a voice instruction for increasing the zoom ratio, for example a voice instruction such as "zoom in" or "magnify the image".
S203: If the trigger condition is detected, recognize the target shooting objects on the first image. A target shooting object is one or more shooting objects in the first image.
A target shooting object may be one or more objects on the first image. The one or more objects may be of the same type or of different types; this is not limited.
When multiple objects exist on the first image, the target shooting object may be a target object among all the objects. For example, the target object may be a preset object: when the electronic device determines that a preset object exists on the first image, it determines that preset object as the target shooting object. The preset object may be set by default, or preset by the user; the embodiments of this application do not limit this. As another example, the target object may be an object the user is interested in. In other words, the electronic device determines the target shooting object on the first image according to the objects the user is interested in. The objects the user is interested in may be objects the electronic device has recorded the user frequently shooting, or frequently retouching. In one implementation, taking a cat as an example, if the electronic device determines that many images of cats are stored in the gallery, it determines that the object the user is interested in is the cat. In another implementation, the electronic device records the objects the user frequently retouches with editing software and determines those objects as the objects of interest. When the electronic device determines that an object of interest exists on the first image, it determines that object as the target shooting object.
The target shooting object may also be one or more object types on the first image. One object type may correspond to one or more objects of that type; in other words, when the target shooting object is an object type, it includes all objects of that type on the first image. For example, if an image includes person 1 and person 2 and the target shooting object is the object type "person", then the recognized target shooting objects on the first image include both person 1 and person 2.
When multiple object types exist on the first image, the target shooting object may be a target object type among them. The target object type may be any one or more of the multiple types; if it is multiple types, the multiple types are recognized at the same time. For example, the target object type may be the type with the higher priority among the multiple types, with a priority relationship such as: person > animal > text > food > flower > green plant > building. The electronic device may first check whether the first image includes the "person" type; if so, all objects of the "person" type on the first image (i.e., all people in it) are determined as target shooting objects; if not, it continues to check whether the first image includes the "animal" type, and if so, all objects of the "animal" type are determined as target shooting objects; if not, it continues with the next priority level, and so on. The priority relationship may be set by factory default or by the user; this application does not limit this. As another example, the target object type may be a preset object type, which may be a factory-default or user-set type; the embodiments of this application do not limit this. As yet another example, the target object type may be an object type the user is interested in; in other words, the electronic device determines the target shooting object on the first image according to the object type the user is interested in. In one implementation, taking a cat as an example, if the electronic device determines that many images of cats are stored in the gallery, it determines that the object type of interest is "animal". In another implementation, the electronic device records the objects the user frequently retouches with editing software and determines the object type to which those objects belong as the type of interest.
The above lists ways of determining the target shooting object; in practical applications there may be other ways, which the embodiments of this application do not enumerate one by one.
In the embodiments of this application, the algorithm used to recognize the target shooting objects on the first image is called the subject recognition algorithm, which may be an image semantic analysis algorithm. Refer to FIG. 5 for a schematic flowchart of the image semantic analysis algorithm. Briefly, it includes three steps: image binarization, semantic segmentation, and detection and localization. Binarization sets the gray value of each pixel of the image to 0 or 255, reducing the 256 gray values from 0 to 255 to only these two. Since binarization simplifies the gray values of the image, using the binarized image as the input for subsequent processing (semantic segmentation and detection/localization) can speed up that processing. Semantic segmentation can be understood as dividing the image into different segmented regions, each of which is given a corresponding semantic label, for example characterizing the features within that region. Generally, the texture and gray level inside each segmented region obtained by semantic segmentation are similar, and the boundaries between different regions are clear. The next processing step, detection and localization, can then be performed on each segmented region of the image, specifically locating the target shooting object within each region.
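The binarization step of the pipeline above can be sketched as a simple global threshold. This is a minimal illustration of the 0/255 mapping described in the text, assuming a plain nested-list grayscale image and a threshold of 128; the real algorithm in the patent is not specified at this level of detail.

```python
def binarize(gray, threshold=128):
    """Map every gray value to 0 or 255, the first step of the
    subject-recognition pipeline described above (illustrative only)."""
    return [[0 if pixel < threshold else 255 for pixel in row]
            for row in gray]
```

Semantic segmentation and detection/localization would then run on the simplified image; those steps typically use learned models and are not sketched here.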
This application does not limit the execution order of S202 and S203. If S202 is executed before S203, one example is that when the electronic device detects the trigger condition (S202), it starts the subject recognition algorithm and then uses it to recognize the target shooting objects on the first image. In this way the subject recognition algorithm can be turned off when not in use and started when the trigger condition is detected, which helps save power. In another example, the subject recognition algorithm is always running, and when the trigger condition (S202) is detected, recognition with the algorithm begins; this saves the time needed to start the algorithm and is more efficient. If S203 is executed before S202, one example is that the electronic device starts the subject recognition algorithm when the camera application is opened (S201) and uses it to recognize the target shooting objects on the first image (S203); when the trigger condition (S202) is detected, S204 is entered. Since the subject recognition algorithm is already recognizing before the trigger condition, S204 can be entered quickly once the trigger condition is detected, giving a better user experience. In another example, the subject recognition algorithm is always running, and recognition begins when the camera application is opened (S201); this saves the time needed to start the algorithm and is more efficient.
S204: Display the second preview image in the shooting interface, where the second preview image is the image block in the first region on the first image, and the first region is the region where at least one target shooting object is located.
For ease of description, the at least one target shooting object is called the first target shooting object; that is, there may be one or more first target shooting objects. For ease of description, the following mainly takes the case of one first target shooting object as an example.
It should be noted that multiple target shooting objects may be recognized in S203. Therefore, before S204, a step may also be included: determining the first target shooting object among the multiple target shooting objects. There may be multiple ways of doing so, for example:
Way 1: Determine the first target shooting object according to the positions of the multiple target shooting objects. For example, the first target shooting object is the target shooting object closest to the image center, or closest to the image edge, among all target shooting objects.
Way 2: Determine the first target shooting object according to the areas occupied by the multiple target shooting objects. For example, the first target shooting object is the one occupying the largest or smallest area among all target shooting objects.
Way 3: If the multiple target shooting objects correspond to multiple object types, determine the first target shooting object according to their object types. For example, if the object types corresponding to the multiple target shooting objects include the person type and the animal type, and the priority relationship establishes that the person type has higher priority than the animal type, then the shooting object of the person type among the multiple target shooting objects is the first target shooting object. The priority relationship may include person > animal > text > food > flower > green plant > building; optionally, the user may adjust this priority relationship.
Way 4: Determine the first target shooting object among the multiple target shooting objects according to the user's specifying operation.
For example, continuing with FIG. 4, when the electronic device detects the operation for increasing the zoom ratio (e.g., the slide gesture in which the index finger and thumb move apart) and recognizes that the first image includes two target shooting objects (e.g., two birds), it displays the shooting interface shown in FIG. 6, which displays the prompt: please select the first target shooting object, along with the numbers of all recognized target shooting objects, e.g., bird 1 numbered 1 and bird 2 numbered 2. When the electronic device detects the user tapping number 2, it determines that bird 2 is the first target shooting object specified by the user. Way 4 determines the first target shooting object according to the user's choice, conforming to the user's preference.
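Ways 2 and 3 above can be combined into a small selection routine. The sketch below is illustrative only: the English type labels, the dict representation of detected objects, and the tie-break (type priority first, then largest area) are assumptions for illustration, not the patent's specified behavior.

```python
# Assumed priority order from the text: person > animal > text > food >
# flower > green plant > building (user-adjustable in the patent).
PRIORITY = ["person", "animal", "text", "food", "flower", "plant", "building"]

def pick_first_target(objects):
    """Pick the first target shooting object from detected objects.

    objects: list of dicts like {"type": "person", "area": 120}.
    Selection: highest type priority (Way 3), ties broken by largest
    occupied area (Way 2).
    """
    return min(objects,
               key=lambda o: (PRIORITY.index(o["type"]), -o["area"]))
```

Ways 1 (position) and 4 (explicit user choice) would replace or override this key function in a real implementation.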
Optionally, before S204, a step may also be included: determining the first region according to the first target shooting object. Specifically, this may include determining the first region according to the first target shooting object and the resolution of the preview image.
Taking FIG. 6 as an example, assume the first target shooting object is bird 2 on the first image. The way of determining the first region includes: setting a mark box to enclose bird 2; see FIG. 7A. This application does not limit the shape of the mark box, which may be a rectangle, square, circle, ellipse, etc., or the minimal circumscribed polygon of the edge contour of the target shooting object. Then the first region is determined: the first region includes the area enclosed by the mark box, and the resolution of the first region matches the resolution of the preview image. The matching may include the following two cases:
(1) In landscape shooting, see FIG. 7A: the length of the preview image is greater than its width, i.e., the resolution (aspect ratio) is greater than 1, so the aspect ratio of the first region is also greater than 1, for example equal to the resolution of the preview image. For example, if the resolution of the preview image is 16:4, the size of the first region has aspect ratio 16:4. In this case there is no need to further adjust the aspect ratio of the image block in the first region to fit the size requirement of the preview image, which is convenient.
(2) In portrait shooting, see FIG. 7B: the length of the preview image is less than its width, i.e., the resolution (aspect ratio) is less than 1, so the aspect ratio of the first region is also less than 1, for example equal to the resolution of the preview image. For example, if the resolution of the preview image is 4:16, the size of the first region has aspect ratio 4:16; that is, there is no need to further adjust the aspect ratio of the image block in the first region to fit the size requirement of the preview image.
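The first-region computation described above (expand the mark box to the preview's aspect ratio) can be sketched as follows. This is an assumed, simplified geometry: centering on the mark box and omitting clamping to the image bounds, neither of which the patent specifies.

```python
def first_region(box, preview_w, preview_h):
    """Expand a mark box (x, y, w, h) to the smallest enclosing region
    whose aspect ratio equals that of the preview image, centered on
    the box. Illustrative sketch; no clamping to image borders."""
    x, y, w, h = box
    target = preview_w / preview_h          # preview aspect ratio
    if w / h < target:                      # box too narrow: widen it
        new_w, new_h = h * target, h
    else:                                   # box too short: heighten it
        new_w, new_h = w, w / target
    cx, cy = x + w / 2, y + h / 2           # keep the subject centered
    return (cx - new_w / 2, cy - new_h / 2, new_w, new_h)
```

For a 16:4 landscape preview and a square 4x4 mark box, the region is widened to 16x4 around the subject; a production version would additionally clip the region to the first image's bounds.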
After the first region is determined, the electronic device displays the image block in the first region in the shooting interface as the second preview image. For example, see FIG. 8: the second preview image is displayed in the shooting interface, and the second preview image is the image block in the first region, where bird 2 is located, on the first image. Comparing FIG. 4 and FIG. 8: when the electronic device in FIG. 4 detects the operation for increasing the zoom ratio (e.g., the slide gesture in which the index finger and thumb move apart), it displays the first target shooting object (e.g., bird 2) on the first preview image enlarged, i.e., FIG. 8. Since the first target shooting object is a target shooting object recognized from the first preview image, i.e., a shooting object the user may want to magnify, the probability that the user needs to move the electronic device to find the desired object is lower than with the general zoom shooting approach (enlarging the image block in the central area of the image), and the operation is convenient.
The shooting interface in FIG. 8 shoots bird 2 enlarged. Optionally, the user can also switch the first target shooting object (e.g., bird 2) in the shooting interface to a second target shooting object (e.g., bird 1). For example, when the electronic device is displaying the second preview image and detects a target shooting object switching operation, a third preview image is displayed in the shooting interface; the third preview image is the image block in a second region, where a second target shooting object is located, on the first image, the second target shooting object being another target shooting object on the first image different from the first. For example, referring to FIG. 8, when the electronic device detects the switching operation, it displays the shooting interface shown in FIG. 9, in which the third preview image is displayed; the third preview image is the image block in the second region where the second target shooting object (bird 1) is located on the first image. Comparing FIG. 8 and FIG. 9: in FIG. 8 bird 2 (the bird on the left) is displayed enlarged in the shooting interface; when the electronic device detects the switching operation, it displays FIG. 9, i.e., bird 1 (the bird on the right) is displayed enlarged. In this way, in a zoom shooting scenario, the target shooting object in the shooting interface can be switched while the electronic device stays still, giving a better user experience. In the general zoom shooting approach (enlarging the image block in the central area), switching the shooting object requires the user to manually move the electronic device to find the desired object, which is cumbersome.
It can be understood that, before displaying the third preview image, steps may also be included: determining the second target shooting object among the remaining target shooting objects other than the first target shooting object, and determining the second region according to the second target shooting object. The way of determining the second target shooting object follows the same principle as the foregoing way of determining the first target shooting object, and is not repeated. For example, if the first target shooting object is bird 2 and the only remaining target shooting object is bird 1, then the second target shooting object is determined to be bird 1. The process of determining the second region from the second target shooting object follows the same principle as the foregoing determination of the first region from the first target shooting object, and is not repeated.
The target shooting object switching operation may be an operation on a specific button in the shooting interface. For example, still referring to FIG. 8, the shooting interface displays the second preview image and also displays a button 801; when an operation on the button 801 is detected, switching is performed from the first target shooting object to the second target shooting object. When an operation on the button 801 is detected again, switching is performed from the second target shooting object to a third target shooting object, the third target shooting object being a target shooting object among the multiple target shooting objects other than the first and second. That is, through the button 801, every target shooting object among the multiple target shooting objects on the first image can be traversed. Alternatively, the switching operation may be an operation on a physical button, for example the volume buttons: when two consecutive presses of the volume-up button are detected, switching is performed from the first target shooting object to the second; when two consecutive presses of the volume-up button are detected again, switching is performed from the second target shooting object to the third. Alternatively, the switching operation may be a detected voice instruction for switching the target shooting object, for example a voice instruction including "next".
To help the user distinguish which target shooting object on the first image is displayed in the current preview image (e.g., the second or third preview image), the shooting interface may also display a first window, which displays the first image with the target shooting object shown in the current preview image highlighted. Highlighting can be understood as the target shooting object being marked out; the marking manner is not limited, for example being circled.
For example, see (a) in FIG. 10: a first window 1001 is displayed on the second preview image; the first window 1001 contains the first image, with bird 2 (the bird on the left) circled on it, to prompt the user that the current preview image (i.e., the second preview image) is the circled image block. In this way, through the first window 1001 the user can see which shooting object the current preview image corresponds to. When the target shooting object switching operation is detected, the third preview image shown in (b) of FIG. 10 is displayed, corresponding to bird 1 (the bird on the right) on the first image; at this point bird 1 is circled on the first image in the first window 1001, to prompt the user that the current preview image (i.e., the third preview image) is the circled image block. Optionally, when bird 1 (the bird on the right) is circled, the circled state of bird 2 (the bird on the left) can be canceled.
Optionally, the first window 1001 may appear automatically. For example, comparing FIG. 4 and (a) in FIG. 10: when the electronic device is displaying the first preview image of FIG. 4 and detects the operation for increasing the zoom ratio, it displays the second preview image shown in (a) of FIG. 10, on which the first window appears automatically. Or, comparing FIG. 4, FIG. 8, and (a) in FIG. 10: when the electronic device is displaying the first preview image of FIG. 4 and detects the operation for increasing the zoom ratio, it displays the second preview image of FIG. 8, on which the first window has not yet appeared; when an operation for calling up the first window is detected, the interface shown in (a) of FIG. 10 is displayed and the first window appears. The operation for calling up the first window may, for example, be an operation on a specific button in the shooting interface: a specific button is displayed, and when an operation on it is detected, the first window is displayed. Of course, to avoid blocking the preview image, the first window can also be hidden: when an operation for hiding the first window is detected, the first window is hidden. The hiding operation may be an operation on the same specific button, i.e., the first window is displayed when an operation on the button is detected and hidden when an operation on it is detected again. Alternatively, the hiding operation may be an operation of holding the first window and dragging it off the screen, or a tap on a delete button popped up by long-pressing the first window. Or, see (a) in FIG. 11: when a left-to-right slide at the upper-left corner of the shooting interface is detected, the first window is called up and the interface of (b) in FIG. 11 is displayed; when a right-to-left slide at the upper-left corner is detected, the first window is hidden and the interface of (c) in FIG. 11 is displayed.
In the embodiment shown in FIG. 10, only the target shooting object displayed in the current preview image is marked in the first window 1001; target shooting objects not displayed in the current preview image are not marked. For example, in (a) of FIG. 10 only bird 2 on the left is marked in the first window 1001, and since bird 1 is not in the second preview image, bird 1 is not marked in the first window 1001. Optionally, in other embodiments, all target shooting objects on the first image in the first window 1001 may be marked. For example, see (a) in FIG. 12: bird 1 and bird 2 on the first image in the first window 1001 are both marked, for example each with its own mark box (bird 2 on the left corresponds to mark box 1002, and bird 1 on the right corresponds to mark box 1003). Since the current preview image is the second preview image, corresponding to bird 2 on the left, mark box 1002 is highlighted relative to mark box 1003; in this way the user can determine that the current preview image corresponds to bird 2. When the switching operation is detected and the first target shooting object (bird 2 on the left) is switched to the second target shooting object (bird 1 on the right), see (b) in FIG. 12: mark box 1003 of bird 1 on the right on the first image in the first window 1001 is highlighted relative to mark box 1002 of bird 2 on the left; in this way the user can determine that the current preview image corresponds to bird 1.
When all target shooting objects in the first window 1001 can be marked, the target shooting object switching operation may also be an operation of tapping a mark box in the first window 1001. For example, taking (a) in FIG. 12: the current preview corresponds to bird 2 in mark box 1002; when the electronic device detects the user tapping mark box 1003 in the first window 1001, the third preview image, i.e., (b) in FIG. 12, is displayed, corresponding to bird 1 in mark box 1003. That is, the switching of the target shooting object is completed by tapping a mark box in the first window 1001.
In the above embodiment, only one target shooting object (one bird) is displayed in the shooting interface. Optionally, the number of target shooting objects in the shooting interface can also be increased.
As an example, taking (a) in FIG. 12, the second preview image corresponds to bird 2. When an operation for increasing the number of target shooting objects in the shooting interface is detected, two target shooting objects, bird 1 and bird 2, are displayed in the shooting interface. One way: determine region 1 according to bird 1 (region 1 contains bird 1) and region 2 according to bird 2 (region 2 contains bird 2), then divide the shooting interface into two areas (split-screen display): the first area displays the image block in region 1 and the second area displays the image block in region 2, as in FIG. 13A. At this time, both mark boxes in the first window 1001 of the shooting interface are highlighted. The first and second areas may be two areas divided top-and-bottom, or left-and-right; this application does not limit this. For example, they may be divided left-and-right in landscape shooting and top-and-bottom in portrait shooting. Another way, different from split-screen display: determine a third region according to the two target shooting objects, and display the image block in the third region in the shooting interface. For example, the third region is the smallest region that can enclose the two target shooting objects (bird 1 and bird 2), and the resolution of the third region matches the resolution of the preview image. In this way, the display effect after increasing the number of target shooting objects is as in FIG. 13B: the shooting interface also displays the first window 1001, in which a mark box 1004 is displayed, and this mark box 1004 encloses bird 1 and bird 2.
The operation for increasing the number of target shooting objects in the shooting interface may include: taking (a) in FIG. 12 as an example, mark box 1002 is already selected; when an operation of selecting mark box 1003 in the first window 1001 is detected (e.g., a long-press on mark box 1003), it is determined that both mark box 1002 and mark box 1003 are selected, and bird 1 in mark box 1003 is added to the shooting interface, i.e., FIG. 13A or FIG. 13B is displayed. Alternatively, the operation may be an operation for decreasing the zoom ratio, such as a slide gesture in which the thumb and index finger move toward each other. Alternatively, it may be an operation on a preset button in the shooting interface: a preset button is displayed, and when an operation on it is detected, it is determined to increase the number of target shooting objects in the shooting interface. Alternatively, it may be a voice instruction for increasing the number of target shooting objects.
Continuing with (a) in FIG. 12, when a shooting operation is detected (e.g., tapping the photographing control 307), one or two images are captured. If one image is captured, it may be the image block in mark box 1002, i.e., the image obtained by shooting bird 2 on the left enlarged. If two images are captured, one may be the image block in mark box 1002 and the other may be the first image, i.e., the complete image. Alternatively, still taking (a) in FIG. 12, when the shooting operation is detected (e.g., tapping the photographing control 307), the number of all mark boxes in the first window 1001, or the number of selected (or highlighted) mark boxes among them, is determined; assuming the determined number of mark boxes is N (N = 2), then N + 1 images are captured: one of them is the first image, i.e., the complete image, and the N other images correspond to the image blocks in the N mark boxes.
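The N + 1 capture result described above can be sketched as follows. This is an illustrative sketch assuming a nested-list image and (x, y, w, h) mark boxes; the real implementation would crop full-resolution sensor frames and run them through the ISP.

```python
def capture(first_image, mark_boxes):
    """On shutter press, return the complete first image plus one
    cropped (enlarged-subject) image per selected mark box, i.e.
    N boxes -> N + 1 images, as described in the text."""
    def crop(img, box):
        x, y, w, h = box
        return [row[x:x + w] for row in img[y:y + h]]
    return [first_image] + [crop(first_image, b) for b in mark_boxes]
```

With two selected mark boxes, the gallery would thus receive three images: the full frame and the two subject crops.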
需要说明的是,存在一种情况,电子设备未识别出目标拍摄对象(即图2中S203未识别出目标拍摄对象)。这种情况下,电子设备可以按照一般的变焦拍摄流程处理,即将第一图像上中心区域内的图像块放大显示。举例来说,如果没有识别出目标拍摄对象,可以显示如图14中的(a)所示的拍摄界面,该拍摄界面中显示的预览图像是第一图像上中心区域内的图像块。或者,在未识别出的目标拍摄对象的情况下,拍摄界面中还可以显示第一窗口,请参见图14中的(b),第一窗口1401中显示第一图像且第一图像上显示标记框1402,标记框1402用于指示中心区域的位置。当电子设备位置移动时,第一窗口1401上的标记框1402的位置变化。这样的话,用户可以通过第一窗口1401中标记框1402的位置确定当前预览图像在第一图像上的哪一个位置,在寻找用户想要拍摄的对象的过程中给出指示作用。比如,中心区域内没有用户想要拍摄的对象时,用户通过第一窗口1401判断出用户想要拍摄的对象处于第一图像上标记框1402的左侧区域,这样,用户向左移动电子设备就可以很快的寻找到想要拍摄的对象,避免用户不知道想要拍摄的对象在哪个位置,而盲目的寻找。
可选的,如果将本申请实施例提供的变焦拍摄场景中将目标拍摄对象放大拍摄的方式作为第一种变焦拍摄模式。将前面提到的一般的变焦拍摄处理方式(即变焦拍摄时将中心区域内的图像块放大拍摄)作为第二种变焦拍摄模式。那么,电子设备可以默认使用第一种变焦拍摄模式或第二种变焦拍摄模式。或者,还可以根据用户的指定操作确定使用第一种变焦拍摄模式或第二种变焦拍摄模式。比如,电子设备的拍摄界面中显示切换控件,该切换控件用于实现第一种变焦拍摄模式和第二种变焦拍摄模式的切换。再比如,在检测到用于放大变焦倍率的操作时,显示提示信息,该提示信息用于提示用户选择第一种变焦拍摄模式还是第二种变焦拍摄模式。
实施例二
在前面的实施例一中,电子设备自动识别出第一图像上的目标拍摄对象,将至少一个目标拍摄对象放大拍摄。但是存在一种情况,电子设备自动识别出的目标拍摄对象并不是用户当下想要拍摄的对象。这样的话,使用实施例一的方式较难拍摄到用户想要拍摄的对象。本实施例二中,电子设备的变焦拍摄模式可以包括手动模式和自动模式。当选择自动 模式时,电子设备使用实施例一的方式处理,即自动识别第一图像上的目标拍摄对象,然后将至少一个目标拍摄对象放大拍摄。当选择手动模式时,电子设备可以不自动识别第一图像上的目标拍摄对象,而是提示用户手动选择目标拍摄对象。换言之,自动模式下目标拍摄对象是电子设备自动识别的,手动模式是目标拍摄对象是用户手动选择的。
For example, see FIG. 15, a schematic flowchart of the preview image display method in the zoom shooting scenario provided by Embodiment 2. Compared with Embodiment 1, i.e., the steps in FIG. 2, FIG. 15 adds S202-1 between S202 and S203, namely determining whether the mode is manual or automatic. For example, see (a) in FIG. 16: when an operation for increasing the zoom ratio is detected, the shooting interface shown in (b) of FIG. 16 is displayed, containing a manual-mode button and an automatic-mode button. When the user is detected selecting the automatic mode, S203 and S204 are executed (see Embodiment 1 for S203 and S204). When an operation of the user selecting the manual mode is detected, S205 is executed, prompting the user to select a target shooting object. For example, continuing with (b) in FIG. 16, after the user selects the manual mode, the shooting interface shown in (c) of FIG. 16 is displayed with the prompt: please select a target shooting object. The electronic device then determines the target shooting object according to the user's operation. For example, when a circling operation is detected, the shooting object within the circled region is determined to be the target shooting object, and the shooting interface shown in (d) of FIG. 16 is displayed, where the preview image is the image block in the region circled by the user. In other words, during zoom shooting, the target shooting object manually selected by the user is enlarged for shooting.
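The manual-mode circling operation (S205) could map to a target subject by testing each recognized subject's box center against the circled area. Below is a minimal sketch under the assumption that the drawn gesture is approximated by the bounding rectangle of its touch points; all names are illustrative, not from the patent:

```python
def select_by_circle(subjects, circle_path):
    """Pick the subject whose box center falls inside the user's circling gesture.

    subjects: list of (name, (x, y, w, h)) boxes in image-pixel coordinates.
    circle_path: list of (x, y) touch points sampled along the drawn circle.
    The gesture is approximated by its bounding rectangle (a simplification).
    """
    xs = [p[0] for p in circle_path]
    ys = [p[1] for p in circle_path]
    left, right, top, bottom = min(xs), max(xs), min(ys), max(ys)
    for name, (x, y, w, h) in subjects:
        cx, cy = x + w / 2, y + h / 2  # center of the subject's marker box
        if left <= cx <= right and top <= cy <= bottom:
            return name
    return None  # nothing recognized inside the circled area
```

A real implementation would likely use the exact gesture polygon and handle multiple subjects inside the circle; the sketch only shows the mapping from gesture to target subject.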
FIG. 17 is a block diagram of the software structure of an electronic device according to an embodiment of this application.
As shown in FIG. 17, the software structure of the electronic device may be a layered architecture; for example, the software may be divided into several layers, each with a clear role and division of labor, and the layers communicate through software interfaces. Taking the Android system as an example, it may include three layers: from top to bottom, the application layer, the hardware abstraction layer (HAL), and the hardware layer. Note that a three-layer Android system is used here for illustration; in practice, the Android system may include more or fewer layers. For example, a kernel layer may also be included between the hardware layer and the HAL, or an application framework layer (framework, FWK) may be included between the application layer and the HAL, which is not limited in this application.
The application layer may include a series of application packages, such as a camera application, settings, a skin module, a user interface (UI), and third-party applications. The third-party applications may include Gallery, Calendar, Phone, Maps, Navigation, WLAN, Bluetooth, Music, Video, Messages, and the like. Only the camera application is shown in FIG. 17.
The hardware abstraction layer is an interface layer between the operating system and the hardware circuits. Its purpose is to abstract the hardware, providing the operating system with a virtual hardware platform so that the system is hardware-independent and can be ported across multiple platforms. The hardware abstraction layer includes a camera HAL, which implements information exchange between the camera application in the application layer and the hardware in the hardware layer. The HAL layer also includes a subject recognition module, a preview cropping module, and a pre-display (preview) module. The subject recognition module is used to recognize the target shooting objects in an image captured by the camera (for example, the ISP output image); it may also set marker boxes to mark the recognized target shooting objects, and determine the first region based on a marker box (the process of determining the first region is described later). The preview cropping module is used to crop the image block within the first region from the image captured by the camera (for example, the ISP output image), to serve as the preview image. The pre-display module is used to send the image block cropped by the preview cropping module to the display in the hardware layer for display.
The hardware layer may include various sensors, such as an image sensor and an image signal processor (ISP). The image sensor may refer to a photosensitive element disposed in the camera: light passes through the lens onto the photosensitive element to form an electrical signal, which is passed to the ISP. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and passes it to the ISP, which converts the electrical signal into an image visible to the naked eye (i.e., a digital image signal). The ISP may also apply algorithmic optimization to image noise, brightness, and skin tone, and may optimize parameters of the shooting scene such as exposure and color temperature. For example, the ISP outputs the digital image signal to a DSP for processing, and the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the ISP may be disposed in the camera.
The following describes the preview image display method in the zoom shooting scenario provided by the embodiments of this application with reference to the software architecture shown in FIG. 17.
FIG. 18 is a schematic flowchart of the preview image display method in the zoom scenario provided by an embodiment of this application. As shown in FIG. 18, the procedure includes the following steps:
S1: The camera application detects a start instruction, which instructs starting the camera application.
S2: The camera application sends the start instruction to the camera HAL.
S3: The camera HAL sends the start instruction to the ISP.
S4: The ISP produces the first image.
S5: The ISP sends the first image to the display.
S6: The display shows the first preview image, which is the first image.
S7: The camera application detects a trigger condition.
S8: The camera application sends a capability query request to the camera HAL, requesting to query whether the electronic device supports the first auxiliary preview mode.
S9: The camera HAL sends query feedback to the camera application.
Optionally, S8-S9 are optional steps that may or may not be executed.
S10: The camera application sends a start instruction to the subject recognition module to start the subject recognition algorithm.
S11: The subject recognition module recognizes the target shooting objects in the first image.
S12: The subject recognition module determines the first target shooting object.
S13: The subject recognition module sends the location of the first target shooting object to the preview cropping module. For example, the first region is determined based on the first target shooting object, and the location of the first region is sent to the preview cropping module.
S14: The preview cropping module performs image cropping.
S15: The preview cropping module sends the cropped image block to the display.
S16: The display shows the second preview image, which is the image block within the first region of the first image.
S17: The camera application detects a target shooting object switching operation.
S18: The camera application sends a target shooting object switching instruction to the subject recognition module.
S19: The subject recognition module switches from the first target shooting object to the second target shooting object, and sends the location of the second target shooting object to the preview cropping module. For example, the second region is determined based on the second target shooting object, and the location of the second region is sent to the preview cropping module.
S20: The preview cropping module performs image cropping.
S21: The preview cropping module sends the cropped image block to the display.
S22: The display shows the third preview image, which is the image block within the second region of the first image.
S17 to S22 above may be skipped, which is why they are shown with dashed lines in FIG. 18.
S23: The camera application detects a photographing instruction.
S24: The camera application sends the photographing instruction, together with the location of the target shooting object corresponding to the current preview image, to the ISP.
For example, if the photographing instruction is detected while the second preview image is displayed in the shooting interface, the current preview image is the second preview image and the corresponding target shooting object is the first target shooting object, so the location sent may be that of the first region. If the photographing instruction is detected while the third preview image is displayed, the current preview image is the third preview image and the corresponding target shooting object is the second target shooting object, so the location sent may be that of the second region.
S25: The ISP captures an image.
S26: The ISP sends the captured image and the location to the preview cropping module.
S27: The preview cropping module crops the captured image to obtain a cropped image.
If the location of the first region is received, the image block within the first region is cropped out as the captured result; if the location of the second region is received, the image block within the second region is cropped out as the captured result.
S28: The preview cropping module sends the cropped image to the camera application.
S29: The camera application stores the cropped image.
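Steps S26-S27 above (the ISP hands the captured frame and a region location to the preview cropping module, which cuts out the image block) reduce to a simple crop. Below is a minimal sketch, assuming the frame is a row-major 2D array of pixels and the region is an `(x, y, w, h)` tuple in pixels; it is an illustration, not the module's actual implementation:

```python
def crop_region(image, region):
    """Cut the image block within `region` (x, y, w, h) out of the full frame.

    `image` is a row-major 2D list of pixel values, standing in for the frame
    the preview cropping module would receive from the ISP.
    """
    x, y, w, h = region
    return [row[x:x + w] for row in image[y:y + h]]
```

The same helper serves both the preview path (S14, S20) and the capture path (S27); only the source frame and region differ.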
Based on the same concept, FIG. 19 is a schematic flowchart of a preview image display method in a zoom shooting scenario provided by an embodiment of this application. The method is applicable to an electronic device, such as a mobile phone or a tablet computer. The procedure includes the following steps:
S1901: Start the camera application in the electronic device; the camera on the electronic device captures the first image.
For the implementation principle of S1901, refer to the description of S201 in FIG. 2, which is not repeated here.
S1902: In the zoom shooting mode, recognize the target shooting objects in the first image.
In one possible implementation, the electronic device recognizes the target shooting objects in the first image when a trigger condition is detected (for example, an operation for increasing the zoom ratio). This helps save power.
In another possible implementation, the electronic device may recognize the target shooting objects in the first image as soon as the camera captures it, and execute S1903 when a trigger condition is detected (for example, an operation for increasing the zoom ratio).
S1903: Display the first preview image, which is the preview image corresponding to at least one target shooting object in the first image.
The at least one target shooting object may be the shooting object occupying the largest or smallest area in the first image; or a shooting object near the central region or near the edge region of the first image; or a shooting object in the first image that the user is interested in (for the principle of determining objects of interest to the user, see above); or a target shooting object specified by the user. The above are examples of several ways of determining the at least one target shooting object; the embodiments of this application are not limited to these ways, and the at least one target shooting object may be determined in other ways.
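Two of the selection strategies listed above (largest occupied area, and proximity to the image center) can be sketched as follows; the function name and `strategy` parameter are illustrative, not from the patent:

```python
import math

def pick_target(subjects, image_w, image_h, strategy="largest"):
    """Choose a default target among recognized subjects ((x, y, w, h) boxes).

    "largest": the subject occupying the largest area in the first image.
    "center":  the subject whose box center is closest to the image center.
    """
    if strategy == "largest":
        return max(subjects, key=lambda b: b[2] * b[3])
    cx, cy = image_w / 2, image_h / 2
    return min(subjects, key=lambda b: math.hypot(b[0] + b[2] / 2 - cx,
                                                  b[1] + b[3] / 2 - cy))
```

The "user interest" and "user specified" strategies described in the text would replace the key function with a learned score or an explicit selection, respectively.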
The size of the at least one target shooting object in the first preview image is larger than its size in the first image. This can be understood as displaying the at least one target shooting object of the first image in an enlarged manner. For example, see FIG. 8: the first image includes trees and two birds; during zoom shooting, the electronic device displays the preview image corresponding to bird 2 in the first image, that is, an enlarged bird 2.
S1904: While displaying the first preview image, also display a first window, in which the first image and a first marker are displayed, the first marker being used to mark the at least one target shooting object on the first image.
For example, see (a) in FIG. 10: when the enlarged bird 2 is displayed, bird 2 is marked in the first window. The user can determine, from the first marker in the first window, which target shooting object on the first image the current preview image (i.e., the first preview image) corresponds to, providing a better user experience.
S1904 is an optional step that may or may not be executed, and is therefore shown with a dashed line in the figure.
For example, a second marker may also be displayed in the first window, used to mark target shooting objects in the first image other than the at least one target shooting object, the first marker being different from the second marker.
For example, see (a) in FIG. 12: when the enlarged bird 2 is displayed, marker 1002 and marker 1003 are displayed in the first window 1001, where marker 1002 marks bird 2 on the first image and marker 1003 marks bird 1, and the two markers are different (for example, marker 1002 is bold and marker 1003 is not). In this way, the user can distinguish which target shooting object on the first image the current preview image (i.e., the first preview image) corresponds to.
In some embodiments, the first image includes a first target shooting object and a second target shooting object, and the first preview image is the preview image corresponding to the first target shooting object; when a target shooting object switching operation is detected, a second preview image is displayed, which is the preview image corresponding to the second target shooting object. The size of the second target shooting object in the second preview image is larger than its size in the first image.
For example, see FIG. 8 and FIG. 9: in FIG. 8 the electronic device displays the enlarged bird 2, and when a target shooting object switching operation is detected, the enlarged bird 1 is displayed, as in FIG. 9. That is, the first target shooting object was originally enlarged for shooting, and through the switching operation the second target shooting object is enlarged instead; the user does not need to move the electronic device to find the target shooting object, which makes the operation convenient.
Optionally, while the electronic device displays the second preview image, a second marker is displayed in the first window, used to mark the second target shooting object on the first image. Optionally, when the first window displays the second marker, the first marker (used to mark the first target shooting object on the first image) may be hidden or displayed differently from the second marker.
For example, see (a) in FIG. 10: when the enlarged bird 2 is displayed, bird 2 is marked in the first window (for example, with a marker box); when the enlarged bird 1 is displayed, bird 1 is marked in the first window (for example, with another marker box) and the marking of bird 2 is removed, as in (b) of FIG. 10.
As another example, see (a) in FIG. 12: when the enlarged bird 2 is displayed, marker 1002 for bird 2 is highlighted in the first window relative to marker 1003 for bird 1; when the enlarged bird 1 is displayed, marker box 1003 is highlighted relative to marker 1002, as in (b) of FIG. 12.
In other embodiments, when an operation for adding a target shooting object to the preview image is detected, a third preview image is displayed, which is the preview image corresponding to the first target shooting object and the second target shooting object; the sizes of the first and second target shooting objects in the third preview image are larger than their sizes in the first image.
For example, see FIG. 13B: under the operation for adding a target shooting object to the preview image, the electronic device displays the enlarged bird 1 and bird 2.
Alternatively, when an operation for increasing the number of target shooting objects in the preview image is detected, the first preview image is displayed in a first area on the display of the electronic device, and a fourth preview image is displayed in a second area, the fourth preview image being the preview image corresponding to the second target shooting object; the size of the second target shooting object in the fourth preview image is larger than its size in the first image.
For example, see FIG. 13A: under the operation for increasing the number of target shooting objects in the preview image, the electronic device displays a split screen, with the enlarged bird 2 in the first area and the enlarged bird 1 in the second area.
There is a case in which no target shooting object is recognized in the first image. In this case, the electronic device displays a fifth preview image, which is the preview image corresponding to the image block in the central region of the first image. For example, see (b) in FIG. 14: the electronic device displays the image block in the central region of the first image in an enlarged manner.
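The fallback of enlarging the central region can be sketched as computing a centered crop window whose size is the full frame divided by the zoom ratio. This formula is an illustrative assumption; the patent does not specify how the central region is computed:

```python
def center_region(image_w, image_h, zoom):
    """Centered (x, y, w, h) crop window for the general zoom path, used when
    no target subject is recognized. A zoom of 2 keeps the middle half of
    each dimension, so the cropped block appears enlarged 2x on screen."""
    w, h = image_w / zoom, image_h / zoom
    return ((image_w - w) / 2, (image_h - h) / 2, w, h)
```

For instance, on a 4000x3000 frame at 2x zoom, the block within the returned window would be cropped out and scaled back up to the preview size.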
In some embodiments, a shooting instruction is detected; in response to the shooting instruction, the first image and a second image are captured, the second image being the captured image corresponding to the first preview image. That is, during zoom shooting, if the shoot button is tapped, a complete image (i.e., the first image) is captured as well as an enlarged image of the at least one target shooting object (i.e., the second image), which is convenient for the user to compare and provides a better experience.
Based on the same concept, FIG. 20 shows an electronic device 2000 provided by this application. The electronic device 2000 may be the mobile phone described above. As shown in FIG. 20, the electronic device 2000 may include: one or more processors 2001; one or more memories 2002; a communication interface 2003; and one or more computer programs 2004, where the above components may be connected through one or more communication buses 2005. The one or more computer programs 2004 are stored in the memory 2002 and configured to be executed by the one or more processors 2001; the one or more computer programs 2004 include instructions, which may be used to perform the relevant steps of the mobile phone in the corresponding embodiments above. The communication interface 2003 is used for communication with other devices; for example, the communication interface may be a transceiver.
In the embodiments provided above, the methods provided by the embodiments of this application are described from the perspective of the electronic device (for example, a mobile phone) as the execution body. To implement the functions in the methods provided by the embodiments of this application, the electronic device may include a hardware structure and/or software modules, implementing the above functions in the form of a hardware structure, software modules, or a combination of both. Whether a particular one of the above functions is performed by a hardware structure, a software module, or a combination of both depends on the specific application and design constraints of the technical solution.
As used in the above embodiments, depending on the context, the term "when" or "after" may be interpreted to mean "if", "after", "in response to determining", or "in response to detecting". Similarly, depending on the context, the phrase "upon determining" or "if (a stated condition or event) is detected" may be interpreted to mean "if it is determined", "in response to determining", "upon detecting (the stated condition or event)", or "in response to detecting (the stated condition or event)". In addition, in the above embodiments, relational terms such as first and second are used to distinguish one entity from another, without limiting any actual relationship or order between these entities.
Reference in this specification to "one embodiment", "some embodiments", and the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of this application. Thus, the phrases "in one embodiment", "in some embodiments", "in some other embodiments", "in still other embodiments", and the like appearing in different places in this specification do not necessarily all refer to the same embodiment, but rather mean "one or more but not all embodiments", unless specifically emphasized otherwise. The terms "include", "comprise", "have", and their variants all mean "including but not limited to", unless specifically emphasized otherwise.
The above embodiments may be implemented wholly or partially by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present invention are wholly or partially generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) means. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)). Where no conflict arises, the solutions of the above embodiments may be used in combination.
It should be noted that a portion of this patent application document contains material subject to copyright protection. The copyright owner reserves all copyright rights, except for the making of copies of the patent documents or recorded patent document contents of the Patent Office.

Claims (15)

  1. A preview image display method in a zoom shooting scenario, applied to an electronic device, comprising:
    starting a camera application in the electronic device, wherein a camera on the electronic device captures a first image;
    in a zoom shooting mode, recognizing target shooting objects in the first image; and
    displaying a first preview image, wherein the first preview image is a preview image corresponding to at least one target shooting object in the first image.
  2. The method according to claim 1, wherein a size of the at least one target shooting object in the first preview image is larger than a size of the at least one target shooting object in the first image.
  3. The method according to claim 1 or 2, further comprising:
    displaying a first window while displaying the first preview image, wherein the first image and a first marker are displayed in the first window, and the first marker is used to mark the at least one target shooting object on the first image.
  4. The method according to claim 1 or 2, wherein the first image comprises a first target shooting object and a second target shooting object, and the first preview image is a preview image corresponding to the first target shooting object; and
    when a target shooting object switching operation is detected, displaying a second preview image, wherein the second preview image is a preview image corresponding to the second target shooting object.
  5. The method according to claim 4, wherein a size of the second target shooting object in the second preview image is larger than a size of the second target shooting object in the first image.
  6. The method according to claim 4 or 5, further comprising:
    displaying a first window while displaying the second preview image, wherein the first image and a second marker are displayed in the first window, and the second marker is used to mark the second target shooting object on the first image;
    wherein when the first window displays the second marker, a first marker in the first window is hidden, and the first marker is used to mark the first target shooting object on the first image.
  7. The method according to any one of claims 1-6, wherein the first image comprises a first target shooting object and a second target shooting object, and the first preview image is a preview image corresponding to the first target shooting object;
    when an operation for adding a target shooting object to the preview image is detected, displaying a third preview image, wherein the third preview image is a preview image corresponding to the first target shooting object and the second target shooting object;
    wherein sizes of the first target shooting object and the second target shooting object in the third preview image are larger than sizes of the first target shooting object and the second target shooting object in the first image.
  8. The method according to any one of claims 1-6, wherein the first image comprises a first target shooting object and a second target shooting object, and the first preview image is a preview image corresponding to the first target shooting object;
    when an operation for increasing the number of target shooting objects in the preview image is detected, displaying the first preview image in a first area on a display of the electronic device and displaying a fourth preview image in a second area, wherein the fourth preview image is a preview image corresponding to the second target shooting object;
    wherein a size of the second target shooting object in the fourth preview image is larger than a size of the second target shooting object in the first image.
  9. The method according to any one of claims 1-8, further comprising:
    when no target shooting object is recognized in the first image, displaying a fifth preview image, wherein the fifth preview image is a preview image corresponding to an image block in a central region of the first image.
  10. The method according to any one of claims 1-9, further comprising:
    detecting a shooting instruction; and
    in response to the shooting instruction, capturing the first image and a second image, wherein the second image is a captured image corresponding to the first preview image.
  11. The method according to any one of claims 3-10, further comprising:
    hiding the first window when a window hiding operation is detected; and
    displaying the first window when a window recall operation is detected.
  12. An electronic device, comprising:
    a processor, a memory, and one or more programs;
    wherein the one or more programs are stored in the memory and comprise instructions that, when executed by the processor, cause the electronic device to perform the method steps according to any one of claims 1-11.
  13. A computer-readable storage medium, configured to store a computer program that, when run on a computer, causes the computer to perform the method according to any one of claims 1 to 11.
  14. A computer program product, comprising a computer program that, when run on a computer, causes the computer to perform the method according to any one of claims 1-11.
  15. A graphical user interface on an electronic device, wherein the electronic device has a display, a memory, and a processor configured to execute one or more computer programs stored in the memory, and the graphical user interface comprises a graphical user interface displayed when the electronic device performs the method according to any one of claims 1-11.
PCT/CN2022/088235 2021-04-27 2022-04-21 一种变焦拍摄场景下的预览图像显示方法与电子设备 WO2022228274A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US18/557,205 US20240244311A1 (en) 2021-04-27 2022-04-21 Method for Displaying Preview Image in Zoom Shooting Scene and Electronic Device
EP22794745.4A EP4311224A1 (en) 2021-04-27 2022-04-21 Preview image display method in zoom photographing scenario, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110460565.8 2021-04-27
CN202110460565.8A CN115250327A (zh) 2021-04-27 2021-04-27 一种变焦拍摄场景下的预览图像显示方法与电子设备

Publications (1)

Publication Number Publication Date
WO2022228274A1 true WO2022228274A1 (zh) 2022-11-03

Family

ID=83697394

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/088235 WO2022228274A1 (zh) 2021-04-27 2022-04-21 一种变焦拍摄场景下的预览图像显示方法与电子设备

Country Status (4)

Country Link
US (1) US20240244311A1 (zh)
EP (1) EP4311224A1 (zh)
CN (1) CN115250327A (zh)
WO (1) WO2022228274A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117135452A (zh) * 2023-03-31 2023-11-28 荣耀终端有限公司 拍摄方法和电子设备

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117119276A (zh) * 2023-04-21 2023-11-24 荣耀终端有限公司 一种水下拍摄方法及电子设备
CN116567385A (zh) * 2023-06-14 2023-08-08 深圳市宗匠科技有限公司 图像采集方法及图像采集装置

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3291533A1 (en) * 2016-09-06 2018-03-07 LG Electronics Inc. Terminal and controlling method thereof
CN110149482A (zh) * 2019-06-28 2019-08-20 Oppo广东移动通信有限公司 对焦方法、装置、电子设备和计算机可读存储介质
CN110365906A (zh) * 2019-07-29 2019-10-22 维沃移动通信有限公司 拍摄方法及移动终端
CN110460773A (zh) * 2019-08-16 2019-11-15 Oppo广东移动通信有限公司 图像处理方法和装置、电子设备、计算机可读存储介质
CN110830713A (zh) * 2019-10-30 2020-02-21 维沃移动通信有限公司 一种变焦方法及电子设备
CN111010506A (zh) * 2019-11-15 2020-04-14 华为技术有限公司 一种拍摄方法及电子设备
CN112135046A (zh) * 2020-09-23 2020-12-25 维沃移动通信有限公司 视频拍摄方法、视频拍摄装置及电子设备
CN112333380A (zh) * 2019-06-24 2021-02-05 华为技术有限公司 一种拍摄方法及设备

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3291533A1 (en) * 2016-09-06 2018-03-07 LG Electronics Inc. Terminal and controlling method thereof
CN112333380A (zh) * 2019-06-24 2021-02-05 华为技术有限公司 一种拍摄方法及设备
CN110149482A (zh) * 2019-06-28 2019-08-20 Oppo广东移动通信有限公司 对焦方法、装置、电子设备和计算机可读存储介质
CN110365906A (zh) * 2019-07-29 2019-10-22 维沃移动通信有限公司 拍摄方法及移动终端
CN110460773A (zh) * 2019-08-16 2019-11-15 Oppo广东移动通信有限公司 图像处理方法和装置、电子设备、计算机可读存储介质
CN110830713A (zh) * 2019-10-30 2020-02-21 维沃移动通信有限公司 一种变焦方法及电子设备
CN111010506A (zh) * 2019-11-15 2020-04-14 华为技术有限公司 一种拍摄方法及电子设备
CN112135046A (zh) * 2020-09-23 2020-12-25 维沃移动通信有限公司 视频拍摄方法、视频拍摄装置及电子设备

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117135452A (zh) * 2023-03-31 2023-11-28 荣耀终端有限公司 拍摄方法和电子设备

Also Published As

Publication number Publication date
EP4311224A1 (en) 2024-01-24
CN115250327A (zh) 2022-10-28
US20240244311A1 (en) 2024-07-18

Similar Documents

Publication Publication Date Title
WO2021093793A1 (zh) 一种拍摄方法及电子设备
WO2020177583A1 (zh) 一种图像裁剪方法和电子设备
WO2021052232A1 (zh) 一种延时摄影的拍摄方法及设备
CN110225244B (zh) 一种图像拍摄方法与电子设备
WO2022228274A1 (zh) 一种变焦拍摄场景下的预览图像显示方法与电子设备
WO2020073959A1 (zh) 图像捕捉方法及电子设备
WO2021129198A1 (zh) 一种长焦场景下的拍摄方法及终端
CN113497881B (zh) 图像处理方法及装置
WO2021143269A1 (zh) 一种长焦场景下的拍摄方法及移动终端
CN110430357B (zh) 一种图像拍摄方法与电子设备
US10893137B2 (en) Photography guiding method, device, and system
CN113596316B (zh) 拍照方法及电子设备
CN113660408B (zh) 一种视频拍摄防抖方法与装置
WO2023273323A1 (zh) 一种对焦方法和电子设备
CN113810604B (zh) 文档拍摄方法、电子设备和存储介质
US20230056332A1 (en) Image Processing Method and Related Apparatus
WO2022266907A1 (zh) 处理方法、终端设备及存储介质
WO2021179186A1 (zh) 一种对焦方法、装置及电子设备
CN114298883A (zh) 图像处理方法、智能终端及存储介质
WO2022206589A1 (zh) 一种图像处理方法以及相关设备
CN112989092A (zh) 一种图像处理方法及相关装置
WO2020077544A1 (zh) 一种物体识别方法和终端设备
WO2022228010A1 (zh) 一种生成封面的方法及电子设备
WO2022222866A1 (zh) 一种内容显示方法与电子设备
CN115640414A (zh) 图像的显示方法及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22794745

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2022794745

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 18557205

Country of ref document: US

ENP Entry into the national phase

Ref document number: 2022794745

Country of ref document: EP

Effective date: 20231019

NENP Non-entry into the national phase

Ref country code: DE