CN110225244B - Image shooting method and electronic equipment - Google Patents

Image shooting method and electronic equipment

Info

Publication number
CN110225244B
CN110225244B · CN201910402906.9A · CN201910402906A
Authority
CN
China
Prior art keywords
image
shooting
mobile phone
user
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910402906.9A
Other languages
Chinese (zh)
Other versions
CN110225244A (en)
Inventor
吴钢
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201910402906.9A
Publication of CN110225244A
Priority to PCT/CN2020/090366 (WO2020228792A1)
Application granted
Publication of CN110225244B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/62: Control of parameters via user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Telephone Function (AREA)

Abstract

The application provides an image shooting method and an electronic device. The method comprises the following steps: detecting an input operation, starting a camera, and displaying a viewing interface, wherein the viewing interface comprises a preview image; detecting a first operation on a shooting control, and displaying a first image, wherein the first image is an image shot based on initial shooting parameters, and the weight information and shooting adjustment parameters of each photographic subject are displayed on the first image; and detecting a second operation for storing the first image, and storing a second image, wherein the second image is obtained by adjusting the first image according to the weight information and shooting adjustment parameters of each photographic subject. With this method, the user no longer has to shoot an image on the electronic device, save it, find it in the gallery, and then retouch it; that is, the user can obtain an image that better matches his or her aesthetic preferences in fewer operation steps, the operation is simple, and the user experience is improved.

Description

Image shooting method and electronic equipment
Technical Field
The present application relates to the field of terminal technologies, and in particular, to an image capturing method and an electronic device.
Background
With the advancement of communication technologies, the functions of electronic devices continue to improve. Among them, the shooting function has become one of the functions users use most frequently.
For example, when a user travels and encounters a beautiful landscape, the user wants to capture a high-quality image as a memento. However, most users, not being professional photographers or photography enthusiasts, do not understand photographic technique and do not know how to take a good photo. The resulting photos look flat and lifeless, which dampens the user's mood and degrades the user experience.
Disclosure of Invention
The application aims to provide an image shooting method and an electronic device that enable the electronic device to shoot images meeting the user's requirements, thereby improving the user experience.
In a first aspect, an image shooting method is provided that may be performed by an electronic device, such as a mobile phone or an iPad. The method comprises the following steps: detecting an input operation; in response to the input operation, starting a camera and displaying a viewing interface, wherein the viewing interface comprises a preview image; detecting a first operation on a shooting control; in response to the first operation, displaying a first image, which is an image shot based on initial shooting parameters and on which the weight information and shooting adjustment parameters of each photographic subject are displayed; detecting a second operation for storing the first image; and in response to the second operation, storing a second image, wherein the second image is obtained by adjusting the first image according to the weight information and shooting adjustment parameters of each photographic subject.
In the embodiment of the application, after the electronic device detects an operation on the shooting control, it shoots a first image. The electronic device does not directly store the first image; instead, before storing it, the electronic device can offer the user an opportunity to retouch the first image, prompting the user with the weight and shooting adjustment parameters of each photographic subject on the first image for reference. When the electronic device detects the operation for storing the first image, it stores a second image obtained by retouching the first image. With this method, the user no longer has to shoot an image on the electronic device, save it, find it in the gallery, and then retouch it; that is, the user can obtain an image that better matches his or her aesthetic preferences in fewer operation steps, the operation is simple, and the user experience is improved.
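As a rough, hypothetical sketch (not the patented implementation), the shoot-annotate-adjust-save flow described above can be reduced to the following, where an image is simplified to a single brightness value and the adjustment to a weighted brightness offset; all names and numbers are illustrative assumptions:

```python
def shoot_and_save(initial_brightness, subjects):
    """Sketch of the flow: the "first image" is shot with initial
    parameters; the stored "second image" is the first image adjusted
    according to each subject's weight and adjustment parameter.
    An image is reduced here to a single brightness value, and the
    adjustment to a weighted brightness offset (both simplifications)."""
    first_image = {"brightness": initial_brightness}  # shot with initial parameters
    # Weighted sum of the per-subject brightness adjustments.
    offset = sum(s["weight"] * s["brightness_delta"] for s in subjects)
    second_image = {"brightness": first_image["brightness"] + offset}
    return first_image, second_image

# Hypothetical weights and adjustments for two photographic subjects.
subjects = [
    {"name": "person", "weight": 0.7, "brightness_delta": +10},
    {"name": "flower", "weight": 0.3, "brightness_delta": -10},
]
first, second = shoot_and_save(100, subjects)
# second image brightness: 100 + 0.7*10 + 0.3*(-10) = 104
```

The display-and-review step between shooting and saving is omitted here; in the claimed method it is where the weights and adjustment parameters are shown to the user.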
In one possible design, before detecting the second operation for storing the first image, the electronic device may further detect a modification operation on the weight information and/or the shooting adjustment parameter of a first photographic subject; in response to the modification operation, the electronic device processes the first image to obtain a third image, wherein the third image is obtained by adjusting the first image according to the modified weight information and/or shooting adjustment parameters. In this case, storing the second image in response to the second operation includes: storing the third image in response to the second operation.
In the embodiment of the application, after the electronic device detects an operation on the shooting control, it shoots a first image. Before storing the first image, the electronic device can offer the user an opportunity to retouch it, prompting the user with the weight and shooting adjustment parameters of each photographic subject on the first image for reference. The user can modify the weight and shooting adjustment parameters of each photographic subject, and the electronic device can adjust the first image based on the modified weight and/or shooting adjustment parameters to obtain a third image; when the electronic device detects the operation for storing the first image, it stores the third image. With this method, the user no longer has to shoot an image on the electronic device, save it, find it in the gallery, and then retouch it; that is, the user can obtain an image that better matches his or her aesthetic preferences in fewer operation steps, the operation is simple, and the user experience is improved.
In one possible design, the electronic device may also save the first image; wherein a first mark is set on the second image, and the first mark is not set on the first image.
In this embodiment of the application, the electronic device may store both the first image and the second image, that is, the electronic device may store the image before retouching as well as the image after retouching. For the user's convenience when browsing, the second image is provided with a first mark, where the first mark is used to indicate that the second image is a retouched image, and the first image may be left without the first mark. It should be understood that the first mark may be an icon and/or text, which is not limited in this embodiment of the present application.
In one possible design, the weight of each photographic subject is determined according to the area ratio each photographic subject occupies in the first image; or is preset; or is determined according to a user selection operation; or is determined according to the user's operation behavior on images stored in the electronic device.
It should be understood that the electronic device may self-learn the weight of each photographic subject from the user's operation behavior on the images stored in the electronic device, so that when the electronic device adjusts an image based on the self-learned weights, the resulting image better matches the user's aesthetic preferences.
In one possible design, determining the weight of each photographic subject according to the user's operation behavior on images stored in the electronic device includes: determining the weight of each photographic subject according to the number of times each photographic subject has been retouched in the stored images; or the number of times each photographic subject has been cropped in the stored images; or the number of stored images that contain each photographic subject.
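The weighting options above (area ratio within the image, or counts learned from the user's stored images) could look roughly as follows; the normalization to a sum of 1 is an assumption made for this sketch, not something stated in the text:

```python
def weights_from_areas(subject_areas, image_area):
    """Option 1: weight = area the subject occupies / total image area."""
    return {name: area / image_area for name, area in subject_areas.items()}

def weights_from_history(retouch_counts):
    """Option 2 (self-learned): subjects the user retouches more often in
    stored images get a higher weight. Normalizing the counts so they sum
    to 1 is an assumption made purely for illustration."""
    total = sum(retouch_counts.values())
    if total == 0:
        # No history yet: fall back to equal weights.
        return {name: 1 / len(retouch_counts) for name in retouch_counts}
    return {name: count / total for name, count in retouch_counts.items()}

area_weights = weights_from_areas({"person": 300, "flower": 100}, image_area=1000)
history_weights = weights_from_history({"person": 8, "flower": 2})
# area_weights:    person 0.3, flower 0.1
# history_weights: person 0.8, flower 0.2
```

The same counting scheme would apply to crop counts or per-subject image counts, simply by swapping in the relevant tallies.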
In one possible design, the shooting adjustment parameter of each photographic subject is preset, or is determined according to the image parameters the user adjusted when retouching each photographic subject in the images stored in the electronic device.
It should be understood that the electronic device may self-learn the shooting adjustment parameters of each photographic subject from the user's operation behavior on the images stored in the electronic device, so that when the electronic device adjusts an image based on the self-learned shooting adjustment parameters, the resulting image better matches the user's aesthetic preferences.
In one possible design, displaying the weight information and shooting adjustment parameters of each photographic subject on the first image includes: displaying the weight information and shooting adjustment parameters of each photographic subject in a blank area of the first image; or displaying the weight information and shooting adjustment parameters of each photographic subject in the area of the first image where that subject is located.
In this embodiment of the present application, the weight information and shooting adjustment parameters of each photographic subject may be displayed in a blank area of the first image, or in the area of the first image where each photographic subject is located, which is not limited in this embodiment of the present application.
In one possible design, the electronic device further detects a third operation before detecting the second operation for storing the first image; canceling display of the weight information and the photographing adjustment parameter of each of the photographic subjects on the first image in response to the third operation; in response to the second operation, storing a second image, including: in response to the second operation, storing the first image.
In the embodiment of the application, after the electronic device detects an operation on the shooting control, it shoots a first image. Before storing the first image, the electronic device can offer the user an opportunity to retouch it, prompting the user with the weight and shooting adjustment parameters of each photographic subject on the first image for reference. The user can cancel the display of the weight and shooting adjustment parameters on the first image, in which case the image the electronic device stores is the image before retouching, namely the first image. In this method, the electronic device lets the user choose whether or not to retouch the picture, which facilitates user operation and improves the user experience.
In a second aspect, an electronic device is also provided that includes at least one processor and a memory. The memory is used to store one or more computer programs; when the one or more computer programs stored in the memory are executed by the at least one processor, the electronic device is enabled to implement the method of the first aspect or any one of the possible designs of the first aspect.
In a third aspect, there is also provided an electronic device comprising means for performing the method of the first aspect or any one of the possible designs of the first aspect. These modules/units may be implemented by hardware, or by hardware executing corresponding software.
In a fourth aspect, a computer-readable storage medium is also provided, which comprises a computer program, which, when run on an electronic device, causes the electronic device to perform the first aspect or any one of the possible designs of the first aspect described above.
In a fifth aspect, there is also provided a program product, which, when run on an electronic device, causes the electronic device to perform the method of the first aspect or any one of the possible designs of the first aspect described above.
In a sixth aspect, a chip is further provided, where the chip is coupled to a memory in an electronic device, and is configured to call a computer program stored in the memory and execute a technical solution of any one of the first aspect and the possible designs of the first aspect of the embodiment of the present application; "coupled" in the context of this application means that two elements are joined to each other either directly or indirectly.
Drawings
Fig. 1 is a schematic diagram of a hardware structure of a mobile phone 100 according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a user graphical interface of the mobile phone 100 according to an embodiment of the present application;
fig. 3 is a schematic diagram of a user graphical interface of the mobile phone 100 according to an embodiment of the present application;
fig. 4 is a schematic diagram of a user graphical interface of the mobile phone 100 according to an embodiment of the present application;
fig. 5 is a schematic diagram of a user graphical interface of the mobile phone 100 according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating a flow of an image capturing method according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of a user graphical interface of the mobile phone 100 according to the embodiment of the present application;
fig. 8 is a schematic diagram illustrating a flow of an image capturing method according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Hereinafter, some terms in the embodiments of the present application are explained to facilitate understanding by those skilled in the art.
The electronic device related to the embodiments of the present application may be any electronic device having an image shooting function, such as a mobile phone, a digital camera, or an iPad, and may also be a desktop computer, a wearable device, or the like, which is not limited in the embodiments of the present application.
The shooting parameters related to the embodiments of the present application may be parameters set when the electronic device shoots an image, such as aperture, focal length, shutter, sensitivity, exposure, white balance, and the like.
The shooting adjustment parameter in the embodiments of the present application is the adjustment amount of a shooting parameter. Taking the focal length as an example, the shooting adjustment parameter is the adjustment amount of the focal length: if the focal length is 50 mm and the shooting adjustment parameter is +5 mm, the adjusted focal length is 55 mm.
It should be noted that, for ease of use, the shooting adjustment parameters displayed on the electronic device may be presented in steps. For example, color may be graded in steps: the initial step is 0, an increase of 2 steps is indicated by +2, and the corresponding shooting adjustment parameter is an increase of 0.25 seconds in exposure time; that is, when the exposure time is increased by 0.25 seconds while the electronic device shoots an image, the color in the shot image is raised by 2 steps. It is to be understood that the numerical values provided herein are exemplary and not limiting.
The electronic device adjusts the initial shooting parameters according to the shooting adjustment parameters to obtain the final shooting parameters. For example, taking the focal length as an example, if the initial focal length is 50 mm and the shooting adjustment parameter is +5 mm, the final focal length is 55 mm.
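Continuing the examples above (an initial focal length of 50 mm with a +5 mm adjustment, and +2 color steps corresponding to +0.25 s of exposure), a minimal sketch follows. The linear 0.125 s-per-step mapping is an assumed generalization of the single data point given in the text:

```python
def apply_adjustment(initial, delta):
    """Final shooting parameter = initial shooting parameter + adjustment."""
    return initial + delta

def steps_to_exposure_delta(steps, seconds_per_step=0.125):
    """Map user-facing steps to an exposure-time change. The text gives one
    data point (+2 steps -> +0.25 s); a linear 0.125 s per step is assumed
    here purely for illustration."""
    return steps * seconds_per_step

final_focal_length = apply_adjustment(50, 5)  # 50 mm + 5 mm = 55 mm
exposure_delta = steps_to_exposure_delta(2)   # +2 steps -> +0.25 s
```

The same additive rule would apply to any other shooting parameter (shutter, sensitivity, white balance) expressed as an initial value plus an adjustment amount.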
The preview image according to the embodiment of the present application is an image displayed in a viewing interface before the electronic device detects an operation on a shooting control.
A photographic subject in the embodiments of the present application is an object contained in an image, such as a person, a flower, a green plant, or an animal, which is not limited in the embodiments of the present application.
The terminology used in the following embodiments is for the purpose of describing particular embodiments only and is not intended to limit the application. In this application, the singular forms "a", "an", and "the" are intended to include expressions such as "one or more" unless the context clearly indicates otherwise. It should also be understood that in the embodiments of the present application, "one or more" means one, two, or more than two; "and/or" describes the association relationship of the associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The following describes electronic devices, Graphical User Interfaces (GUIs) for such electronic devices, and embodiments for using such electronic devices. In some embodiments of the present application, the electronic device may be a portable device that includes a camera, such as a cell phone, a tablet, a wearable device (e.g., a smart watch), and the like. Exemplary embodiments of the portable device include, but are not limited to, portable devices running various operating systems. The portable device may also be another type of portable device, as long as the image shooting function can be realized. It should also be understood that in some other embodiments of the present application, the electronic device may not be a portable device but, for example, a desktop computer capable of implementing the image shooting function.
Generally, an electronic device can support multiple applications (APPs), such as one or more of the following: a camera, a gallery, instant messaging applications, and the like. Instant messaging applications may be varied, such as WeChat, Tencent QQ, WhatsApp Messenger, Line, Kakao Talk, DingTalk, and so on. Through an instant messaging application, the user can send text, voice, pictures, video files, and various other files to other contacts, or conduct voice or video calls with other contacts.
The structure of the mobile phone is described below by taking the mobile phone as an example.
Referring to fig. 1, the mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be the neural center and command center of the mobile phone 100. The controller can generate an operation control signal according to an instruction operation code and a timing signal, to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving system efficiency.
The processor 110 may run the software code of the image shooting method provided by the embodiments of the present application to shoot an image that satisfies the user.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the mobile phone 100, and may also be used to transmit data between the mobile phone 100 and peripheral devices.
The charging management module 140 is configured to receive charging input from a charger. The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the mobile phone 100 can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the handset 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied to the handset 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to the mobile phone 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, the antenna 1 of the mobile phone 100 is coupled to the mobile communication module 150 and the antenna 2 is coupled to the wireless communication module 160, so that the mobile phone 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The display screen 194 is used to display the interfaces of applications and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the mobile phone 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The camera 193 is used to capture still images or video. The cameras 193 may include a front camera and a rear camera.
It should be understood that at least one camera, such as at least one front-facing camera, and/or at least one rear-facing camera, may be provided on the handset 100.
Internal memory 121 may be used to store one or more computer programs, including instructions. The processor 110 may cause the mobile phone 100 to perform the image capturing method provided in some embodiments of the present application by executing the above-mentioned instructions stored in the internal memory 121. The internal memory 121 may exemplarily include a program storage area and a data storage area. Wherein, the storage program area can store an operating system; the storage program area may also store one or more applications (e.g., gallery, contacts, etc.), and the like. The data storage area can store data (such as photos, contacts, etc.) created during use of the mobile phone 100, and the like. Further, the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage components, flash memory components, Universal Flash Storage (UFS), and the like. In some embodiments, the processor 110 may cause the mobile phone 100 to execute the image capturing method provided in the embodiments of the present application by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor 110.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the mobile phone 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as pictures, videos, and the like are saved in an external memory card.
Of course, the software code of the image shooting method provided in the embodiments of the present application may also be stored in the external memory, and the processor 110 may run the software code through the external memory interface 120 to perform the steps of the image shooting method, so as to shoot an image that satisfies the user.
It should be understood that images (still or moving), videos, and the like shot by the mobile phone 100 may be stored in the external memory, or may be stored in the external memory when the mobile phone 100 detects that certain conditions are met (e.g., the memory space of the internal memory is insufficient, or the user chooses to store the images, videos, and the like in the external memory).
The mobile phone 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playing, recording, etc.
The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The pressure sensor 180A is used for sensing a pressure signal, and converting the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194.
The gyro sensor 180B may be used to determine the motion attitude of the cellular phone 100. In some embodiments, the angular velocity of the handset 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for image stabilization during shooting.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, the handset 100 calculates altitude from the barometric pressure measured by the air pressure sensor 180C, to aid positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The handset 100 can detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the handset 100 is a flip phone, it may detect the opening and closing of the flip cover via the magnetic sensor 180D, and then set features such as automatic unlocking upon flip-open according to the detected open or closed state of the holster or flip cover.
The acceleration sensor 180E can detect the magnitude of acceleration of the cellular phone 100 in various directions (typically along three axes), and can detect the magnitude and direction of gravity when the handset 100 is stationary. It can also be used to recognize the attitude of the electronic device, and is applied in landscape/portrait switching, pedometers, and other applications.
A distance sensor 180F is used to measure distance. The handset 100 may measure distance by infrared or laser. In some embodiments, in a shooting scenario, the cell phone 100 may use the distance sensor 180F to measure distance for fast focusing.
The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The cellular phone 100 emits infrared light outward through the light emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, the cell phone 100 can determine that there is an object nearby; when insufficient reflected light is detected, it can determine that there is no object nearby. Using the proximity light sensor 180G, the mobile phone 100 can detect that the user is holding it close to the ear during a call, and automatically turn off the screen to save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. The handset 100 may adaptively adjust the brightness of the display 194 according to the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the mobile phone 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect fingerprints. The mobile phone 100 can use the collected fingerprint characteristics to unlock the phone with a fingerprint, access an application lock, take photos, answer incoming calls, and the like.
The temperature sensor 180J is used to detect temperature. In some embodiments, the handset 100 implements a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the mobile phone 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, the cell phone 100 heats the battery 142 when the temperature is below another threshold, to avoid an abnormal shutdown of the cell phone 100 due to low temperature. In still other embodiments, when the temperature is below a further threshold, the mobile phone 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown due to low temperature.
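The tiered temperature policy described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the threshold values and action names are assumptions chosen only to show the structure of the strategy.

```python
# Hypothetical thresholds for the three-tier temperature policy (illustrative).
HIGH_TEMP_C = 45.0      # above this: throttle the processor near the sensor
LOW_TEMP_C = 0.0        # below this: heat the battery
CRITICAL_LOW_C = -10.0  # below this: also boost battery output voltage

def thermal_policy(temp_c: float) -> list:
    """Return the list of actions the handset would take for one reading."""
    actions = []
    if temp_c > HIGH_TEMP_C:
        actions.append("throttle_nearby_processor")
    if temp_c < LOW_TEMP_C:
        actions.append("heat_battery")
    if temp_c < CRITICAL_LOW_C:
        actions.append("boost_battery_voltage")
    return actions
```

Note that the low-temperature tiers stack: a critically low reading triggers both battery heating and the voltage boost.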
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194; together they form a touch screen, also called a "touchscreen". The touch sensor 180K is used to detect a touch operation applied on or near it, and can pass the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on the surface of the mobile phone 100 at a position different from that of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the human vocal part vibrating the bone mass. The bone conduction sensor 180M may also contact the human pulse to receive the blood pressure pulsation signal.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The cellular phone 100 may receive key inputs and generate key signal inputs related to user settings and function control of the cellular phone 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be attached to and detached from the cellular phone 100 by being inserted into the SIM card interface 195 or being pulled out from the SIM card interface 195.
It is to be understood that the components shown in fig. 1 do not constitute a specific limitation on the handset 100; the handset 100 may include more or fewer components than shown, some components may be combined, some components may be split, or a different arrangement of components may be used.
It should be noted that a mobile phone offers many shooting modes, such as a portrait mode, a landscape mode, and the like, but the shooting parameters of each shooting mode are fixed (generally set before the mobile phone leaves the factory); these shooting parameters include aperture, focal length, shutter, sensitivity, exposure, white balance, and so on. However, users differ in aesthetic taste, and an image shot in a given shooting mode does not necessarily meet every user's expectations. For example, an image shot by the mobile phone in the portrait mode may be liked by user A but not by user B, so user B may need to retouch the image to obtain a satisfactory one; this operation is cumbersome and gives a poor experience.
In the image shooting method provided by the embodiments of the present application, the mobile phone can learn the user's degree of attention to each photographic subject in an image, as well as the shooting adjustment parameter of each subject. For example, if the mobile phone learns that most of the stored images contain portraits and only a few contain animals, it determines that the user pays more attention to people and less to animals. For another example, if the mobile phone learns during retouching that people are modified frequently while landscapes are modified rarely, it determines that the degree of attention to people is high and that to landscapes is low. For another example, the mobile phone can also learn and record the user's modifications to the image parameters of people during retouching, so that through many such learning passes it can obtain the shooting adjustment parameters of people and of other photographic subjects.
The mobile phone sets the weight of each photographic subject according to its degree of attention; for example, a subject with a higher degree of attention is given a higher weight, and one with a lower degree of attention a lower weight.
Thus, the weight and the shooting adjustment parameter of each photographic subject can be stored in the mobile phone, and both are updated by real-time learning while the user uses the phone.
Then, when the mobile phone shoots an image, it identifies each photographic subject in the image, determines the final shooting adjustment parameter of each subject according to that subject's weight and shooting adjustment parameter, and adjusts the shooting parameters based on the final shooting adjustment parameter of each subject.
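The per-subject adjustment step described above can be sketched as follows. The data layout (a stored weight and adjustment-parameter table per subject) is an illustrative assumption; the patent does not fix a representation.

```python
# Illustrative learned store: subject -> (weight, {parameter: learned delta}).
# The subjects, weights, and deltas here are made-up example values.
LEARNED = {
    "person": (0.5, {"color": 2.0}),
    "animal": (0.1, {"color": 1.0}),
}

def final_adjustment(subject: str) -> dict:
    """Scale the subject's learned adjustment by its learned weight."""
    weight, deltas = LEARNED[subject]
    return {param: weight * delta for param, delta in deltas.items()}
```

The resulting per-subject deltas would then be mapped onto concrete camera settings (exposure time, aperture, shutter), as the examples later in this section illustrate.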
Several examples of the process of capturing an image by the cell phone 100 shown in fig. 1 are described below.
Example 1:
referring to fig. 2(a), the mobile phone 100 displays a home screen (home screen)201, and icons or components of one or more applications, including an icon 202 of a camera, may be included on the home screen 201. The cellular phone 100 detects an operation for the icon 202 of the camera, and in response to the operation, displays the viewfinder interface 203 of the camera, as shown in fig. 2 (b). As shown in fig. 2(b), the mobile phone 100 displays a finder interface 203, and a preview image, which is an image presented in the finder interface before the mobile phone 100 detects an operation on the shooting control 205, is included in the finder interface 203.
With continued reference to fig. 2(b), when the cell phone 100 detects that the "photograph" mode 204 is selected, the preview image displayed in the viewfinder interface is an image captured based on the initial shooting parameters. Here, the initial shooting parameter is an initial value of the shooting parameter, for example, when the shooting parameter is exposure, the initial shooting parameter is an initial exposure value. It should be noted that the initial shooting parameters may be default shooting parameters or shooting parameters that have been adjusted last time by the user.
Referring to fig. 2(c), the mobile phone 100 detects that the "smart shooting" mode 206 is selected and displays a viewfinder interface 207. The preview image in the viewfinder interface 207 is an image acquired by the mobile phone 100 based on adjusted shooting parameters, i.e., shooting parameters obtained by adjusting the initial shooting parameters. Of course, the mobile phone 100 may also display a prompt message "entering smart shooting mode" on the preview image.
For example, the mobile phone 100 may identify at least one photographic subject on the preview image (there may be various image identification methods, such as an edge detection method, etc., and the embodiments of the present application are not limited thereto), and determine a weight and a photographic adjustment parameter for each photographic subject. Wherein the weight and the shooting adjustment parameter of each shot object may be self-learned by the cellular phone 100. The mobile phone 100 determines the final adjustment parameter of each object to be photographed according to the weight of each object to be photographed and the photographing adjustment parameter, and then adjusts the initial photographing parameter according to the final adjustment parameter to obtain the adjusted photographing parameter.
Taking the case where the photographic subject in the preview image is a blue sky with white clouds as an example, for ease of understanding, fig. 2(c) marks the blue-sky area with diagonal hatching to indicate that its display effect differs from that of the same area in fig. 2(b). This is because the preview image in fig. 2(c) is acquired based on the adjusted shooting parameters, while the preview image in fig. 2(b) is acquired based on the initial shooting parameters.
In example 1, after the mobile phone 100 enters the smart shooting mode, the preview image displayed in the viewfinder interface is an image captured based on the adjusted shooting parameters. That is, after entering the smart shooting mode, the preview image differs from that in a normal mode (e.g., the "photograph" mode), and the user can visually see the effect of the smart shooting mode through the preview image.
Example 2:
referring to fig. 3(a), after detecting that the "smart shooting" mode 301 is selected, the mobile phone 100 displays a viewfinder interface in which a preview image 302 is displayed. The preview image 302 is acquired based on the initial shooting parameters, and weight information of each photographic subject, a "cancel" control 303, and an "apply" control 304 are displayed on the preview image 302. For example, referring to fig. 3(a), the weight information displayed on the preview image includes: "blue sky white cloud: 40%", "green plants: 20%", "single person: 40%". Here, "blue sky white cloud: 40%" indicates that the weight of the blue sky and white clouds in the preview image is 40%, "green plants: 20%" indicates that the weight of the green plants is 20%, and "single person: 40%" indicates that the weight of the person in the preview image is 40%.
For example, the weight may be the ratio of the area occupied by the photographic subject in the preview image, or the weight may be related to a preset grade of the photographic subject: the higher the subject's grade, the higher the weight, and the lower the grade, the lower the weight. The preset grades may be user-defined, set by default before the mobile phone 100 leaves the factory, or self-learned by the mobile phone 100 from the user's shooting habits (for example, the shooting parameters adjusted during shooting, or the image parameters adjusted during retouching). Taking the case where the weight is the subject's area ratio in the preview image as an example, when the preview image changes, the weight of each subject can also change dynamically.
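The area-ratio weighting rule just mentioned can be sketched as follows; the pixel counts and frame size are hypothetical values chosen to reproduce the 40%/20%/40% split in fig. 3(a).

```python
def area_weights(region_pixels: dict, total_pixels: int) -> dict:
    """Weight each subject by the fraction of the preview image its region covers."""
    return {name: px / total_pixels for name, px in region_pixels.items()}

# Assumed 1000 x 2000 preview frame and made-up segmented region sizes.
preview_total = 1000 * 2000
regions = {
    "blue_sky_white_cloud": 800_000,
    "green_plant": 400_000,
    "person": 800_000,
}
```

Because the weights are recomputed per frame, they track the subjects as the preview changes, matching the dynamic behavior described above.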
Illustratively, referring to fig. 3(b), when the handset 100 detects an operation on "blue sky white cloud: 40%", it displays the shooting adjustment parameter corresponding to the blue sky and white clouds, i.e., color +4; when it detects an operation on "green plants: 20%", it displays the shooting adjustment parameter of the green plants, i.e., saturation +2; and when it detects an operation on "single person: 40%", it displays the shooting adjustment parameter of the person, i.e., color +2.
It is noted that, in some embodiments, the weight and the shooting adjustment parameter of each photographic subject displayed on the preview image may be modified. With continued reference to fig. 3(b), when the mobile phone 100 detects an operation (e.g., a long-press) on the "40%" of the blue sky and white clouds, it may switch that weight information into an edit state, e.g., display "-" and "+" next to it; when the mobile phone 100 then detects an operation on "+", it increases the weight, e.g., from the original 40% to 41%. As another example, when the mobile phone 100 detects an operation (such as a long-press) on "blue sky white cloud: color +4", it may set "blue sky white cloud: color +4" to the edit state, e.g., display "-" and "+" next to it; when the mobile phone 100 detects an operation on "+", it may increase the original color +4 to +5. When the cell phone 100 detects an operation on the "apply" control 304, it acquires a new preview image based on the modified shooting parameters and displays it in the viewfinder interface, as shown in fig. 3(c). As shown in fig. 3(c), the image 305 displayed in the viewfinder interface is the new preview image. For ease of understanding, the blue-sky area in fig. 3(c) is marked with diagonal hatching to indicate that its display effect differs from that in fig. 3(b), because the preview image in fig. 3(c) is acquired based on the modified shooting parameters while the preview image in fig. 3(b) is acquired based on the initial shooting parameters.
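The "+"/"-" editing interaction above can be sketched as a single stepping function; treating one tap as one percentage point, and clamping to 0–100%, are assumptions consistent with the 40% → 41% example but not stated by the patent.

```python
def step_weight(weight_pct: int, direction: str) -> int:
    """One tap on '+' or '-' moves the displayed weight by one percentage
    point, clamped to the valid 0-100% range (clamping is assumed)."""
    delta = 1 if direction == "+" else -1
    return max(0, min(100, weight_pct + delta))
```

A tap on "+" while "blue sky white cloud: 40%" is in the edit state would thus yield 41%, as in the example above.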
In example 2, after the mobile phone 100 enters the "smart shooting" mode, it may display the weight and shooting adjustment parameter of each photographic subject. The user may modify them, and the mobile phone 100 may adjust the initial shooting parameters based on the modified weights and shooting adjustment parameters to obtain adjusted shooting parameters, and then acquire a new preview image based on the adjusted shooting parameters.
For example, the cellular phone 100 may compute the product of each subject's weight and its shooting adjustment parameter, and then adjust the initial shooting parameters by that product. For example, referring to fig. 3(b), the mobile phone 100 determines that the shooting adjustment parameter of the blue sky and white clouds is color +4, so it may determine the weighted adjustment to be color +1.6 (i.e., 40% × 4 = 1.6); correspondingly, the mobile phone 100 increases the initial exposure time by 0.6 seconds, so as to increase the color of the blue sky and white clouds on the image by 1.6.
The mobile phone 100 determines that the shooting adjustment parameter of the green plants is saturation +2, so it determines the weighted adjustment to be saturation +0.4 (i.e., 20% × 2 = 0.4); correspondingly, the mobile phone 100 reduces the initial aperture value and lengthens the shutter, for example reducing the aperture by one step (e.g., from f2.8 to f2) and lengthening the shutter by 1/25 seconds, so as to achieve saturation +0.4 for the green plants on the image. How much the aperture should be reduced and the shutter lengthened to achieve saturation +0.4 on the image can be determined by a person skilled in the art through testing, and the determined values can then be stored in the mobile phone 100; this process can be performed before the mobile phone 100 leaves the factory.
The mobile phone 100 determines that the shooting adjustment parameter of the person is color +2, so it determines the weighted adjustment to be color +0.8 (i.e., 40% × 2 = 0.8); correspondingly, the cell phone 100 increases the initial exposure time by 0.3 seconds to achieve a color increase of 0.8 for the person on the image. It is to be understood that the numerical values here are exemplary only and not limiting.
Combining the above, the initial exposure time may be increased by 0.6 seconds and a further 0.3 seconds, the aperture reduced by one step (e.g., from f2.8 to f2), and the shutter lengthened by 1/25 seconds, so that the image captured by the mobile phone 100 better satisfies the user.
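The arithmetic of example 2 can be checked in a few lines. These are exactly the figures from the text above (weights, learned deltas, and exposure-time increments); only the variable names are introduced here for illustration.

```python
# Weighted adjustment = subject weight x learned shooting adjustment parameter.
sky_color = 0.40 * 4          # blue sky / white clouds: color +1.6
plant_saturation = 0.20 * 2   # green plants: saturation +0.4
person_color = 0.40 * 2       # person: color +0.8

# The text maps the two color deltas onto exposure-time increments of
# +0.6 s (sky) and +0.3 s (person); combined, exposure rises by 0.9 s.
total_exposure_increase_s = 0.6 + 0.3
```

The aperture and shutter changes for the green plants are not computed from a formula; as the text notes, their magnitudes come from factory-time testing.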
Example 3:
referring to fig. 4(a), the mobile phone 100 displays a viewfinder interface of the camera that includes a preview image 401. The preview image 401 may be acquired based on the initial shooting parameters, or based on adjusted shooting parameters: for example, the mobile phone 100 identifies the photographic subjects in the shooting scene, obtains final shooting parameters by adjusting the initial shooting parameters based on the weight and shooting adjustment parameter of each subject, and then captures the preview image 401 based on the final shooting parameters. The weight and shooting adjustment parameter of each subject are self-learned by the mobile phone 100. As shown in fig. 4(a), the weight information and shooting adjustment parameter of each subject are not displayed on the preview image 401.
When the mobile phone 100 detects an operation on the shooting control 402, it captures an image 403 and displays it, along with a cancel control 404 and a confirm control 405; it further displays the weight information and shooting adjustment parameter of each photographic subject on the captured image 403, as shown in fig. 4(b). It should be understood that the weight information and shooting adjustment parameter of each subject may be displayed in a blank area of the image 403, or in the area where each subject is located; the embodiments of the present application are not limited in this respect.
The user can modify the weight information and shooting adjustment parameter of each photographic subject in the image 403, as described above and not repeated here. The mobile phone 100 can retouch the image 403 according to the modified weights and shooting adjustment parameters to obtain a retouched image. When the mobile phone 100 detects an operation on the confirm control 405, it stores the retouched image; as shown in fig. 4(c), a thumbnail 406 is displayed in the lower-left corner of the screen, and the image corresponding to the thumbnail 406 is the retouched version of the image 403.
In other embodiments, continuing with fig. 4(b) as an example, after the mobile phone 100 detects a preset operation, it may cancel the display of the weight information and shooting adjustment parameter of each photographic subject on the image 403. The preset operation may be double-clicking a certain area of the image 403; alternatively, a specific control (not shown in the figure) may be displayed on the image 403, and when the mobile phone 100 detects an operation on that control, the display of the weight information and shooting adjustment parameters on the image 403 is cancelled. In this case, the weight information and shooting adjustment parameters are no longer displayed on the image 403, so when the mobile phone 100 detects an operation on the confirm control 405, it stores the image 403 as-is; that is, the mobile phone 100 stores the image 403 without retouching it.
Note that in this example, when the mobile phone 100 detects an operation on the shooting control 402, the captured image 403 may be stored in a cache. The user can modify the weight and shooting adjustment parameter of a subject on the image 403, and the mobile phone 100 may retouch the cached image 403 according to the modified weight and shooting adjustment parameter to obtain a modified image. When the cell phone 100 detects an operation on the confirm control 405, the modified image is stored in the gallery of the cell phone 100. The mobile phone 100 may then delete the image 403 from the cache, or may instead also store the cached image 403 in the gallery. Referring to fig. 5, thumbnails of a plurality of images are stored in the gallery of the mobile phone 100; the image corresponding to thumbnail 501 is the cached image (i.e., the image before retouching), and the image corresponding to thumbnail 502 is the image obtained by retouching the cached image.
As shown in fig. 5, for the user's convenience, a mark 503 is displayed on the thumbnail of the retouched image, i.e., thumbnail 502; the mark 503 indicates that the image corresponding to thumbnail 502 has been retouched. No mark is displayed on the thumbnail of the image before retouching, i.e., thumbnail 501. It should be understood that the mark 503 may be an icon and/or text, etc.; the embodiments of the present application are not limited thereto.
The process of handset 100 self-learning is described below.
Example 1: the cell phone 100 self-learns the weights of the photographic objects.
For example, taking one image as an example, when the mobile phone 100 detects a cropping instruction for the image, it determines that the weight of the subjects remaining on the cropped image is high, and that the weight of the cropped-out subjects is low or 0.
As another example, the cell phone 100 detects that a plurality of images are stored in the gallery. If the mobile phone 100 determines that many of those images contain people (or that many recently captured images contain people) and that few contain animals (or that few recently captured images contain animals), it determines that the weight of people is high and the weight of animals is low.
For another example, the mobile phone 100 detects a retouching operation on an image and determines that the person in the image is modified heavily (for example, multiple retouching operations such as skin smoothing and eye brightening are applied to the person) while the green plants are modified little; it then determines that the weight of the person is greater and the weight of the green plants is smaller.
Of course, the weight of each photographic subject may also be set by default or manually by a user, and the embodiment of the present application is not limited.
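The gallery-based weight learning in this example can be sketched as follows. Counting each subject once per image and normalizing by the total count are illustrative assumptions; the patent leaves the counting and normalization scheme open.

```python
from collections import Counter

def learn_weights(image_subject_lists: list) -> dict:
    """Weight each subject by how often it appears across the stored images.

    `image_subject_lists` holds, per image, the list of subjects detected in it.
    """
    counts = Counter(s for subjects in image_subject_lists for s in set(subjects))
    total = sum(counts.values())
    return {subject: n / total for subject, n in counts.items()}

# Toy gallery: people appear in more images than animals, so (as in the
# example above) people end up with the higher weight.
gallery = [["person"], ["person", "animal"], ["person"], ["animal"]]
```

A real implementation could also restrict the count to recently captured images, as the text suggests.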
Example 2: the cellular phone 100 may self-learn the photographing adjustment parameters of the photographic subject.
For example, the mobile phone 100 detects that a plurality of images are stored in the gallery, detects that the user performs a retouching operation on one of them, and records the adjustments made to the shooting parameters of the photographic subjects in that image during retouching. For instance, if the mobile phone 100 determines that the saturation of the green plants was increased by 2 and the color of the person was increased by 2 during retouching, it determines that the shooting adjustment parameter of green plants is saturation +2 and that of people is color +2.
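The retouch-based learning in this example can be sketched as logging each per-subject parameter edit and aggregating the logged deltas. Averaging repeated edits is an illustrative choice, not the patent's stated rule.

```python
from collections import defaultdict

def learn_adjustments(edit_log: list) -> dict:
    """Average the retouch deltas recorded per (subject, parameter) pair.

    `edit_log` entries are (subject, parameter, delta) tuples, one per edit.
    """
    observed = defaultdict(list)
    for subject, param, delta in edit_log:
        observed[(subject, param)].append(delta)
    return {key: sum(ds) / len(ds) for key, ds in observed.items()}

# Toy log matching the example above, plus one repeated edit to show averaging.
log = [
    ("green_plant", "saturation", 2.0),
    ("person", "color", 2.0),
    ("person", "color", 4.0),
]
```

Over many retouching sessions this would converge on the stored per-subject shooting adjustment parameters used at capture time.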
It should be noted that the self-learning process of the handset 100 may be implemented by a model. A model, i.e., an algorithm, comprises one or more functions/equations. The models mentioned in the embodiments of the present application may be existing models, such as a neural network, a machine learning model, etc. Generally, a model is a functional expression involving model parameters, input parameters, and output parameters; given specific values of the model parameters and input parameters, an output result can be obtained by evaluating the expression. It should be understood that in the embodiments of the present application, the input parameter is the input image; once the model parameters are determined, the model produces an output, which may be the weight of each photographic subject in the input image and/or the shooting adjustment parameter of each subject. That is, example 1 and example 2 above may be implemented using the same model or using different models. The mobile phone 100 can adjust the shooting parameters according to the output result and capture a final image (this process is described later).
Fig. 6 is a schematic flow chart of an image capturing method according to an embodiment of the present disclosure. As shown in fig. 6, the method may include:
s601: the mobile phone 100 detects an input operation, starts a camera, and acquires an initial image based on the initial shooting parameters.
For example, the initial shooting parameter may be an initial value, which may be a minimum value or a value set before the mobile phone 100 leaves the factory; alternatively, the initial shooting parameter may be the shooting parameter used when an image was last captured.
It should be noted that the mobile phone 100 may activate the camera in various ways. For example, taking fig. 2(a) as an example, when the mobile phone 100 detects that the user clicks the icon of the camera, it launches the camera application and activates the camera. For another example, the mobile phone 100 detects that the user clicks the WeChat icon to start WeChat, and activates the camera when it detects an operation of clicking the video-call control in WeChat. For another example, the mobile phone 100 activates the camera when it detects that the user clicks the "shoot" control in Moments. In summary, the technical solution provided by the embodiments of the present application can be applied in any application that can activate a camera to capture an image.
S602: the handset 100 identifies at least one scene type to which the initial image belongs.
Illustratively, the mobile phone 100 stores a plurality of preset scene types. Such as an indoor scene, an outdoor scene, a single shot, a group shot scene, a character scene, a landscape scene, and so forth.
The mobile phone 100 identifies at least one scene type to which the initial image belongs through feature information in the initial image. The feature information includes feature information of the photographic subjects; for example, when the subjects include a person, the scene type may be a single-person scene, and when the subjects include a blue sky, the scene type is an outdoor scene and/or a landscape scene.
An initial image may belong to one or more scene types. Taking fig. 2(b) as an example, the preview image includes a landscape scene, an outdoor scene, a single-person scene, and the like.
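Step S602 can be sketched as a mapping from detected subjects to the scene types they imply, following the examples in the text (a person implies a single-person scene; a blue sky implies an outdoor and/or landscape scene). The rule table itself is a hypothetical illustration.

```python
# Assumed subject -> scene-type rules, built from the examples in the text.
SCENE_RULES = {
    "person": {"single_person"},
    "blue_sky": {"outdoor", "landscape"},
    "green_plant": {"outdoor"},
}

def identify_scene_types(subjects: list) -> set:
    """Union of the scene types implied by each detected subject."""
    types = set()
    for s in subjects:
        types |= SCENE_RULES.get(s, set())
    return types
```

For the preview image of fig. 2(b), the detected subjects would yield the landscape, outdoor, and single-person scene types together.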
S603: the mobile phone 100 determines a weight for each of the at least one scene type.
For example, the weight of each scene type may be manually set by the user, set before the mobile phone 100 leaves the factory, or self-learned by the mobile phone 100; after the mobile phone 100 determines each scene type, it may query the weight corresponding to that scene type. Taking self-learning as an example, the mobile phone 100 detects that a plurality of images are stored in the gallery, and determines that there are more images of the outdoor scene type (or that more recently shot images belong to the outdoor scene type); the mobile phone 100 then assigns the outdoor scene a higher weight.
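The self-learning idea described above can be sketched as deriving each scene type's weight from its frequency among stored images. This is a minimal illustration under assumed names; the patent does not specify the actual weighting formula.

```python
from collections import Counter

def learn_scene_weights(scene_labels):
    """scene_labels: the scene type recognized for each stored image.
    Returns each scene type's share of the gallery as its weight."""
    counts = Counter(scene_labels)
    total = sum(counts.values())
    return {scene: count / total for scene, count in counts.items()}

# Outdoor appears most often in this toy gallery, so it gets the highest weight.
gallery = ["outdoor", "outdoor", "landscape", "outdoor", "single person"]
weights = learn_scene_weights(gallery)
print(weights["outdoor"])  # 0.6
```

Restricting `scene_labels` to recently shot images would implement the "images shot recently" variant mentioned in the text.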
Taking fig. 2(b) as an example, the mobile phone 100 recognizes that the preview image belongs to three scene types of "outdoor", "landscape", and "single person". The weights for each scene type are as follows:
Scene type        Weight of scene type
Outdoors          20%
Landscape         30%
Single person     30%

TABLE 1
S604: the mobile phone 100 determines the shooting adjustment parameters of the shooting objects corresponding to each scene type.
The mobile phone 100 learns the shooting adjustment parameters of the shooting objects in the images of each scene type by itself, specifically referring to the following table 2:
[Table 2: the self-learned shooting adjustment parameters of the photographic subjects in each scene type, published as an image in the original document. As referenced below, it includes a color adjustment of +2 for the blue sky white cloud under the outdoor scene type, +4 under the landscape scene type, and +1 under the single person scene type.]

TABLE 2
S605: the cellular phone 100 determines at least one photographic subject on the original image, and the weight of each photographic subject.
Photographic subject     Weight of subject
Green plant              20%
Blue sky white cloud     40%
Character                40%

TABLE 3
S606: the mobile phone 100 determines the final shooting adjustment parameter of each object according to the weight of each scene type, the weight of each object, and the shooting adjustment parameter of the corresponding object in each scene type.
Continuing with fig. 2(b), the mobile phone 100 determines the final shooting adjustment parameters of the blue sky white cloud according to the following formula:
The first adjustment parameter is: outdoor weight × blue sky white cloud weight × (+2);
the second adjustment parameter is: landscape weight × blue sky white cloud weight × (+4);
the third adjustment parameter is: single person weight × blue sky white cloud weight × (+1);
wherein the outdoor weight (see table 1) is 20%, the landscape weight (see table 1) is 30%, the single person weight (see table 1) is 30%, and the blue sky white cloud weight (see table 3) is 40%.
"2" in the above formula is the color of the blue sky white cloud +2 corresponding to the outdoor scene type in table 2, "4" in the above formula is the color of the blue sky white cloud +4 corresponding to the scene type in table 2, and "1" in the above formula is the color of the blue sky white cloud +1 corresponding to the single scene type in table 2.
Therefore, in the above formulas, the first adjustment parameter is 20% × 40% × 2 = 0.16, and correspondingly the mobile phone 100 can increase the exposure time by t1 to achieve the effect of increasing the color by 0.16; the second adjustment parameter is 30% × 40% × 4 = 0.48, and correspondingly the mobile phone 100 may increase the exposure time by t2 to achieve the effect of increasing the color by 0.48; the third adjustment parameter is 30% × 40% × 1 = 0.12, and correspondingly the mobile phone 100 may increase the exposure time by t3 to achieve the effect of increasing the color by 0.12. That is, the mobile phone 100 may adjust the exposure time three times to achieve the effect of increasing the color of the blue sky white cloud.
Of course, the mobile phone 100 may also adjust once, that is, (20% × 40% × 2) + (30% × 40% × 4) + (30% × 40% × 1) = 0.76, and correspondingly the mobile phone 100 increases the exposure time by t4; that is, the mobile phone 100 may adjust the exposure time once to achieve the effect of changing the color of the blue sky white cloud by +0.76.
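The weighted-sum computation above can be sketched as follows. The function and variable names are hypothetical; the scene weights, subject weight, and per-scene adjustments come from Tables 1–3 for the blue sky white cloud.

```python
def final_adjustment(scene_weights, subject_weight, scene_adjustments):
    """Scale each scene type's learned adjustment by the scene weight and the
    subject weight, then sum the contributions into one final parameter."""
    return sum(scene_weights[scene] * subject_weight * delta
               for scene, delta in scene_adjustments.items())

scene_weights = {"outdoor": 0.20, "landscape": 0.30, "single person": 0.30}
scene_adjustments = {"outdoor": 2, "landscape": 4, "single person": 1}  # color +2/+4/+1

result = final_adjustment(scene_weights, 0.40, scene_adjustments)
print(round(result, 2))  # 0.76 = 0.16 + 0.48 + 0.12
```

Computing the sum first corresponds to the single exposure-time adjustment (t4); evaluating each term separately corresponds to the three-step adjustment (t1, t2, t3).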
In a similar manner, the mobile phone 100 can determine the final shooting adjustment parameter for the green plant in fig. 2(b), and the final shooting adjustment parameter for the person.
In the above embodiment, the mobile phone 100 considers both the weight of at least one scene type of the original image and the weight of each photographic subject on the original image when calculating the final shooting adjustment parameter. In practical applications, the mobile phone 100 may consider only the weight of each photographic subject: for example, the mobile phone 100 may self-learn the shooting adjustment parameter of each photographic subject while the user retouches images, and when calculating the final shooting adjustment parameter, multiply the shooting adjustment parameter of each photographic subject by the weight of that photographic subject; the product is the final shooting adjustment parameter of that photographic subject. Of course, the mobile phone 100 may also consider only the weight of at least one scene type: for example, the mobile phone 100 learns the shooting adjustment parameter while the user retouches or shoots images of an outdoor scene, and when calculating the final shooting adjustment parameter, multiplies the shooting adjustment parameter of each scene type by the weight of that scene type; the product is the final shooting adjustment parameter.
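The two simplified variants described above reduce to a single product (or a sum of products) instead of the full two-level weighting. The sketch below uses hypothetical names and values, not anything from the patent's implementation.

```python
def object_only_adjustments(subject_weights, subject_adjustments):
    """Subject-only variant: final adjustment per subject is
    subject weight * self-learned adjustment for that subject."""
    return {s: subject_weights[s] * subject_adjustments[s]
            for s in subject_adjustments}

def scene_only_adjustment(scene_weights, scene_adjustments):
    """Scene-only variant: final adjustment is the sum over scene types of
    scene weight * self-learned adjustment for that scene type."""
    return sum(scene_weights[s] * scene_adjustments[s]
               for s in scene_adjustments)

print(object_only_adjustments({"green plant": 0.5}, {"green plant": 2}))
# → {'green plant': 1.0}
```

Either variant drops one of the two weight tables, trading accuracy for a simpler self-learning step.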
S607: the mobile phone 100 adjusts the initial photographing parameters based on the final photographing adjustment parameters of each subject.
As an example, the mobile phone 100 may increase the color of the blue sky white cloud (e.g., the area where the blue sky white cloud is located) in the preview image by 0.76, increase the saturation of the green plants (e.g., the area where the green plants are located) by 0.12, and increase the color of the person (e.g., the area where the person is located) by 0.24.
As another example, the mobile phone 100 increases the color of the entire preview image by 0.76, then increases the saturation of the entire preview image by 0.12, and then increases the color of the entire preview image by 0.24.
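The two application strategies above can be contrasted in a toy sketch: adjusting only the region where each subject lies, versus applying each adjustment to the whole preview image. The pixel representation, region masks, and `adjust` function are all hypothetical placeholders.

```python
def adjust(value, delta):
    # Placeholder pixel adjustment: simply add the delta to the pixel value.
    return value + delta

def apply_per_region(pixels, masks, deltas):
    """pixels: {(x, y): value}; masks: {subject: set of (x, y)};
    deltas: {subject: final adjustment}. Only each subject's region changes."""
    out = dict(pixels)
    for subject, delta in deltas.items():
        for xy in masks[subject]:
            out[xy] = adjust(out[xy], delta)
    return out

def apply_globally(pixels, deltas):
    """Apply every adjustment to the whole image, one pass per parameter."""
    out = dict(pixels)
    for delta in deltas.values():
        out = {xy: adjust(v, delta) for xy, v in out.items()}
    return out
```

With `deltas = {"sky": 0.76}`, the per-region strategy changes only the sky pixels, while the global strategy shifts every pixel by 0.76 — which is exactly the difference between the two examples above.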
S608: the mobile phone 100 obtains a preview image by shooting based on the adjusted shooting parameters.
S609: the mobile phone 100 displays a viewing interface, which includes the preview image.
Illustratively, referring to fig. 2(c), the preview image in fig. 2(c) is an image captured by the mobile phone 100 based on the adjusted image capturing parameters.
Another scenario is described below: the gallery.
Referring to fig. 7(a), the mobile phone 100 displays a main interface 701, the main interface 701 includes icons of a plurality of applications, including an icon 702 of a gallery, and when the mobile phone 100 detects an operation on the icon 702, the interface 703 of the gallery is displayed, as shown in fig. 7 (b). The gallery interface 703 includes thumbnails of a plurality of images, and when the cell phone 100 detects that the user clicks on the thumbnail of the image 704, the interface shown in fig. 7(c) is displayed.
Referring to fig. 7(c), the cellular phone 100 displays an image 704, and displays a logo 705 on the image 704. When the mobile phone 100 detects an operation for the identifier 705, the mobile phone 100 displays a prompt message 706 for prompting the user to enter the smart retouching mode, as shown in fig. 7 (d).
With continued reference to fig. 7(d), after the mobile phone 100 enters the smart retouching mode, the weight information and the shooting adjustment parameter of each photographic subject in the image 704 may be displayed. The weight information and shooting adjustment parameter of each photographic subject can be modified; when the mobile phone 100 detects an operation on the confirm control, the image is retouched based on the modified weight and shooting adjustment parameter of each photographic subject; when the mobile phone 100 detects an operation on the cancel control, the smart retouching mode is exited. The manner in which the mobile phone 100 retouches the image based on the modified weight and shooting adjustment parameter of each photographic subject is similar to the foregoing manner, and is not repeated here.
With reference to the foregoing embodiments and the related drawings, the present application provides an image capturing method, which can be implemented in an electronic device (e.g., a mobile phone, a tablet computer, etc.) with a touch screen and a camera as shown in fig. 1. As shown in fig. 8, the method may include the steps of:
S801: detecting an input operation;
for example, taking fig. 2(a) as an example, the input operation may be an operation of clicking an icon 202 in the main interface 201 in the mobile phone 100.
S802: responding to the input operation, starting a camera, and displaying a viewing interface, wherein the viewing interface comprises a preview image;
illustratively, taking fig. 2(b) as an example, the viewing interface may be interface 203.
S803: detecting a first operation aiming at a shooting control;
illustratively, taking fig. 4(a) as an example, the first operation may be an operation of clicking the shooting control 402 in the viewfinder interface 401.
S804: displaying a first image, which is an image photographed based on initial photographing parameters, on which weight information and photographing adjustment parameters of each photographing object are displayed, in response to the first operation;
after the electronic equipment detects the operation aiming at the shooting control, a first image is obtained through shooting, the first image is displayed, and the weight and the shooting adjustment parameter of each shooting object are displayed on the first image. Illustratively, taking fig. 4(b) as an example, the first image may be an image 403 displayed on the mobile phone 100.
S805: detecting a second operation for storing the first image;
illustratively, taking fig. 4(b) as an example, the second operation may be an operation of clicking the determination control 405.
S806: and responding to the second operation, and storing a second image, wherein the second image is obtained after the first image is adjusted according to the weight information and the shooting adjustment parameters of each shooting object.
For example, taking fig. 4(b) as an example, after the mobile phone 100 detects an operation on the determination control 405, a second image obtained after the first image (that is, the image 403) is retouched may be stored. Taking fig. 4(c) as an example, the lower left corner of the mobile phone 100 displays a thumbnail 406, which corresponds to the second image; that is, the mobile phone 100 stores the second image in the gallery.
The various embodiments of the present application can be combined arbitrarily to achieve different technical effects.
In the embodiments provided in the present application, the method provided in the embodiments of the present application is described from the perspective of the electronic device (such as the mobile phone 100) as the execution subject. To implement the functions in the method provided by the embodiments of the present application, the electronic device may include a hardware structure and/or a software module, and implement the functions in the form of a hardware structure, a software module, or a combination of both. Whether a given function is implemented as a hardware structure, a software module, or a hardware structure plus a software module depends on the particular application and the design constraints of the technical solution.
As shown in fig. 9, other embodiments of the present application disclose an electronic device, such as a mobile phone or an iPad, which may include: a touch screen 901, wherein the touch screen 901 comprises a touch-sensitive surface 906 and a display 907; one or more processors 902; a memory 903; a plurality of application programs 908; the various devices described above may be connected by one or more communication buses 905. The display 907 may be used to display a main interface or a display interface of one of the plurality of application programs 908, such as the interface of the camera (i.e., the viewing interface), and may also be used to display an image captured by the electronic device.
Wherein the one or more computer programs 904 are stored in the memory 903 and configured to be executed by the one or more processors 902, the one or more computer programs 904 comprising instructions that may be used to perform the steps as in figures 2-8 and the corresponding embodiments.
It should be noted that the division of units in the embodiments of the present application is schematic and is only a logical function division; there may be other division manners in actual implementation. Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. For example, in the above embodiment, the first obtaining unit and the second obtaining unit may be the same unit or different units. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
As used in the above embodiments, the terms "when …" or "after …" may be interpreted to mean "if …" or "after …" or "in response to determination …" or "in response to detection …", depending on the context. Similarly, depending on the context, the phrase "at the time of determination …" or "if (a stated condition or event) is detected" may be interpreted to mean "if the determination …" or "in response to the determination …" or "upon detection (a stated condition or event)" or "in response to detection (a stated condition or event)". In addition, in the above-described embodiments, relational terms such as first and second are used to distinguish one entity from another entity without limiting any actual relationship or order between the entities.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that a portion of this patent application contains material which is subject to copyright protection. The copyright owner reserves the copyright rights whatsoever, except for making copies of the patent files or recorded patent document contents of the patent office.

Claims (9)

1. An image shooting method applied to an electronic device, comprising the following steps:
detecting an input operation;
responding to the input operation, starting a camera, and displaying a viewing interface, wherein the viewing interface comprises a preview image;
detecting a first operation aiming at a shooting control;
displaying a first image, which is an image photographed based on initial photographing parameters, on which weight information and photographing adjustment parameters of each photographing object are displayed, in response to the first operation;
detecting a second operation for storing the first image;
and responding to the second operation, and storing a second image, wherein the second image is obtained after the first image is adjusted according to the product of the weight information of each shooting object and the shooting adjustment parameter.
2. The method of claim 1, wherein prior to detecting the second operation for storing the first image, the method further comprises:
detecting a modification operation of the weight information and/or shooting adjustment parameters for the first shooting object;
responding to the modification operation, processing the first image to obtain a third image, wherein the third image is obtained after the first image is adjusted according to the modified weight information and/or shooting adjustment parameters;
in response to the second operation, storing a second image, including:
in response to the second operation, storing the third image.
3. The method of claim 1 or 2, wherein the method further comprises:
saving the first image;
wherein a first mark is set on the second image, and the first mark is not set on the first image.
4. The method according to claim 1 or 2, wherein the weight of each photographic subject is determined according to the proportion of the area occupied by each photographic subject in the first image; or is preset; or is determined according to a user selection operation; or is determined according to the user's operation behavior on images stored in the electronic device.
5. The method of claim 4, wherein the weight of each photographic subject is determined according to user operation behavior on images stored in the electronic device, comprising:
the weight of each photographic subject is determined according to the number of times each photographic subject is retouched in the stored images; or the number of times each photographic subject is cropped in the stored images; or the number of images containing each photographic subject among all the stored images.
6. The method according to claim 1, 2 or 5, wherein the shooting adjustment parameter of each photographic subject is preset, or is determined according to the image parameter adjusted when the user retouches each photographic subject in the images stored in the electronic device.
7. The method of claim 1, 2 or 5, wherein prior to detecting the second operation for storing the first image, the method further comprises:
detecting a third operation;
canceling display of the weight information and the photographing adjustment parameter of each of the photographic subjects on the first image in response to the third operation;
in response to the second operation, storing a second image, including:
in response to the second operation, storing the first image.
8. An electronic device comprising at least one processor and memory;
the memory for storing one or more computer programs;
one or more computer programs stored in the memory that, when executed by the at least one processor, enable the electronic device to implement the method of any of claims 1-7.
9. A computer storage medium, characterized in that the computer-readable storage medium comprises a computer program which, when run on an electronic device, causes the electronic device to perform the method according to any one of claims 1 to 7.
CN201910402906.9A 2019-05-15 2019-05-15 Image shooting method and electronic equipment Active CN110225244B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910402906.9A CN110225244B (en) 2019-05-15 2019-05-15 Image shooting method and electronic equipment
PCT/CN2020/090366 WO2020228792A1 (en) 2019-05-15 2020-05-14 Image capture method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910402906.9A CN110225244B (en) 2019-05-15 2019-05-15 Image shooting method and electronic equipment

Publications (2)

Publication Number Publication Date
CN110225244A CN110225244A (en) 2019-09-10
CN110225244B true CN110225244B (en) 2021-02-09

Family

ID=67821095

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910402906.9A Active CN110225244B (en) 2019-05-15 2019-05-15 Image shooting method and electronic equipment

Country Status (2)

Country Link
CN (1) CN110225244B (en)
WO (1) WO2020228792A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110225244B (en) * 2019-05-15 2021-02-09 华为技术有限公司 Image shooting method and electronic equipment
CN112532854B (en) * 2019-09-17 2022-05-31 华为技术有限公司 Image processing method and electronic equipment
CN111027374B (en) * 2019-10-28 2023-06-30 华为终端有限公司 Image recognition method and electronic equipment
CN111666124B (en) * 2020-05-29 2023-10-27 平安科技(深圳)有限公司 Image acquisition device calling method, device, computer device and storage medium
CN115835010A (en) 2020-08-27 2023-03-21 荣耀终端有限公司 Shooting method and terminal
CN112215804B (en) * 2020-09-15 2021-10-15 数坤(北京)网络科技股份有限公司 Data processing method, equipment and computer storage medium
CN112422814A (en) * 2020-09-30 2021-02-26 华为技术有限公司 Shooting method and electronic equipment
CN115118840A (en) * 2021-03-22 2022-09-27 Oppo广东移动通信有限公司 Shooting method and device, electronic equipment and storage medium
CN112948048A (en) * 2021-03-25 2021-06-11 维沃移动通信(深圳)有限公司 Information processing method, information processing device, electronic equipment and storage medium
CN116723415B (en) * 2022-10-20 2024-04-19 荣耀终端有限公司 Thumbnail generation method and terminal equipment
CN116703791B (en) * 2022-10-20 2024-04-19 荣耀终端有限公司 Image processing method, electronic device and readable medium
WO2024119501A1 (en) * 2022-12-09 2024-06-13 深圳传音控股股份有限公司 Image editing method, apparatus, terminal device, and storage medium
CN116320716B (en) * 2023-05-25 2023-10-20 荣耀终端有限公司 Picture acquisition method, model training method and related devices

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107197170A (en) * 2017-07-14 2017-09-22 维沃移动通信有限公司 A kind of exposal control method and mobile terminal

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006287814A (en) * 2005-04-04 2006-10-19 Fuji Photo Film Co Ltd Imaging apparatus and method of determining motion vector
JP2009152827A (en) * 2007-12-20 2009-07-09 Nikon Corp Image processing method for time-lapse image, image processing program, and image processor
CN105391940B (en) * 2015-11-05 2019-05-28 华为技术有限公司 A kind of image recommendation method and device
GB201611385D0 (en) * 2016-06-30 2016-08-17 Nokia Technologies Oy Recommendation method and system
CN106303250A (en) * 2016-08-26 2017-01-04 维沃移动通信有限公司 A kind of image processing method and mobile terminal
US10027879B2 (en) * 2016-11-15 2018-07-17 Google Llc Device, system and method to provide an auto-focus capability based on object distance information
US11425309B2 (en) * 2017-06-09 2022-08-23 Huawei Technologies Co., Ltd. Image capture method and apparatus
CN107995422B (en) * 2017-11-30 2020-01-10 Oppo广东移动通信有限公司 Image shooting method and device, computer equipment and computer readable storage medium
CN108182031A (en) * 2017-12-28 2018-06-19 努比亚技术有限公司 A kind of photographic method, terminal and computer readable storage medium
CN108629814B (en) * 2018-05-14 2022-07-08 北京小米移动软件有限公司 Camera adjusting method and device
CN109002243B (en) * 2018-06-28 2021-06-29 维沃移动通信有限公司 Image parameter adjusting method and terminal equipment
CN110225244B (en) * 2019-05-15 2021-02-09 华为技术有限公司 Image shooting method and electronic equipment

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107197170A (en) * 2017-07-14 2017-09-22 维沃移动通信有限公司 A kind of exposal control method and mobile terminal

Also Published As

Publication number Publication date
WO2020228792A1 (en) 2020-11-19
CN110225244A (en) 2019-09-10

Similar Documents

Publication Publication Date Title
CN110225244B (en) Image shooting method and electronic equipment
CN110401766B (en) Shooting method and terminal
WO2021093793A1 (en) Capturing method and electronic device
CN111183632A (en) Image capturing method and electronic device
CN112532857A (en) Shooting method and equipment for delayed photography
WO2021129198A1 (en) Method for photography in long-focal-length scenario, and terminal
WO2020029306A1 (en) Image capture method and electronic device
CN113497881B (en) Image processing method and device
CN114092364A (en) Image processing method and related device
CN112580400B (en) Image optimization method and electronic equipment
CN113660408B (en) Anti-shake method and device for video shooting
CN113542580B (en) Method and device for removing light spots of glasses and electronic equipment
WO2022228274A1 (en) Preview image display method in zoom photographing scenario, and electronic device
CN114115770A (en) Display control method and related device
CN113467735A (en) Image adjusting method, electronic device and storage medium
CN112532854B (en) Image processing method and electronic equipment
CN112532508B (en) Video communication method and video communication device
CN112584037A (en) Method for saving image and electronic equipment
CN116055859A (en) Image processing method and electronic device
CN113473057B (en) Video recording method and electronic equipment
CN113472996B (en) Picture transmission method and device
CN111049968A (en) Control method and electronic equipment
CN116709018B (en) Zoom bar segmentation method and electronic equipment
CN116437194B (en) Method, apparatus and readable storage medium for displaying preview image
CN115002333B (en) Image processing method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant