WO2020102978A1 - Image processing method and electronic device - Google Patents

Image processing method and electronic device

Info

Publication number
WO2020102978A1
WO2020102978A1, PCT/CN2018/116443, CN2018116443W
Authority
WO
WIPO (PCT)
Prior art keywords
light effect
electronic device
light
picture
effect template
Prior art date
Application number
PCT/CN2018/116443
Other languages
English (en)
French (fr)
Inventor
王习之
刘昆
李阳
吴磊
杜成
王强
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/CN2018/116443 (WO2020102978A1)
Priority to CN201880094372.1A (CN112262563B)
Publication of WO2020102978A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules

Definitions

  • the present application relates to the field of image processing, in particular to an image processing method and electronic equipment.
  • the phone can provide multiple shooting modes: portrait shooting mode, large aperture shooting mode, night scene shooting mode, etc.
  • In the portrait shooting mode, the mobile phone can provide a variety of light effect templates.
  • Different light effect templates represent (or correspond to) different light effect parameters, such as light source position, layer fusion parameter, texture pattern projection position, projection direction, etc.
  • the user can select different light effect templates to make the photos taken show different effects.
  • Users often need to go through multiple attempts to find a suitable light effect template, which makes the operation cumbersome for the user and lowers the use efficiency of the mobile phone.
  • the embodiments of the present application provide an image processing method and an electronic device, which can enable a user to quickly select a suitable light effect template and reduce user operations.
  • In a first aspect, an embodiment of the present application provides a photographing method, including: an electronic device turns on a camera to collect an image of a photographed object; the electronic device displays a first user interface, where the first user interface includes a first display area, a shooting mode list, and a light effect template option bar; the shooting mode list includes options of one or more shooting modes, the one or more shooting modes include a first shooting mode, the first shooting mode has been selected, and the first shooting mode is a shooting mode that highlights the people included in the captured picture; the light effect template option bar includes options of two or more light effect templates, and a light effect template includes one or more light effect parameters used to process pictures taken in the first shooting mode; the electronic device displays the image collected by the camera in the first display area; and the electronic device highlights, in the light effect template option bar, the option of the light effect template matching the shooting scene, where the shooting scene is the shooting scene corresponding to the image displayed in the first display area.
  • the above-mentioned first display area may be referred to as a framing frame.
  • the above-mentioned first shooting mode may be referred to as a portrait shooting mode.
  • The light effect template includes one or more of the following light effect parameters: fusion parameters of the diffuse reflection layer, the highlight layer, and the shadow layer; fusion parameters of the background part of the RGB image and the projected texture layer of the background in the overall light effect rendering; the color (pixel value) of the projected texture; the stretch value of the projected texture; the projection position of the texture pattern; the projection direction; and fusion parameters of the projected texture layer of the portrait, the face light effect rendering result, and the face part of the RGB image.
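  • For illustration only, the parameter set carried by one light effect template can be pictured as a small record of named values; the Python sketch below is a hypothetical representation (the class and field names are invented here and do not appear in the application). Two templates would then differ only in the values stored in such a record.

        # Hypothetical sketch of a light effect template's parameter set (not the application's data format).
        from dataclasses import dataclass
        from typing import Tuple

        @dataclass
        class LightEffectTemplate:
            name: str                                          # e.g. "theater light"
            layer_fusion: Tuple[float, float, float]           # fusion weights for diffuse, highlight, shadow layers
            background_fusion: float                           # weight of background projected-texture layer vs. RGB background
            texture_color: Tuple[int, int, int]                # pixel value (RGB) of the projected texture
            texture_stretch: float                             # stretch value of the projected texture
            texture_position: Tuple[float, float]              # projection position of the texture pattern
            projection_direction: Tuple[float, float, float]   # projection direction
            face_fusion: float                                 # weight of face light-effect result vs. RGB face region

        # Example template filled with made-up values:
        theater = LightEffectTemplate("theater light", (0.6, 0.3, 0.1), 0.5,
                                      (255, 230, 200), 1.2, (0.5, 0.3), (0.0, -1.0, 0.5), 0.7)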
  • When taking a photograph in the first shooting mode, the electronic device can intelligently identify the current shooting scene and recommend to the user a light effect template matching the current shooting scene, which enables the user to quickly choose a suitable light effect template, reduces user operations, and improves the use efficiency of the electronic device.
  • The first user interface further includes a shooting control and a first control; after the electronic device highlights the option of the light effect template matching the shooting scene in the light effect template option bar, the method further includes: after detecting a user operation acting on the shooting control, the electronic device uses the light effect parameters corresponding to the selected light effect template to process the captured picture to generate a first picture, and displays a thumbnail of the first picture in the first control, where the thumbnail of the first picture contains fewer pixels than the first picture.
  • the selected light effect template is the light effect template matching the shooting scene.
  • the user may select a light effect template recommended by the electronic device that matches the shooting scene, and adopting the light effect template to process the picture may make the shooting effect of the obtained picture better.
  • Using the light effect parameters corresponding to the selected light effect template to process the captured picture to generate the first picture includes: the electronic device processes the captured picture using the light effect parameters corresponding to the selected light effect template, the illumination direction, and the depth data to generate the first picture; the illumination direction is the illumination direction identified from the picture displayed in the first display area, and the depth data is the depth data of the photographed object.
  • The technical solution provided by the embodiment of the present application can process the captured picture according to the actual lighting direction in the shooting scene, so that the light effect applied later does not conflict with the original lighting of the picture, and the shadows caused by occlusion, especially the shadows cast around the eye sockets and the nose, are rendered, which greatly enhances the three-dimensional sense of the face.
  • The method further includes: processing the portrait part and the background part separately according to the light effect parameters corresponding to the selected light effect template and the depth data, where the portrait part and the background part are obtained by segmenting the captured picture.
  • The technical solution provided by the embodiment of the present application can render the portrait part and the background part separately, so that the light effect follows the undulations of the portrait, increasing the realism and three-dimensional sense of the picture.
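  • A minimal sketch of such separate rendering, assuming a portrait mask and two already rendered layers are available (the function and argument names below are illustrative, not taken from the application):

        # Illustrative sketch: blend separately rendered portrait and background layers with a portrait mask.
        import numpy as np

        def compose_portrait_and_background(rendered_portrait: np.ndarray,
                                            rendered_background: np.ndarray,
                                            portrait_mask: np.ndarray) -> np.ndarray:
            """rendered_* are HxWx3 float images in [0, 1]; portrait_mask is HxW in [0, 1] (1 = portrait pixel)."""
            mask = portrait_mask[..., None]        # broadcast the mask over the color channels
            return mask * rendered_portrait + (1.0 - mask) * rendered_background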
  • Highlighting the option of the light effect template matching the shooting scene in the light effect template option bar includes one or more of the following: displaying the option of the light effect template matching the shooting scene at the first display position in the light effect template option bar; highlighting the option of the light effect template matching the shooting scene in the light effect template option bar; and dynamically displaying the option of the light effect template matching the shooting scene in the light effect template option bar.
  • The embodiments of the present application provide a variety of ways to highlight the option of the light effect template matching the shooting scene, through which the user can more quickly and intuitively find the light effect template suitable for the current shooting scene, reducing user operations and improving use efficiency.
  • The method further includes: the electronic device detects a first user operation acting on the first control; in response to the first user operation, the electronic device displays a second user interface for viewing the first picture.
  • the above-mentioned first user operation may be a click operation.
  • the technical solution provided by the embodiment of the present application may cause the electronic device to display a second user interface for viewing the first picture by clicking the first control.
  • The second user interface includes a second display area and a second control, where the second display area is used to display the first picture; the method further includes: the electronic device detects a second user operation acting on the second control; in response to the second user operation, the electronic device displays a second user interface for editing the first picture.
  • the above-mentioned second user operation may be a click operation.
  • the technical solution provided by the embodiment of the present application can cause the electronic device to display a second user interface for editing the first picture by clicking the second control, and the user can edit the light effect of the first picture.
  • the technical solution can improve the interaction between users and electronic devices.
  • The second user interface further includes a light source indicator, where the light source indicator is used to indicate the illumination direction of the light source in the shooting scene; the method further includes: the electronic device detects a third user operation acting on the light source indicator, and in response to the third user operation, updates the illumination direction and re-executes the step in which the electronic device processes the captured picture using the light effect parameters corresponding to the selected light effect template, the illumination direction, and the depth data.
  • the third user operation may be a sliding operation.
  • the technical solution provided by the embodiment of the present application can change the illumination direction of the light source by sliding the light source indicator, so that the electronic device processes the captured picture according to the new illumination direction.
  • the technical solution can improve the interaction between users and electronic devices.
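  • One plausible mapping from a slide of the light source indicator to an illumination direction, offered purely as an assumption rather than a detail disclosed above, is to treat the indicator's 2D position as a point on a virtual hemisphere over the picture:

        # Hypothetical mapping from the light source indicator position to a 3D illumination direction.
        import math

        def indicator_to_light_direction(x: float, y: float) -> tuple:
            """x, y: indicator position normalized to [-1, 1] relative to the picture center."""
            r2 = x * x + y * y
            if r2 > 1.0:                           # clamp to the hemisphere boundary
                norm = math.sqrt(r2)
                x, y, r2 = x / norm, y / norm, 1.0
            z = math.sqrt(1.0 - r2)                # height of the virtual hemisphere at (x, y)
            return (x, y, z)                       # unit vector pointing from the scene toward the light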
  • The second user interface further includes a light intensity indicator, where the light intensity indicator is used to indicate the light intensity of the light source; the method further includes: the electronic device detects a fourth user operation acting on the light intensity indicator, and in response to the fourth user operation, updates the light source intensity and processes the captured picture using the light effect parameters corresponding to the selected light effect template, the illumination direction, the light source intensity, and the depth data.
  • the above-mentioned fourth user operation may be a sliding operation for increasing or decreasing the light intensity.
  • the fourth user operation may be a user operation of sliding left or sliding right.
  • the fourth user operation may be a user operation of sliding up or sliding down.
  • the fourth user operation may be a click operation.
  • the technical solution provided by the embodiment of the present application can change the light intensity of the light source through the fourth user operation on the light intensity indicator, so that the electronic device processes the captured picture according to the new light intensity.
  • the technical solution can improve the interaction between users and electronic devices.
  • The second user interface further includes the light effect template option bar; the method further includes: the electronic device detects a fifth user operation acting on the light effect template option bar, and in response to the fifth user operation, updates the selected light effect template and re-executes the step in which the electronic device processes the captured picture using the light effect parameters corresponding to the selected light effect template, the illumination direction, and the depth data.
  • The fifth user operation may be a click operation on a light effect template option included in the light effect template option bar, so that the electronic device processes the captured picture according to the light effect parameters corresponding to the new light effect template.
  • the technical solution can improve the interaction between users and electronic devices.
  • In a second aspect, an embodiment of the present application provides an electronic device, including: one or more processors, a memory, one or more cameras, and a touch screen; the memory, the one or more cameras, and the touch screen are coupled to the one or more processors; the memory is used to store computer program code, and the computer program code includes computer instructions; the one or more processors call the computer instructions to execute: turning on the camera to collect an image of the photographed object, and displaying a first user interface, where the first user interface includes a first display area, a shooting mode list, and a light effect template option bar; the shooting mode list includes options of one or more shooting modes, the one or more shooting modes include a first shooting mode, and the first shooting mode has been selected.
  • the first shooting mode is a shooting mode that highlights the people included in the captured picture.
  • The light effect template option bar includes options of two or more light effect templates; a light effect template includes one or more light effect parameters for processing pictures taken in the first shooting mode; the image collected by the camera is displayed in the first display area; and the option of the light effect template matching the shooting scene is highlighted in the light effect template option bar, where the shooting scene is the shooting scene corresponding to the image displayed in the first display area.
  • The first user interface further includes a shooting control and a first control; after the processor highlights the option of the light effect template matching the shooting scene in the light effect template option bar, the processor further executes: after detecting a user operation acting on the shooting control, processing the captured picture using the light effect parameters corresponding to the selected light effect template to generate a first picture, and displaying a thumbnail of the first picture in the first control, where the thumbnail of the first picture contains fewer pixels than the first picture.
  • the selected light effect template is the light effect template matching the shooting scene.
  • When using the light effect parameters corresponding to the selected light effect template to process the captured picture and generate the first picture, the processor specifically executes: processing the captured picture using the light effect parameters corresponding to the selected light effect template, the illumination direction, and the depth data to generate the first picture, where the illumination direction is the illumination direction identified from the picture displayed in the first display area, and the depth data is the depth data of the subject.
  • The processor further executes: separately processing the portrait part and the background part according to the light effect parameters corresponding to the selected light effect template and the depth data, where the portrait part and the background part are obtained by segmenting the captured picture.
  • Highlighting the option of the light effect template matching the shooting scene in the light effect template option bar includes one or more of the following: displaying the option of the light effect template matching the shooting scene at the first display position in the light effect template option bar; highlighting the option of the light effect template matching the shooting scene in the light effect template option bar; and dynamically displaying the option of the light effect template matching the shooting scene in the light effect template option bar.
  • The processor further executes: detecting a first user operation acting on the first control; in response to the first user operation, the electronic device displays a second user interface for viewing the first picture.
  • The second user interface includes a second display area and a second control, where the second display area is used to display the first picture; the processor further executes: detecting a second user operation acting on the second control; in response to the second user operation, the electronic device displays a second user interface for editing the first picture.
  • The second user interface further includes a light source indicator, where the light source indicator is used to indicate the illumination direction of the light source in the shooting scene; the processor further executes: detecting a third user operation acting on the light source indicator, and in response to the third user operation, updating the illumination direction and re-executing the processing of the captured picture using the light effect parameters corresponding to the selected light effect template, the illumination direction, and the depth data.
  • The second user interface further includes a light intensity indicator, where the light intensity indicator is used to indicate the light intensity of the light source; the processor further executes: detecting a fourth user operation acting on the light intensity indicator, and in response to the fourth user operation, updating the light source intensity and processing the captured picture using the light effect parameters corresponding to the selected light effect template, the illumination direction, the light source intensity, and the depth data.
  • The second user interface further includes the light effect template option bar; the processor further executes: detecting a fifth user operation acting on the light effect template option bar, and in response to the fifth user operation, updating the selected light effect template and re-executing the processing of the captured picture using the light effect parameters corresponding to the selected light effect template, the illumination direction, and the depth data.
  • In a third aspect, an embodiment of the present application provides a graphical user interface on an electronic device.
  • the electronic device has a touch screen, a camera, a memory, and a processor to execute a program stored in the memory.
  • The graphical user interface includes a first user interface; the first user interface includes a first display area, a shooting mode list, and a light effect template option bar; the shooting mode list includes options of one or more shooting modes, and the one or more shooting modes include a first shooting mode.
  • the first shooting mode has been selected.
  • the first shooting mode is a shooting mode that highlights the people included in the captured picture.
  • The light effect template option bar includes options of two or more light effect templates.
  • A light effect template includes one or more light effect parameters for processing pictures taken in the first shooting mode. The image collected by the camera is displayed in the first display area, and the option of the light effect template matching the shooting scene is highlighted in the light effect template option bar, where the shooting scene is the shooting scene corresponding to the image displayed in the first display area.
  • Highlighting the option of the light effect template matching the shooting scene in the light effect template option bar includes one or more of the following: displaying the option of the light effect template matching the shooting scene at the first display position in the light effect template option bar; highlighting the option of the light effect template matching the shooting scene in the light effect template option bar; and dynamically displaying the option of the light effect template matching the shooting scene in the light effect template option bar.
  • The first user interface further includes a shooting control and a first control; in response to a detected user operation acting on the shooting control, a thumbnail of the first picture is displayed within the first control, where the first picture is the captured picture and the thumbnail of the first picture contains fewer pixels than the first picture; in response to a detected user operation acting on the first control, a second user interface for viewing the first picture is displayed.
  • The second user interface includes a second display area and a second control, where the second display area is used to display the first picture; in response to a detected user operation acting on the second control, a second user interface for editing the first picture is displayed.
  • The second user interface further includes a light source indicator, a light intensity indicator, and the light effect template option bar, where the light source indicator is used to indicate the illumination direction of the light source in the shooting scene and the light intensity indicator is used to indicate the light intensity of the light source. In response to a detected user operation acting on the light source indicator, the display position of the light source indicator and the picture displayed in the second display area are updated; in response to a detected user operation acting on the light intensity indicator, the display of the light intensity indicator and the picture displayed in the second display area are updated; in response to a detected user operation acting on the light effect template option bar, the display of the light effect template option bar and the picture displayed in the second display area are updated.
  • In a fourth aspect, an embodiment of the present application provides a computer storage medium, including computer instructions, which, when run on an electronic device, cause the electronic device to perform the photographing method provided by the first aspect or any implementation of the first aspect of the embodiments of the present application.
  • In a fifth aspect, an embodiment of the present application provides a computer program product which, when run on an electronic device, causes the electronic device to perform the photographing method provided by the first aspect or any implementation of the first aspect of the embodiments of the present application.
  • The electronic device provided in the second aspect, the computer storage medium provided in the fourth aspect, and the computer program product provided in the fifth aspect are all used to execute the photographing method provided in the first aspect.
  • For the beneficial effects that can be achieved, refer to the beneficial effects of the photographing method provided in the first aspect; details are not repeated here.
  • FIG. 1A is a schematic structural diagram of an electronic device provided by an embodiment of this application.
  • FIG. 1B is a schematic structural diagram of a 3D sensing module provided by an embodiment of this application.
  • FIG. 1C is a block diagram of a software structure of an electronic device provided by an embodiment of this application.
  • FIG. 2 is a schematic diagram of a user interface provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of another user interface involved in an embodiment of the present application.
  • FIGS. 4 to 5 are schematic diagrams of an embodiment of a user interface provided by embodiments of the present application.
  • FIG. 6 is a schematic diagram of another embodiment of a user interface provided by an embodiment of the present application.
  • FIGS. 7-8 are schematic diagrams of another embodiment of a user interface provided by embodiments of the present application.
  • FIG. 9 is a schematic diagram of another user interface provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of another user interface provided by an embodiment of this application.
  • FIG. 11 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 12 is a schematic flowchart of a method for rendering a face light effect provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a result of portrait segmentation provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of the facial feature segmentation result provided by an embodiment of the present application.
  • FIG. 15 is a schematic flowchart of an overall light effect rendering method provided by an embodiment of this application.
  • FIGS. 16 to 22 are schematic diagrams of the flow of hardware driver interaction within the electronic device.
  • FIG. 24 is a schematic diagram of a flow of hardware driver interaction within an electronic device.
  • An embodiment of the present application provides an image processing method, which can be applied to an electronic device to process a picture taken by a camera application.
  • When the first shooting mode is turned on, the electronic device can recommend a suitable light effect template to the user according to the shooting scene, which reduces user operations and improves the use efficiency of the mobile phone. Further, the electronic device can also combine depth data to perform light effect rendering on the picture taken by the camera application, so as to enhance the stereoscopic effect of the picture.
  • The electronic devices involved in the embodiments of the present application may be mobile phones, tablet computers, desktop computers, laptop computers, notebook computers, ultra-mobile personal computers (UMPC), handheld computers, netbooks, personal digital assistants (PDA), wearable electronic devices, virtual reality devices, and the like.
  • First shooting mode: a shooting mode set when the subject is a person, used to highlight the person and enhance the beauty of the person in the captured picture.
  • the electronic device can use a larger aperture to keep the depth of field shallower to highlight the person, and improve the color effect through a specific algorithm to optimize the person's skin color.
  • the electronic device can also turn on the flash to perform illumination compensation.
  • Electronic devices can provide a variety of shooting modes.
  • The shooting parameters in different shooting modes, such as aperture size, shutter speed, and sensitivity (ISO), are different, and the processing algorithms applied to the captured pictures are also different.
  • the first shooting mode may be referred to as a portrait shooting mode. This application does not limit the naming of the first shooting mode.
  • Light effect template: a collection of light effect parameters that can be used to process the pictures taken by the user in the first shooting mode.
  • The light effect parameter set may include one or more of the following parameters: fusion parameters of the diffuse reflection layer, the highlight layer, and the shadow layer; fusion parameters of the background part of the RGB image and the projected texture layer of the background in the overall light effect rendering; the color (pixel value) of the projected texture; the stretch value of the projected texture; the projection position of the texture pattern; the projection direction; and fusion parameters of the projected texture layer of the portrait, the face light effect rendering result, and the face part of the RGB image.
  • the parameters listed above are only exemplary descriptions.
  • the set of light effect parameters may further include other parameters, which is not limited in the embodiments of the present application.
  • The electronic device may provide two or more light effect templates in the first shooting mode, and different light effect templates correspond to different sets of light effect parameters. By processing pictures with different light effect templates, the electronic device can obtain pictures with different effects.
  • the light effect template may be a template such as soft light, theater light, church light, tree shadow light, window shadow light, and dual color light.
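  • As a minimal sketch of the recommendation step, a mapping table from recognized scene labels to template names could be consulted; the scene labels and pairings below are invented for illustration, and the application does not fix any particular table:

        # Hypothetical scene-to-template mapping used to pick the light effect template to highlight.
        SCENE_TO_TEMPLATE = {
            "indoor stage": "theater light",
            "church interior": "church light",
            "outdoor under trees": "tree shadow light",
            "indoor by a window": "window shadow light",
        }

        def recommend_template(recognized_scene: str, default: str = "soft light") -> str:
            """Return the light effect template option to highlight for the recognized shooting scene."""
            return SCENE_TO_TEMPLATE.get(recognized_scene, default)

        # Example: highlight "tree shadow light" when the scene recognizer reports "outdoor under trees".
        print(recommend_template("outdoor under trees"))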
  • Light effect rendering: a method of processing pictures that can make the pictures show a three-dimensional effect.
  • the light effect rendering in the embodiment of the present application may include light effect rendering on the human face, or the light effect rendering in the embodiment of the present application may include light effect rendering on the human face and overall light effect rendering. The detailed process of light effect rendering can be described in the subsequent embodiments.
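  • Read as a skeleton rather than as the disclosed algorithm, this two-stage flow could be organized as follows; every helper below is a stub standing in for a step named in this application, and only the call order is meant to be informative:

        # Skeleton sketch of the light effect rendering flow (stub bodies are placeholders).
        import numpy as np

        def estimate_light_direction(rgb):                     # stub: illumination direction identified from the picture
            return np.array([0.0, 0.0, 1.0])

        def segment_portrait(rgb, depth):                      # stub: portrait / background segmentation mask
            return np.ones(rgb.shape[:2])

        def render_face_light_effect(rgb, depth, light_dir, template):     # stub: face light effect rendering
            return rgb

        def render_overall_light_effect(rgb, depth, mask, face_result, light_dir, template):   # stub
            return face_result

        def render_light_effect(rgb, depth, template):
            light_dir = estimate_light_direction(rgb)
            mask = segment_portrait(rgb, depth)
            face_result = render_face_light_effect(rgb, depth, light_dir, template)
            return render_overall_light_effect(rgb, depth, mask, face_result, light_dir, template)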
  • FIG. 1A shows a schematic structural diagram of the electronic device 10.
  • The electronic device 10 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, a 3D sensing module 196, and the like.
  • The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 10.
  • the electronic device 10 may include more or fewer components than shown, or combine some components, or split some components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • The processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 10.
  • the controller can generate the operation control signal according to the instruction operation code and the timing signal to complete the control of fetching instructions and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory.
  • The memory may store instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instructions or data again, they can be called directly from the memory, which avoids repeated access, reduces the waiting time of the processor 110, and thus improves system efficiency.
  • the processor 110 may include one or more interfaces.
  • The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the interface connection relationship between the modules illustrated in the embodiments of the present invention is only a schematic description, and does not constitute a limitation on the structure of the electronic device 10.
  • the electronic device 10 may also use different interface connection methods in the foregoing embodiments, or a combination of multiple interface connection methods.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive wireless charging input through the wireless charging coil of the electronic device 10. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and / or the charging management module 140, and supplies power to the processor 110, internal memory 121, external memory, display screen 194, camera 193, wireless communication module 160, and the like.
  • the power management module 141 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 141 may also be disposed in the processor 110.
  • the power management module 141 and the charging management module 140 may also be set in the same device.
  • the wireless communication function of the electronic device 10 can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 10 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G / 3G / 4G / 5G and the like applied to the electronic device 10.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and so on.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1 and filter, amplify, etc. the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor and convert it to electromagnetic wave radiation through the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110, and may be set in the same device as the mobile communication module 150 or other functional modules.
  • The wireless communication module 160 can provide wireless communication solutions applied to the electronic device 10, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters electromagnetic wave signals, and transmits the processed signals to the processor 110.
  • the wireless communication module 160 may also receive the signal to be transmitted from the processor 110, frequency-modulate it, amplify it, and convert it to electromagnetic wave radiation through the antenna 2.
  • the antenna 1 of the electronic device 10 and the mobile communication module 150 are coupled, and the antenna 2 and the wireless communication module 160 are coupled so that the electronic device 10 can communicate with the network and other devices through wireless communication technology.
  • The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or the satellite-based augmentation systems (SBAS).
  • the electronic device 10 realizes a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations, and is used for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • In the embodiments of the present application, the GPU can be used for the following calculations: the highlight and diffuse reflection models in the face light effect rendering process; the occlusion relationship between the light source and each mesh patch; the fusion result of the highlight layer, the diffuse reflection layer, and the shadow layer; the Gaussian blur of the background part of the RGB image during the overall light effect rendering; the projected texture coordinates of the vertices of each mesh; the fusion result of the portrait projected texture layer, the face light effect rendering result, and the original RGB image in the portrait area; and the fusion result of the background projected texture layer and the Gaussian-blurred background in the background area.
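  • The application does not name particular reflection models; as a hedged illustration, a Lambert diffuse term and a Blinn-Phong highlight term are common choices for this kind of per-pixel shading, and the resulting layers can then be fused with per-template weights:

        # Generic Lambert / Blinn-Phong shading and layer fusion (an illustration, not the patented model).
        import numpy as np

        def shade_layers(normals, light_dir, view_dir, shininess=32.0):
            """normals: HxWx3 unit normals; light_dir, view_dir: 3-vectors. Returns diffuse and highlight maps."""
            l = light_dir / np.linalg.norm(light_dir)
            v = view_dir / np.linalg.norm(view_dir)
            h = (l + v) / np.linalg.norm(l + v)                        # half vector for the Blinn-Phong highlight
            diffuse = np.clip(np.einsum("hwc,c->hw", normals, l), 0.0, 1.0)
            highlight = np.clip(np.einsum("hwc,c->hw", normals, h), 0.0, 1.0) ** shininess
            return diffuse, highlight

        def fuse_layers(diffuse, highlight, shadow, weights=(0.6, 0.3, 0.1)):
            """Weighted fusion of the diffuse, highlight and shadow layers (weights taken from the template)."""
            wd, wh, ws = weights
            return np.clip(wd * diffuse + wh * highlight - ws * shadow, 0.0, 1.0)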
  • the display screen 194 is used to display pictures, videos and the like.
  • the display screen 194 includes a display panel.
  • The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 10 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the display screen 194 may be used to display pictures to be taken, pictures rendered with light effects, and the like.
  • the electronic device 10 can realize a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP processes the data fed back by the camera 193. For example, when taking a picture, the shutter is opened, the light is transmitted to the camera photosensitive element through the lens, and the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, which is converted into a picture visible to the naked eye.
  • The ISP can also perform algorithm optimization on the noise, brightness, and skin color of pictures, and can also optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be disposed in the camera 193.
  • the camera 193 is used to capture still pictures or videos.
  • the object generates an optical picture through the lens and projects it onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (charge coupled device, CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital picture signal.
  • the ISP outputs the digital picture signal to the DSP for processing.
  • DSP converts digital picture signals into standard RGB, YUV and other format picture signals.
  • the electronic device 10 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • The cameras 193 are divided into two types: front cameras and rear cameras. The front camera is a camera located on the front of the electronic device 10, and the rear camera is a camera located on the back of the electronic device 10.
  • The digital signal processor is used to process digital signals. In addition to digital picture signals, it can also process other digital signals. For example, when the electronic device 10 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the energy at that frequency point.
  • Video codec is used to compress or decompress digital video.
  • The electronic device 10 may support one or more video codecs. In this way, the electronic device 10 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • NPU is a neural-network (NN) computing processor.
  • the NPU can realize applications such as intelligent cognition of the electronic device 10, for example: picture recognition, face recognition, voice recognition, text understanding, and the like.
  • In the embodiments of the present application, the electronic device 10's function of intelligently recognizing the shooting scene can be realized by the NPU, and its function of intelligently recognizing the illumination direction can also be realized by the NPU.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 10.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes instructions stored in the internal memory 121 to execute various functional applications and data processing of the electronic device 10.
  • the internal memory 121 may include a storage program area and a storage data area.
  • The storage program area may store an operating system and application programs required by at least one function (such as a sound playback function, an image playback function, etc.).
  • the storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 10 and the like.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and so on.
  • In the embodiments of the present application, the internal memory 121 can store pictures taken by the camera application, and can also be used to store a mapping table between shooting scenes and matching light effect templates, as well as shooting scene recognition results, face illumination direction recognition results, portrait segmentation results, generated mesh data, facial feature segmentation results, etc.
  • the electronic device 10 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, and an application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and also used to convert analog audio input into digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
  • the speaker 170A also called “speaker” is used to convert audio electrical signals into sound signals.
  • the electronic device 10 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B also known as "handset" is used to convert audio electrical signals into sound signals.
  • the electronic device 10 answers a phone call or voice message, the voice can be received by bringing the receiver 170B close to the ear.
  • The microphone 170C, also called a "mic", is used to convert sound signals into electrical signals.
  • The user can input a sound signal into the microphone 170C by speaking with the mouth close to the microphone 170C.
  • The electronic device 10 may be provided with at least one microphone 170C. In other embodiments, the electronic device 10 may be provided with two microphones 170C, which, in addition to collecting sound signals, can also implement a noise reduction function. In other embodiments, the electronic device 10 may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the headset interface 170D is used to connect wired headsets.
  • The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the gyro sensor 180B may be used to determine the movement posture of the electronic device 10.
  • the air pressure sensor 180C is used to measure air pressure.
  • the magnetic sensor 180D includes a Hall sensor.
  • the acceleration sensor 180E can detect the magnitude of acceleration of the electronic device 10 in various directions (generally three axes).
  • the distance sensor 180F is used to measure the distance.
  • the electronic device 10 can measure the distance by infrared or laser. In some embodiments, when shooting scenes, the electronic device 10 may use the distance sensor 180F to measure distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • LED light emitting diode
  • a light detector such as a photodiode.
  • the ambient light sensor 180L is used to sense the brightness of ambient light.
  • the electronic device 10 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 10 is in a pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the temperature sensor 180J is used to detect the temperature.
  • Touch sensor 180K also known as "touch panel”.
  • the touch sensor 180K may be provided on the display screen 194, and the touch sensor 180K and the display screen 194 constitute a touch screen, also called a "touch screen”.
  • the touch sensor 180K is used to detect a touch operation acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation may be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 10, which is different from the location where the display screen 194 is located.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the key 190 includes a power-on key, a volume key, and the like.
  • the key 190 may be a mechanical key. It can also be a touch button.
  • the electronic device 10 can receive key input and generate key signal input related to user settings and function control of the electronic device 10.
  • the motor 191 may generate a vibration prompt.
  • the motor 191 can be used for vibration notification of incoming calls and can also be used for touch vibration feedback.
  • touch operations applied to different applications may correspond to different vibration feedback effects.
  • The motor 191 can also produce different vibration feedback effects for touch operations in different application scenarios (for example, time reminders, receiving messages, alarm clocks, games, etc.).
  • The touch vibration feedback effect can also be customized.
  • the indicator 192 may be an indicator light, which may be used to indicate a charging state, a power change, and may also be used to indicate a message, a missed call, a notification, and the like.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be inserted into or removed from the SIM card interface 195 to achieve contact and separation with the electronic device 10.
  • the electronic device 10 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • the same SIM card interface 195 can insert multiple cards at the same time. The types of the multiple cards may be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 can also be compatible with external memory cards.
  • the electronic device 10 interacts with the network through the SIM card to realize functions such as call and data communication.
  • the electronic device 10 uses eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 10 and cannot be separated from the electronic device 10.
  • the 3D sensing module 196 can acquire depth data, and the depth data acquired during the photographing process can be passed to the GPU to perform 3D rendering of the image acquired by the camera 193.
  • The 3D sensing module 196 may be a time-of-flight (TOF) 3D sensing module or a structured light 3D sensing module, and may be disposed on the top of the electronic device 10, such as the "bangs" position of the electronic device 10 (i.e., area AA shown in FIG. 1B). In addition to the 3D sensing module 196, the area AA may also include a camera 193, a proximity light sensor 180G, a receiver 170B, a microphone 170C, and the like.
  • In the embodiments of the present application, the case where a structured light 3D sensing module 196 is integrated in the electronic device 10 is used as an example for description.
  • The structured light 3D sensing module 196 arranged in the electronic device 10 includes modules such as an infrared light camera 196-1 and a dot matrix projector 196-2.
  • the dot matrix projector 196-2 includes a high-power laser (such as VCSEL) and diffractive optical components, etc., that is, a structured light emitter, used to emit a "structured" infrared laser light using a high-power laser and project it on an object surface.
  • the process of acquiring depth data by the structured light 3D sensing module 196 described above is: when the processor 110 detects that the current shooting mode is the portrait mode, the dot matrix projector 196-2 is controlled to start.
  • The high-power laser in the dot matrix projector 196-2 emits infrared laser light. Through the action of structures such as the diffractive optical components in the dot matrix projector 196-2, this infrared laser light generates a large number (for example, about 30,000) of "structured" light spots, which are projected onto the surface of the shooting target.
  • The array formed by these structured light spots is reflected at different positions on the surface of the shooting target; the infrared light camera 196-1 captures the structured light spots reflected from the surface of the shooting target, thereby obtaining depth data of different positions on the surface, and then uploads the acquired depth data to the processor 110.
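  • Structured-light systems of this kind commonly recover depth by triangulating each captured spot against the projector-camera baseline; the relation below is a generic textbook sketch under a pinhole model, with focal length f, baseline b, and spot disparity d as assumed symbols that do not appear in the application:

        # Generic triangulation sketch (an assumption, not the application's depth algorithm).
        def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
            """Return depth in meters for one matched structured-light spot."""
            if disparity_px <= 0:
                raise ValueError("disparity must be positive")
            return f_px * baseline_m / disparity_px

        # Example: f = 1400 px, baseline = 2.5 cm, observed disparity = 35 px  ->  depth = 1.0 m
        print(depth_from_disparity(1400.0, 0.025, 35.0))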
  • In other embodiments, the depth data acquired by the structured light 3D sensing module 196 can also be used for face recognition, for example, recognizing the owner's face when unlocking the electronic device 10.
  • The structured light 3D sensing module 196 may also include a floodlight illuminator, an infrared image sensor, the aforementioned proximity light sensor 180G, and other modules.
  • floodlight illuminators include low-power lasers (such as VCSEL) and homogenizers, etc., which are used to emit "unstructured" infrared laser light using a low-power laser and project it on the surface of an object.
  • the proximity light sensor 180G senses that the object is approaching the electronic device 10, thereby sending a signal to the processor 110 of the electronic device 10 that the object is approaching.
  • the processor 110 receives the signal that the object is approaching, and controls the flood illuminator to start.
  • the low-power laser in the flood illuminator projects infrared laser light onto the surface of the object.
  • the object surface reflects the infrared laser light projected by the floodlight illuminator.
  • the infrared camera captures the infrared laser light reflected by the object surface, thereby acquiring image information on the object surface, and then uploading the acquired image information to the processor 110 .
  • the processor 110 determines whether the object approaching the electronic device 10 is a human face according to the uploaded image information. When the processor 110 determines that the object close to the electronic device 10 is a human face, the dot matrix projector 196-2 is controlled to start.
  • the subsequent specific implementation is similar to the foregoing specific implementation when the current shooting mode is detected as the portrait mode, and depth data is acquired and uploaded to the processor 110.
  • the processor 110 compares the uploaded depth data with the user's facial feature data pre-stored in the electronic device 10, and recognizes whether the face close to the electronic device 10 is the face of the owner of the electronic device 10. If yes, the electronic device 10 is controlled to unlock; if no, the electronic device 10 is controlled to remain locked.
  • the software system of the electronic device 10 may adopt a layered architecture, event-driven architecture, micro-core architecture, micro-service architecture, or cloud architecture.
  • the embodiment of the present invention takes an Android system with a layered architecture as an example to exemplarily explain the software structure of the electronic device 10.
  • FIG. 1C is a block diagram of the software structure of the electronic device 10 according to an embodiment of the present invention.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor.
  • the layers communicate with each other through a software interface.
  • the Android system is divided into four layers, from top to bottom are the application layer, the application framework layer, the Android runtime and the system library, and the kernel layer.
  • the application layer may include a series of application packages.
  • the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, and short message.
  • the application framework layer provides an application programming interface (API) and programming framework for applications at the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and so on.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, intercept the screen, etc.
  • Content providers are used to store and retrieve data, and make these data accessible to applications.
  • the data may include videos, pictures, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls for displaying text and controls for displaying pictures.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes an SMS notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication function of the electronic device 10. For example, the management of call status (including connection, hang up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear after a short stay without user interaction.
  • Android Runtime includes core library and virtual machine. Android runtime is responsible for the scheduling and management of the Android system.
  • the core library contains two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in the virtual machine.
  • the virtual machine converts the Java files of the application layer and the application framework layer into binary files and executes them.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library may include multiple functional modules. For example: surface manager (surface manager), media library (Media library), 3D graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still picture files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to realize 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least the display driver, camera driver, audio driver, and sensor driver.
  • the following describes the workflow of the software and hardware of the electronic device 10 in combination with the usage scene of capturing a photograph.
  • when the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into original input events (including touch coordinates, time stamps and other information of touch operations).
  • the original input event is stored in the kernel layer.
  • the application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Take the touch operation being a click operation, and the control corresponding to the click operation being the camera application icon, as an example.
  • the camera application calls the interface of the application framework layer to start the camera application, and then starts the camera driver by calling the kernel layer.
  • the camera 193 captures a still image or video.
  • FIG. 2 exemplarily shows a user interface for an application menu on the electronic device 10.
  • the user interface 20 in FIG. 2 may include a status bar 202, a time component icon 204 and a weather component icon 203, icons of multiple applications such as a camera icon 201, a WeChat icon 208, a settings icon 209, an album icon 207, a Weibo icon 206, an Alipay icon 205 and the like, and the interface 20 may further include a page indicator 210, a phone icon 211, a short message icon 212, and a contact icon 213. Wherein:
  • the status bar 202 may include: an operator indicator (for example, the operator's name "China Mobile"), one or more signal strength indicators of a wireless fidelity (Wi-Fi) signal, one or more signal strength indicators of a mobile communication signal (also called a cellular signal), and a battery status indicator.
  • the time component icon 204 may be used to indicate the current time, such as date, day of the week, hour and minute information, and so on.
  • the weather component icon 203 can be used to indicate weather types, such as cloudy to sunny, light rain, etc., and can also be used to indicate information such as air temperature.
  • the page indicator 210 can be used to indicate which page of the application the user is currently browsing. Users can slide the area of multiple application icons left and right to browse the application icons in other pages.
  • FIG. 2 only exemplarily shows the user interface on the electronic device 10, and should not constitute a limitation on the embodiments of the present application.
  • the electronic device 10 may detect a user operation acting on the camera icon 201, and in response to the operation, the electronic device 10 may display a user interface for taking pictures.
  • the user interface may be the user interface 30 involved in the embodiment of FIG. 3. That is to say, the user can click the camera icon 201 to open the user interface for taking pictures.
  • FIG. 3 exemplarily shows a user interface for image capturing.
  • the user interface may be a user interface that the user opens by clicking the camera icon 201 in the embodiment of FIG. 2, but it is not limited to this; the user may also open a user interface for taking pictures in other applications, for example, the user clicks a shooting control in WeChat to open the user interface for taking pictures.
  • the user interface 30 for taking pictures may include: a framing frame 301, a shooting control 302, a shooting mode list 303, a control 304, and a control 305. Wherein:
  • the framing frame 301 can be used to display the picture acquired by the camera 193.
  • the electronic device can refresh the display content in real time.
  • the camera 193 for acquiring pictures may be a rear camera or a front camera.
  • the shooting control 302 can be used to monitor user operations that trigger shooting.
  • the electronic device may detect a user operation on the shooting control 302 (such as a click operation on the shooting control 302), and in response to the operation, the electronic device 10 may determine the captured picture and display the captured picture in the control 305. That is to say, the user can click the shooting control 302 to trigger shooting.
  • the shooting control 302 may be a button or other forms of controls.
  • One or more shooting mode options may be displayed in the shooting mode list 303.
  • the electronic device 10 can detect the user operation acting on the shooting mode option, and in response to the operation, the electronic device 10 can turn on the shooting mode selected by the user.
  • the electronic device can also detect a sliding operation in the shooting mode list 303 (such as a sliding operation to the left or right), and in response to the operation, the electronic device 10 can switch the shooting mode options displayed in the shooting mode list 303, so that the user can browse more shooting mode options.
  • the shooting mode options may be icons or other forms of options.
  • the shooting mode list 303 may include: portrait shooting mode icon 303A, photo shooting mode icon 303B, video shooting mode icon 303C, large aperture shooting mode icon 303D, night scene shooting mode icon 303E, slow motion shooting mode icon 303F.
  • the control 304 may be used to monitor user operations that trigger camera switching.
  • the electronic device 10 can detect a user operation acting on the control 304 (such as a click operation on the control 304), and in response to the operation, the electronic device 10 can switch the camera (such as switching from the rear camera to the front camera, or from the front camera to the rear camera).
  • the control 305 can be used to monitor user operations that trigger the opening of the album.
  • the electronic device 10 may detect a user operation (such as a click operation on the control 305) acting on the control 305, and in response to the operation, the electronic device 10 may open an album and display the newly saved picture.
  • FIG. 4 exemplarily shows a UI embodiment of the user interface 30 for the user to select the portrait shooting mode.
  • the electronic device 10 can detect a user operation acting on the portrait shooting mode option in the shooting mode list 303 (such as a click operation on the portrait shooting mode icon 303A), and in response to the operation, the electronic device 10 can Turn on the first shooting mode.
  • the electronic device 10 may also update the display state of the portrait shooting mode option, and the updated display state may indicate that the portrait shooting mode has been selected.
  • the updated display state may be highlighting the text information "Portrait" corresponding to the portrait shooting mode icon 303A.
  • the updated display state can also present other interface expressions, such as the font of the text information "Portrait" becoming larger, the text information "Portrait" being framed, the text information "Portrait" being underlined, the color of the icon 303A deepening, etc.
  • the electronic device 10 may also display the control 306 in the user interface 30.
  • the control 306 can be used to monitor user operations that open the option bar of the light effect template.
  • the electronic device 10 may detect a user operation acting on the control 306, and in response to the operation, the electronic device 10 may display a light effect template option bar 307 in the user interface 30, refer to FIG. 5.
  • the light effect template option bar 307 includes two or more light effect template options.
  • the light effect template option in the light effect template option bar 307 can be used to monitor the user's selection operation. Specifically, the electronic device may detect a user operation (such as a click operation on "light effect 1") that acts on a light effect template option in the light effect template option bar 307, and in response to the operation, the electronic device may determine that the selected light effect template is the light effect template used for processing the captured picture.
  • the selected light effect template may be the light effect template corresponding to the light effect template option on which the operation acts. For example, if the operation is an operation of clicking "light effect 1", the selected light effect template is light effect template 1 corresponding to "light effect 1".
  • the electronic device 10 may update the display state of the selected light effect template option, and the updated display state may indicate that the light effect template has been selected.
  • the updated display state may be to highlight the text information "light effect 1" corresponding to the selected light effect template icon.
  • the updated display state can also present other interface expressions, such as the font of the text information "light effect 1" becoming larger, the text information "light effect 1" being framed or underlined, the color of the selected light effect template icon being darkened, etc.
  • the selected light effect template is called a first light effect template.
  • the electronic device 10 can also detect a sliding operation (such as a left or right sliding operation) in the light effect template option bar 307, and in response to the operation, the electronic device 10 can switch the display in the light effect template option bar 307 Light effect template options, so that users can browse more light effect template options.
  • the electronic device 10 may display the light effect template option in the light effect template option bar according to the current shooting scene.
  • the light effect template option bar is used to monitor user operations for selecting the light effect template. Refer to FIG. 6.
  • not limited to this, in response to the operation, the electronic device 10 may also directly display the light effect template option bar in the user interface 30, without displaying the control 306 and without monitoring, through the control 306, the user operation of opening the light effect template option bar.
  • FIG. 6 exemplarily shows a UI embodiment in which the electronic device 10 recommends options of light effect templates according to the current shooting scene.
  • before recommendation, the arrangement order of the light effect template options included in the light effect template option bar 307 from left to right is: light effect 1, light effect 2, light effect 3, light effect 4, light effect 5.
  • after recommendation according to the current shooting scene, as shown in FIG. 6, the arrangement order of the light effect template options included in the light effect template option bar 307 from left to right is: light effect 4, light effect 1, light effect 2, light effect 3, light effect 5.
  • the electronic device 10 can use the light effect template 4 matching the shooting scene "white clouds" as the recommended light effect template, and display the corresponding option "Light Effect 4" (307A) at the first position on the left side of the light effect template option bar 307.
  • the example is only an embodiment provided by the present application, and is not limited thereto, and other embodiments may also be possible.
  • the electronic device 10 can recommend the light effect template option according to the current shooting scene.
  • the display state of the light effect template option matching the current shooting scene is the first display state.
  • the first display state can be used to highlight the option, prompting the user that the template corresponding to the option is a template suitable for the current shooting scene, which is convenient for the user to quickly identify and select the option, and can effectively recommend the option to the user.
  • the first display state can be achieved in one or more of the following ways: the display position of the option in the option bar is the first position (such as the first display position on the left, the display position in the middle, etc.), the option is Highlighted, the font corresponding to the text information of the option (such as "Light Effect 1") is a large font, and the icon corresponding to the option presents a dynamic change (for example, a heartbeat effect).
  • FIG. 7 exemplarily shows a UI embodiment in which the user interface 30 is used for a user to take a picture.
  • the option "light effect 4" is highlighted, indicating that the electronic device 10 has determined that the light effect template 4 corresponding to the option "light effect 4" is the first light effect template.
  • the electronic device 10 can detect a user operation acting on the shooting control 302 (such as a click operation on the shooting control 302), and in response to this operation, the electronic device 10 takes a picture and processes the picture using the light effect parameters corresponding to the first light effect template.
  • the picture taken by the electronic device 10 may be a picture taken by the electronic device 10 at the moment when the above user operation is detected.
  • the picture taken by the electronic device 10 may also be a series of pictures taken by the electronic device 10 within a period of time before the time when the user operation is detected. This period of time may be, for example, 5 ms, 10 ms, or the like.
  • the electronic device 10 may also display a thumbnail of the picture in the control 305, refer to FIG. 8.
  • the thumbnail of the picture contains fewer pixels than the picture.
  • the electronic device 10 can detect a user operation (such as a click operation on the control 305) acting on the control 305.
  • the electronic device 10 may display the user interface 40 of the picture processed by the light effect parameter of the first light effect template.
  • the user interface 40 may refer to FIG. 9. That is to say, the user can click on the control 305 to open the user interface 40 for displaying pictures.
  • the user can also open the user interface for displaying pictures in other applications; for example, the user clicks the icon 207 of the album application in the interface 20 to open the user interface for displaying pictures, or, for another example, the user clicks a photo control in WeChat to open the user interface for displaying pictures.
  • FIGS. 9-10 illustrate the user interface 40 by way of example.
  • the user interface 40 may include: a picture content display area 401 and a control 402.
  • the picture content display area 401 is used to display the picture generated after the light effect parameter processing of the first light effect template in the first shooting mode, and the picture may be referred to as a first picture.
  • the electronic device 10 may detect a user operation (such as a click operation on the control 402) acting on the control 402, and in response to the operation, the electronic device 10 may also display in the user interface 40: a light source indicator 403, a light intensity adjuster 404, the light effect template option bar 405, a cancel control 406 and a save control 407, refer to FIG. 10. Wherein:
  • the light source indicator 403 is a virtual light source indicator set according to the light direction, and can be used to indicate the light direction of the actual light source in the shooting scene.
  • the electronic device 10 can recognize the lighting direction according to the image of the human face displayed in the view frame 301.
  • the specific way of identifying the light direction according to the image of the human face will be described in detail in the subsequent embodiment of FIG. 17, and will not be described in detail here.
  • the electronic device 10 may detect a user operation (such as a sliding operation on the light source indicator 403) acting on the light source indicator 403, and in response to the operation, the electronic device 10 may update the display light source indicator 403.
  • the electronic device 10 may also update the picture displayed in the picture content display area 401 according to the light direction indicated by the updated light source indicator 403.
  • the light intensity adjuster 404 can be used to indicate the light intensity of the light source.
  • the electronic device 10 may detect a first user operation (such as a left-slide operation on the light intensity adjuster 404) acting on the light intensity adjuster 404, and in response to the operation, the electronic device 10 may update the displayed light intensity adjuster 404, and the light intensity indicated by the updated light intensity adjuster 404 becomes weaker. In response to this operation, the electronic device 10 may also update the picture displayed in the picture content display area 401 according to the weakened light intensity.
  • the electronic device 10 may also detect a second user operation (such as a right-slide operation on the light intensity adjuster 404) acting on the light intensity adjuster 404, and in response to the operation, the electronic device 10 updates the displayed light intensity adjuster 404, and the light intensity indicated by the updated light intensity adjuster 404 becomes stronger.
  • in response to this operation, the electronic device 10 may also update the picture displayed in the picture content display area 401 according to the increased light intensity.
  • not limited to the horizontal light intensity adjuster shown at 404, there may be other forms of light intensity adjusters, such as a vertical light intensity adjuster, or a light intensity adjuster presented in the form of plus and minus signs; the embodiments of the present application do not limit this.
  • not limited to the left-slide operation, the first user operation may also be another sliding or clicking user operation. Not limited to the right-slide operation, the second user operation may also be a slide-up or clicking user operation. The embodiments of the present application do not limit this.
  • the light effect template option bar 405 may include two or more light effect template options, and the option of the first light effect template in the light effect template option bar 405 is specially marked (as shown by the option "light effect 4" in FIG. 10), indicating that the picture displayed in the picture content display area 401 has been processed by the light effect parameters corresponding to the first light effect template.
  • the electronic device 10 can detect a user operation (such as a click operation on the option "light effect 3") acting on a second light effect template option in the light effect template option bar 405, and in response to the operation, the electronic device 10 updates the display states of the second light effect template option and the first light effect template option.
  • the second light effect template is other light effect templates except the first light effect template.
  • the display state of the updated second light effect template option may indicate that the second light effect template has been selected, and the updated display state of the first light effect template option may indicate that the first light effect template has been deselected.
  • the display state of the updated second light effect template option may be the same as the display state of the first light effect template option before update. For details, reference may be made to the description in the embodiment of FIG. 6, which is not repeated here.
  • the display state of the updated first light effect template option may be consistent with the display state of the second light effect template option before update.
  • the electronic device 10 may also update the picture in the displayed picture content display area 401 according to the light effect parameter corresponding to the second light effect template.
  • the adjustment of the light direction by the light source indicator 403, the adjustment of the light intensity by the light intensity adjuster 404, and the switching of the light effect template by the light effect template option bar 405 can be called light effect editing.
  • the cancel control 406 may be used to monitor user operations that trigger the cancellation of light effect editing.
  • the electronic device 10 can detect a user operation (such as a click operation on the cancel control 406) acting on the cancel control 406, and in response to the operation, the electronic device 10 can cancel the light effect editing of the first picture and update the picture displayed in the picture content display area 401 to the first picture.
  • that is to say, the user can click the cancel control 406 to trigger cancellation of the light effect editing of the first picture.
  • the save control 407 can be used to monitor user operations that trigger to save the second picture.
  • the second picture is a picture displayed in the picture content display area 401.
  • the electronic device 10 can detect a user operation (such as a click operation on the save control 407) acting on the save control 407, and in response to the operation, the electronic device 10 can save the picture displayed in the picture content display area 401.
  • the second picture may be a picture generated after editing the light effect of the first picture. That is to say, the user can click the save control 407 to trigger saving of the second picture generated after editing the light effect of the first picture.
  • FIG. 11 is a schematic flowchart of an image processing method provided by the present application.
  • the image processing methods provided in this application are mainly divided into three major processes: taking pictures, rendering of light effects, and editing of light effects.
  • the following describes these processes with the electronic device as the execution subject:
  • the photographing process mainly includes the following S101-S105.
  • S101 The electronic device starts the first shooting mode.
  • the manner in which the electronic device 10 turns on the first shooting mode may include but is not limited to the following:
  • the electronic device 10 can detect the user operation acting on the portrait shooting mode option in the user interface 30 through the touch sensor 180K to turn on the first shooting mode.
  • the electronic device 10 can detect the user operation acting on the control 304 in the user interface 30 through the touch sensor 180K to turn on the first shooting mode. That is to say, when the electronic device 10 is switched from the rear camera to the front camera, the electronic device 10 can start the first shooting mode.
  • the electronic device 10 can also determine whether there is a human face in the framing frame 301, and if so, further determine whether the human face meets the requirements. If there is no face, a prompt message such as "no face detected” is displayed in the framing frame 301. If it is determined that the face does not meet the requirements, a prompt message such as "a face that meets the requirements is not detected” is displayed in the framing frame 301.
  • the face that meets the requirements can be one or any combination of the following: a single face, the angle of the face does not exceed a first threshold, the ratio of the area of the face in the framing frame 301 to the total area of the picture to be taken is greater than or equal to a second threshold. The detection method of the face angle is described in subsequent embodiments, and will not be described in detail here.
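  • as an illustrative aside (not part of the original description), the following Python sketch shows one possible way to implement the above qualification check; the threshold values and the structure of the detected-face data are assumptions chosen for illustration only.

```python
# Hypothetical sketch of the face-qualification check described above.
# The thresholds and the layout of `faces` (list of dicts with a yaw angle in
# degrees and a bounding-box width/height) are illustrative assumptions.
FIRST_THRESHOLD_DEG = 30       # assumed maximum allowed face angle
SECOND_THRESHOLD_RATIO = 0.1   # assumed minimum face-area / picture-area ratio

def face_meets_requirements(faces, picture_area):
    if len(faces) == 0:
        return False, "no face detected"
    if len(faces) != 1:
        return False, "a face that meets the requirements is not detected"
    face = faces[0]
    if abs(face["yaw_deg"]) > FIRST_THRESHOLD_DEG:
        return False, "a face that meets the requirements is not detected"
    width, height = face["bbox_wh"]
    if (width * height) / picture_area < SECOND_THRESHOLD_RATIO:
        return False, "a face that meets the requirements is not detected"
    return True, ""
```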
  • S102 The electronic device recognizes the shooting scene of the picture to be taken.
  • the electronic device may acquire the RGB data of the picture to be shot, input the RGB data of the picture to be shot into the first model, and output the identified shooting scene.
  • the first model is trained from RGB data of a large number of pictures of known shooting scenes.
  • the output result of the first model may be a binary character string.
  • the value of the character string represents a shooting scene, and the correspondence between the character string and the shooting scene may be stored in the internal memory 121 of the electronic device 10 in the form of a table. For example, 001 represents scene 1, 010 represents scene 2, 011 represents scene 3, 100 represents scene 4, and so on.
  • the electronic device 10 may search for the shooting scene corresponding to the character string in the table according to the character string output by the first model.
  • the number of digits of the character string can be determined according to the total number of types of shooting scenes.
  • the output form of the first model in the embodiment of the present application is exemplarily described, and there may be other output forms in a specific implementation, which is not limited in the embodiment of the present application.
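  • as an illustrative aside, the following Python sketch shows how such a binary character string output by the first model could be mapped to a shooting scene through a lookup table; the scene codes, scene names, and the predict interface are assumptions, not part of the original description.

```python
# Hypothetical sketch: map the first model's binary-string output to a shooting scene.
# The 3-bit codes and scene names mirror the example above and are placeholders.
SCENE_TABLE = {
    "001": "scene 1",
    "010": "scene 2",
    "011": "scene 3",
    "100": "scene 4",
}

def recognize_scene(first_model, rgb_data):
    code = first_model.predict(rgb_data)            # assumed interface returning e.g. "100"
    return SCENE_TABLE.get(code, "default scene")   # unknown codes fall back to a default
```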
  • the electronic device 10 can also recognize the lighting direction of the human face, and the recognition result of the lighting direction can be used in the subsequent light effect rendering process and light effect editing process.
  • the recognition process of the light direction is described in the subsequent embodiments, and will not be described in detail here.
  • S103 The electronic device displays an option bar of the light effect template according to the shooting scene of the picture to be taken.
  • the electronic device 10 may store a mapping relationship table between the shooting scene and the matching light effect template. After the electronic device 10 finds the light effect template matching the current shooting scene according to the mapping relationship table, it can set the display state of the option of the matched light effect template to the first display state; for details, refer to the description in the embodiment of FIG. 6, which is not repeated here.
  • the following exemplarily shows several mapping relationship tables.
  • the shooting scene and the matching light effect template are in a one-to-one correspondence. As shown in Table 1.
  • Table 1 The mapping relationship between the shooting scene and the matching light effect template
  • the option corresponding to the light effect template 1 is displayed as “light effect 1” in the interface 30, and the light effect template 2 to the light effect template 5 are similar and will not be described in detail.
  • a shooting scene in the mapping relationship table may correspond to multiple light effect templates with different matching degrees. Take a shooting scene corresponding to three light effect templates with different matching degrees (high, medium and low) as an example, as shown in Table 2.
  • the light effect template (high) in Table 2 represents a light effect template with a high degree of matching
  • the light effect template (middle) represents a light effect template with a medium degree of matching
  • the light effect template (low) represents a light effect template with a low degree of matching.
  • the electronic device 10 may search for three light effect templates matching the shooting scene according to Table 2, and display the options corresponding to the matched light effect templates at the front of the light effect template option bar in descending order of matching degree.
  • representing the relationship between the shooting scene and the matched light effect template in the form of a mapping relationship table is only an exemplary description, and there may be other forms in a specific implementation, which is not limited in the embodiments of the present application.
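  • as an illustrative aside, the following Python sketch shows one way to use a Table-2-style mapping to reorder the option bar so that matched options appear first, in descending order of matching degree; the scene name and the template assignments are placeholders, not part of the original description.

```python
# Hypothetical sketch of scene-to-template recommendation. Each scene maps to
# templates with high / medium / low matching degree; matched options are moved
# to the front of the option bar, from high to low matching degree.
SCENE_TO_TEMPLATES = {
    "white clouds": {"high": "light effect 4", "medium": "light effect 1", "low": "light effect 2"},
}

ALL_OPTIONS = ["light effect 1", "light effect 2", "light effect 3",
               "light effect 4", "light effect 5"]

def order_option_bar(scene):
    matched = SCENE_TO_TEMPLATES.get(scene, {})
    front = [matched[k] for k in ("high", "medium", "low") if k in matched]
    rest = [opt for opt in ALL_OPTIONS if opt not in front]
    return front + rest

# order_option_bar("white clouds")
# -> ['light effect 4', 'light effect 1', 'light effect 2', 'light effect 3', 'light effect 5']
```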
  • S104 The electronic device receives a user operation for selecting the first light effect template.
  • the user operation for selecting the first light effect template may be a user operation acting on the first light effect template option in the option bar of the light effect template, as shown in the click operation on the icon 307A in the embodiment of FIG. 6, here No details.
  • in response to the user operation, the first light effect template is turned on, so that after the electronic device 10 receives a photographing instruction, the electronic device 10 can determine that the first light effect template is the light effect template used for processing the captured picture; reference may be made to the description of the embodiment in FIG. 5, which is not repeated here.
  • S105 The electronic device receives a photographing instruction when the first light effect template has been selected.
  • the photographing instruction may be an instruction generated by a user operation acting on the photographing control 302, which can be specifically seen in the description of the embodiment of FIG. 7, which is not repeated here.
  • the electronic device 10 may acquire RGB data and depth data at time t1.
  • the RGB data and the depth data need to be aligned in time and coordinates to obtain RGBD data (RGB data plus depth data) that is aligned in time and coordinates, for use in the subsequent light effect rendering process and light effect editing process.
  • the RGB data collection device may be a rear camera
  • the depth data collection device may be a rear camera
  • the electronic device 10 may calculate depth data based on the RGB data collected by the rear camera. After the electronic device 10 starts the first shooting mode, the depth data can be calculated in real time.
  • the RGB data collection device may be a front camera, and the depth data collection device may be a 3D sensing module 196. After starting the first shooting mode, the electronic device 10 can collect depth data in real time.
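  • as an illustrative aside, the following Python sketch shows a simple timestamp-based pairing of RGB frames with depth frames, as one possible first step of the time alignment mentioned above; the frame representation and the tolerance value are assumptions for illustration.

```python
# Hypothetical sketch: pair each RGB frame with the depth frame whose timestamp
# is closest, so that the resulting RGBD data refers to (almost) the same moment.
def align_rgbd(rgb_frames, depth_frames, tolerance_ms=5):
    """rgb_frames / depth_frames: lists of (timestamp_ms, frame_array) tuples."""
    aligned = []
    for t_rgb, rgb in rgb_frames:
        t_depth, depth = min(depth_frames, key=lambda fd: abs(fd[0] - t_rgb))
        if abs(t_depth - t_rgb) <= tolerance_ms:
            aligned.append((t_rgb, rgb, depth))   # time-aligned RGBD triple
    return aligned
```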
  • the process of light effect rendering mainly includes S106. It can be known from the foregoing description in S105 that the depth data can be calculated from the RGB data acquired by the rear camera, or can also be acquired by the 3D sensing module 196. The depth data used in the light effect rendering process involved in the embodiments of the present application will be described by taking the data collected by the 3D sensing module 196 as an example.
  • S106 The electronic device uses the light effect parameters corresponding to the first light effect template to process the captured picture to generate the first picture.
  • the light effect parameters corresponding to the first light effect template are used to perform light effect rendering on the captured picture.
  • the process of light effect rendering may include face light effect rendering, or include face light effect rendering and overall light effect rendering.
  • face light effect rendering is the light effect rendering of the face part in the picture
  • overall light effect rendering is the light effect rendering of the entire picture.
  • the process of light effect editing mainly includes the following S107-S108.
  • S107 The electronic device receives an instruction of the user to edit the light effect of the first picture.
  • the electronic device 10 displays the first picture generated in S106 in the user interface 40.
  • the instruction of the user to edit the light effect of the first picture may be generated by the electronic device 10 detecting a user operation acting on the control 402.
  • the electronic device 10 may further display the light source indicator 403, the light intensity adjuster 404, the light effect template option bar 405, the cancel control 406, and the save control 407 in the user interface 40.
  • the electronic device 10 can also display an indicator of the projection position of the texture pattern in the user interface 40, and the user can manually adjust the indicator to change the projection pattern on the background and the portrait, to enhance the interaction between the user and the electronic device 10.
  • S108 The electronic device generates a second picture and saves the second picture.
  • the electronic device 10 detects the user operation acting on the light source indicator 403, or the user operation acting on the light intensity regulator 404, or the user operation acting on the second light effect template option in the light effect template option bar 405 In response to the above user operation, the electronic device 10 may generate a second picture and display it in the picture content display area 401.
  • after the electronic device 10 detects a user operation acting on the save control 407, in response to the operation, the electronic device 10 saves the second picture.
  • intermediate results of the light effect rendering process may be saved in the internal memory 121 of the electronic device 10, so that the above intermediate results can be directly called when the user edits the light effect of the first picture, thereby reducing the amount of calculation.
  • the user can manually adjust the light direction, the light source intensity in the first picture, and change the light effect template, which can enhance the interaction between the user and the electronic device 10 and improve the user experience.
  • FIG. 12 shows the rendering of face light effects involved in the embodiments of the present application, which may specifically include the following steps:
  • Stage one (S201): Establish a three-dimensional model.
  • S201 The electronic device establishes a three-dimensional model based on RGBD data.
  • the process of building a three-dimensional model includes the following steps:
  • S2012 Perform a hole filling operation on the RGBD data, that is, interpolation that removes outliers, so that the data is continuous, smooth, and free of holes.
  • S2013 Perform filtering operation on the RGBD data after hole filling to remove noise.
  • the processed RGBD data is converted into a mesh (grid) to obtain the three-dimensional model; the mesh is usually composed of triangles, quadrilaterals, or other simple convex polygons to simplify the rendering process.
  • the grid is composed of triangles as an example for description.
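  • as an illustrative aside, the following Python sketch shows a simple version of the hole filling and filtering steps on the depth channel of the RGBD data; the neighbor-averaging scheme, kernel sizes, and iteration count are assumptions, not the specific operations of the original description.

```python
import numpy as np
import cv2

# Hypothetical sketch of depth preprocessing: fill invalid (<= 0) depth samples
# by averaging valid neighbors, then median-filter the result to remove noise.
def preprocess_depth(depth, max_iter=10):
    depth = depth.astype(np.float32)
    kernel = np.ones((3, 3), np.float32)
    for _ in range(max_iter):
        invalid = depth <= 0
        if not invalid.any():
            break
        valid = (~invalid).astype(np.float32)
        num = cv2.filter2D(np.where(invalid, 0.0, depth), -1, kernel)   # sum of valid neighbors
        den = cv2.filter2D(valid, -1, kernel)                            # count of valid neighbors
        fill = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
        depth = np.where(invalid & (den > 0), fill, depth)               # fill holes from neighbors
    return cv2.medianBlur(depth, 5)                                      # smooth remaining noise
```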
  • Stage two (S202-S203): Segment the picture.
  • S202 The electronic device divides the captured picture into two parts, portrait and background, and obtains a portrait segmentation result.
  • the captured picture is a picture formed by the RGB data collected by the electronic device 10 through the front camera 193 (hereinafter referred to as RGB picture).
  • the RGB picture includes multiple pixels, and the pixel value of each pixel is the RGB value.
  • the electronic device 10 may calculate the RGB data acquired by the front camera 193 to obtain a portrait segmentation map.
  • for example, an edge-based segmentation method can be used for portrait segmentation, that is, the gray value of each pixel is calculated to find the set of continuous pixels on the boundary line between two different areas in the picture; the pixels on the two sides of these continuous pixels either have a significant difference in gray value or are located at a turning point where the gray value rises or falls.
  • other methods can also be used, such as threshold-based segmentation method, region-based segmentation method, graph-based segmentation method, energy functional-based segmentation method, etc.
  • the above method for segmenting portraits is only an exemplary description, which is not limited in the embodiments of the present application.
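  • as an illustrative aside, the following Python sketch shows the gray-value gradient idea underlying the edge-based method mentioned above, marking pixels where the gray value changes sharply as candidate boundary pixels; the gradient operator and the threshold are assumptions, and a complete segmentation would still need to close the contour and fill the portrait region.

```python
import numpy as np
import cv2

# Hypothetical sketch: mark candidate portrait/background boundary pixels where
# the gray value changes sharply, as in an edge-based segmentation approach.
def candidate_boundary(rgb_image, grad_threshold=60.0):
    gray = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)   # horizontal gray-value change
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)   # vertical gray-value change
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    return magnitude > grad_threshold                  # True at sharp gray-value transitions
```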
  • the portrait segmentation diagram is shown in FIG. 13, the white part is the portrait part, and the black part is the background part.
  • the portrait and background are segmented to obtain the portrait and background parts, which can also be used to render the portrait and background parts separately when the entire picture is subsequently rendered.
  • the specific rendering process can be seen in the description of subsequent embodiments, which will not be described here.
  • S203 The electronic device performs facial feature segmentation on the portrait part, and obtains the facial feature segmentation result.
  • the electronic device inputs the RGB data of the face part in the portrait part to the third model, and can output the segmentation result, and the segmentation result includes facial features (eyes, nose, eyebrows, mouth, ears), skin, hair, and other parts.
  • according to the segmentation result output by the third model, a facial features segmentation map can be obtained. As shown in FIG. 14, regions with different gray levels represent different parts.
  • the third model is trained from a large number of RGB data of face parts with known segmentation results.
  • the output form of the third model can represent the part to which a certain pixel belongs as a specific binary number.
  • the processor 110 may represent pixels belonging to the same part in the same grayscale, and represent pixels in different parts in different grayscales.
  • the above method for segmenting facial features is only an exemplary description, and there may be other methods for segmenting facial features in a specific implementation, which is not limited in the embodiments of the present application.
  • S201 may be executed first and then S202-S203, or S202-S203 may be executed first and then S201.
  • Stage three (S204-S206): Calculate the gray value of each pixel in the three layers separately.
  • S204 The electronic device inputs the grid data into the diffuse reflection model, and outputs the diffuse reflection layer.
  • the diffuse reflection model uses the Oren-Nayar reflection model.
  • the input data of the Oren-Nayar reflection model includes the grid data, the facial features segmentation map, and the light source intensity and light source direction when the light source illuminates each pixel; the output data of the Oren-Nayar reflection model is the gray value of each pixel, which is called the diffuse reflection layer.
  • the parameters of the Oren-Nayar reflection model belong to the light effect parameter set corresponding to the first light effect template, and are determined by the light effect template selected in S104.
  • the intensity of the light source when the light source illuminates each pixel can be calculated by the Linearly Transformed Cosines (LTC) algorithm.
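  • as an illustrative aside, the following Python sketch shows an Oren-Nayar style diffuse term evaluated for a single surface point; the roughness and albedo values are placeholders standing in for parameters from the light effect parameter set, and the per-pixel light intensity would in practice come from a computation such as the LTC algorithm mentioned above.

```python
import numpy as np

# Hypothetical sketch of an Oren-Nayar diffuse term for one surface point.
# normal, light_dir, view_dir: 3D vectors; light_intensity: scalar irradiance.
def oren_nayar_gray(normal, light_dir, view_dir, light_intensity,
                    roughness=0.3, albedo=0.8):
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)

    cos_ti = float(np.clip(np.dot(n, l), 0.0, 1.0))
    cos_tr = float(np.clip(np.dot(n, v), 0.0, 1.0))
    theta_i, theta_r = np.arccos(cos_ti), np.arccos(cos_tr)

    # azimuthal term: angle between light and view projected onto the surface plane
    l_proj = l - n * np.dot(n, l)
    v_proj = v - n * np.dot(n, v)
    denom = np.linalg.norm(l_proj) * np.linalg.norm(v_proj)
    cos_phi_diff = float(np.dot(l_proj, v_proj) / denom) if denom > 1e-6 else 0.0

    sigma2 = roughness ** 2
    A = 1.0 - 0.5 * sigma2 / (sigma2 + 0.33)
    B = 0.45 * sigma2 / (sigma2 + 0.09)
    alpha, beta = max(theta_i, theta_r), min(theta_i, theta_r)

    gray = (albedo / np.pi) * cos_ti * light_intensity * \
           (A + B * max(0.0, cos_phi_diff) * np.sin(alpha) * np.tan(beta))
    return float(np.clip(gray, 0.0, 1.0))
```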
  • S205 The electronic device inputs the grid data into the specular reflection model and outputs the specular layer.
  • the specular reflection model uses the GGX reflection model; its input data is consistent with the input data of the diffuse reflection model, and its output, the gray value of each pixel, is called the highlight layer.
  • the parameters of the GGX reflection model belong to the light effect parameter set corresponding to the first light effect template, and are determined by the light effect template selected in S104.
  • S206 The electronic device calculates whether each grid is occluded, and if it is occluded, performs shadow rendering on the grid and outputs a shadow layer.
  • whether each grid is occluded can be calculated according to the light source direction and the grid data. If a grid is occluded, the gray value of the pixel corresponding to the grid is set to the lowest value; if it is not occluded, the gray value of the pixel corresponding to the grid is set to the highest value. The gray value of each pixel output after shadow rendering is called the shadow layer.
  • the highest gray value can be determined by the gray level of the picture. In the embodiment of the present application, the gray level of the picture is 2, the highest gray value is 1, and the lowest gray value is 0.
  • the occlusion relationship of each grid is calculated according to the actual light direction in the identified shooting scene, and the gray value of the pixel corresponding to the grid is set according to the occlusion relationship, which can add shadow effects with a strong sense of realism.
  • the gray level of each pixel output in the above S204 and S205 may be 256, and the gray value range of each pixel output is [0,1]; that is, gray values in the range [0,255] are normalized to gray values in [0,1], so that the gray value range of each pixel output in S204 and S205 is consistent with the gray value range of each pixel output in S206, which is convenient for the superimposed fusion of the three layers (diffuse reflection layer, highlight layer, and shadow layer) in S207.
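  • as an illustrative aside, the following Python sketch shows the shadow-layer assignment and the range normalization described above, assuming the per-grid occlusion flags have already been computed from the light source direction and the grid data.

```python
import numpy as np

# Hypothetical sketch: build the shadow layer from occlusion flags and normalize
# diffuse / highlight layers from [0, 255] to [0, 1].
def shadow_layer(occluded):
    """occluded: boolean array, True where a grid's pixel is blocked from the light."""
    return np.where(occluded, 0.0, 1.0)   # lowest gray value if occluded, highest otherwise

def normalize_layer(layer_255):
    """Map gray values in [0, 255] to gray values in [0, 1]."""
    return layer_255.astype(np.float32) / 255.0
```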
  • the sequence of the above S204, S205, and S206 is not limited.
  • S207 The electronic device superimposes and merges the diffuse reflection layer, the highlight layer, and the shadow layer, and outputs a face light rendering result according to the fusion result and RGB data.
  • the diffuse reflection layer output in S204, the highlight layer output in S205, and the shadow layer output in S206 are superimposed and fused, that is, the gray values of the pixels at the same position in each layer are weighted and summed to obtain the gray value of each pixel after superimposed fusion.
  • the weight of the gray value of the pixels of each layer is the layer fusion parameter.
  • the layer fusion parameter belongs to the light effect parameter set corresponding to the first light effect template and is determined by the light effect template selected in S104. Multiply the gray value of each pixel after superimposed fusion with the pixel value of the pixel to obtain the pixel value of each pixel after the rendering of the facial light effect, which is the rendering result of the facial light effect.
  • the pixel value range of each pixel in the embodiment of the present application may be [0,255].
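  • as an illustrative aside, the following Python sketch shows the superimposed fusion and the final multiplication described above; the fusion weights are placeholders standing in for the layer fusion parameters of the selected light effect template.

```python
import numpy as np

# Hypothetical sketch of S207-style fusion: weighted sum of the three layers per
# pixel, then scale each pixel's RGB value by the fused gray value.
def fuse_face_light_effect(diffuse, highlight, shadow, rgb, weights=(0.6, 0.2, 0.2)):
    w_d, w_h, w_s = weights                                      # layer fusion parameters (placeholders)
    fused_gray = w_d * diffuse + w_h * highlight + w_s * shadow  # per-pixel weighted sum, values in [0, 1]
    rendered = rgb.astype(np.float32) * fused_gray[..., np.newaxis]
    return np.clip(rendered, 0, 255).astype(np.uint8)            # face light effect rendering result
```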
  • the virtual light source is placed at a light source position determined according to the identified light direction, so that the light effect applied later does not conflict with the original lighting of the picture. The occlusion relationship of each grid is calculated according to the intelligently recognized light direction, and the gray value of the pixel corresponding to the grid is set according to the occlusion relationship, rendering the shadows caused by occlusion, especially the shadows cast around the eye sockets and the nose, which greatly enhances the stereoscopic effect of the face.
  • the following describes the overall light effect rendering; the specific process may include the following steps:
  • Stage one (S301): Gaussian blur.
  • S301 The electronic device performs Gaussian blur on the background part of the RGB image.
  • the RGB picture is a picture obtained based on RGB data acquired by the front camera 193.
  • the pixel value of each pixel in the background part is weighted and averaged with the pixel values of neighboring pixels to calculate the pixel value of the pixel after Gaussian blur.
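  • as an illustrative aside, the following Python sketch shows one way to blur only the background part of the RGB picture using the portrait segmentation mask; the kernel size is a placeholder choice.

```python
import numpy as np
import cv2

# Hypothetical sketch of S301: Gaussian-blur the background while keeping the portrait sharp.
def blur_background(rgb_image, portrait_mask, ksize=(21, 21)):
    """portrait_mask: boolean array, True for portrait pixels."""
    blurred = cv2.GaussianBlur(rgb_image, ksize, 0)    # weighted average with neighboring pixels
    mask3 = portrait_mask[..., np.newaxis]             # broadcast the mask over the color channels
    return np.where(mask3, rgb_image, blurred)         # portrait kept, background blurred
```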
  • Stage two (S302-S306): Calculate the projection texture layer of the portrait and the projection texture layer of the background separately.
  • S302 The electronic device calculates the texture coordinates of each grid vertex according to the texture pattern projection direction and the portrait grid.
  • the position coordinates of the texture pattern projection are known, the direction of the projection is known, and the projection matrix can be calculated.
  • the projection matrix is the connection matrix between the space coordinate system where the portrait grid is located and the space coordinate system where the texture pattern is projected.
  • the space coordinate system where the portrait grid is located can take the center of the portrait grid as the origin of the coordinate system, the horizontal direction to the right is the positive x-axis direction, the horizontal forward is the positive y-axis direction, and the vertical upward is the positive z-axis direction.
  • the spatial coordinate system where the texture pattern is projected takes this position as the origin of the coordinate system, and the x-axis, y-axis, and z-axis are parallel to the x-axis, y-axis, and z-axis of the portrait grid, respectively.
  • the projection pattern projected on the portrait grid can be determined according to the stretch ratio of the projection texture on the x-axis and y-axis, and the pixel value of the projection texture.
  • the coordinate position of the mesh vertex in the projection pattern is the texture coordinate.
  • the projection direction of the texture pattern, the position coordinates of the projection of the texture pattern, the stretching ratio of the projected texture on the x-axis and y-axis, and the pixel value of the projected texture belong to the light effect parameter set, which is determined by the light effect template selected in S104.
  • S303 The electronic device extracts the pixel value of the corresponding texture pattern according to the texture coordinates of each mesh vertex, and outputs the projected texture layer of the portrait.
  • since the coordinate position of each grid vertex in the projection pattern is known, and the projection pattern projected on the portrait grid is known, the pixel value of the texture pattern corresponding to each grid vertex can be extracted, so as to obtain the pixel values of the texture pattern corresponding to all grids in the portrait grid; these pixel values are called the projected texture layer of the portrait.
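  • as an illustrative aside, the following Python sketch shows the texture-projection idea described above: transform mesh vertices into the projector's coordinate system, apply the stretch ratios, and sample the texture pattern at the resulting texture coordinates. The projection matrix, stretch ratios, and texture image stand in for light effect parameters of the selected template, and the sketch assumes the projection maps coordinates into [0, 1].

```python
import numpy as np

# Hypothetical sketch of S302/S303: compute texture coordinates for mesh vertices
# and sample the projected texture pattern at those coordinates.
def project_texture_on_mesh(vertices, projection_matrix, texture, stretch_xy=(1.0, 1.0)):
    """vertices: (N, 3) mesh vertex coordinates; projection_matrix: (3, 4) or (4, 4);
    texture: (H, W, 3) pattern image; returns one texture color per vertex."""
    h, w = texture.shape[:2]
    ones = np.ones((vertices.shape[0], 1), dtype=np.float32)
    homo = np.hstack([vertices.astype(np.float32), ones])   # homogeneous coordinates
    in_proj = (projection_matrix @ homo.T).T                 # into the projector's coordinate system
    u = in_proj[:, 0] * stretch_xy[0]                        # texture coordinates (assumed in [0, 1])
    v = in_proj[:, 1] * stretch_xy[1]
    px = np.clip((u * (w - 1)).astype(int), 0, w - 1)        # clamp to the texture bounds
    py = np.clip((v * (h - 1)).astype(int), 0, h - 1)
    return texture[py, px]                                   # projected texture layer of the portrait
```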
  • S304 The electronic device sets a projection plane perpendicular to the location of the portrait in the background of the portrait.
  • a virtual projection plane is set in the background part.
  • the virtual projection plane is perpendicular to the ground on which the portrait is located.
  • S305 The electronic device calculates the texture coordinates of the pixels in the projection plane according to the projection direction of the texture pattern and the projection plane.
  • the determination of the projection pattern of the projection plane is similar to the determination of the projection pattern on the portrait grid, which will not be repeated here.
  • the texture coordinate of a pixel in the projection plane is the coordinate position of the pixel in the projection pattern.
  • S306 The electronic device extracts the pixel value of the corresponding texture pattern according to the texture coordinates of the projection plane, and outputs the projected texture layer of the background.
  • since the coordinate position of each pixel point in the projection pattern is known, and the projection pattern projected on the projection plane is known, the pixel value of the texture pattern corresponding to each pixel point can be extracted, so as to obtain the pixel values of the texture pattern corresponding to all pixel points on the projection plane; these pixel values are called the projected texture layer of the background.
  • Stage three (S307-S308): Superimposed fusion.
  • S307 The electronic device superimposes and merges the projection texture layer of the portrait, the rendering effect of the face light effect and the RGB image.
  • the pixel values of the pixels at the same position in the projected texture layer of the portrait in S303, the rendering result of the face light effect in S207, and the portrait part of the RGB picture obtained by the front camera 193 are weighted and summed, so that the pixel value of each pixel of the portrait part after superimposed fusion can be obtained.
  • the weights of the pixel values of the pixels in the projected texture layer of the portrait, the rendering result of the face light effect, and the RGB picture obtained by the front camera 193 belong to the light effect parameter set corresponding to the first light effect template and are determined by the light effect template selected in S104.
  • S308 The electronic device overlays and merges the background projection texture layer and the Gaussian blurred background.
  • the pixel values of the pixels at the same position in the projected texture layer of the background in S306 and in the background part after Gaussian blur in S301 are weighted and summed to obtain the pixel value of each pixel of the background part after superimposed fusion.
  • the weights of the pixel values of the pixels in the background projection texture layer and the Gaussian blurred background belong to the light effect parameter set corresponding to the first light effect template, which is determined by the light effect template selected in S104.
  • the projected texture layer of the background and the Gaussian blurred background are superimposed and fused in the background part, so that the picture after light effect rendering has a light effect background while retaining traces of the original background, which increases the photorealism.
  • Stage four (S309): Post-processing of the picture.
  • S309 The electronic device performs post-processing on the superimposed and fused picture.
  • the superimposed and fused picture includes the fusion result of the portrait part in S307 and the fusion result of the background part in S308 to form an entire picture.
  • the post-processing may include processing the hue, contrast, and filters of the entire picture.
  • the tone processing is mainly to adjust the overall color tendency of the whole picture by adjusting the H value.
  • Contrast processing is mainly to adjust the ratio of the brightness of the brightest part to the darkest part of the whole picture.
  • Filter processing is to calculate the pixel value of each pixel after filter processing through a matrix and the pixel value of each pixel in the entire picture to adjust the overall effect of the entire picture.
  • The H value used in tone processing, the brightest-to-darkest brightness ratio used in contrast processing, and the matrix used in filter processing all belong to the light effect parameter set corresponding to the first light effect template, and are determined by the light effect template selected in S104.
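  • A rough sketch of the three post-processing operations follows; the hue shift, contrast ratio, and 3×3 color matrix are hypothetical values standing in for the template's parameters, and the per-pixel hue loop is written for clarity rather than speed.

```python
import colorsys
import numpy as np

def post_process(img, hue_shift=0.05, contrast=1.2, color_matrix=None):
    """Apply hue, contrast, and filter-matrix adjustments to an (H, W, 3) uint8 picture."""
    out = img.astype(np.float32) / 255.0

    # Tone: shift the H channel to change the overall color tendency.
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in out.reshape(-1, 3)])
    hsv[:, 0] = (hsv[:, 0] + hue_shift) % 1.0
    out = np.array([colorsys.hsv_to_rgb(*px) for px in hsv]).reshape(out.shape)

    # Contrast: stretch pixel values around the midpoint to change the bright/dark ratio.
    out = (out - 0.5) * contrast + 0.5

    # Filter: multiply each RGB pixel by a 3x3 matrix to adjust the overall effect.
    if color_matrix is not None:
        out = out @ np.asarray(color_matrix, dtype=np.float32).T

    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

warm_filter = [[1.05, 0.00, 0.00],
               [0.00, 1.00, 0.00],
               [0.00, 0.00, 0.90]]
img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
result = post_process(img, hue_shift=0.02, contrast=1.1, color_matrix=warm_filter)
```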
  • the portrait part and the background part can be rendered separately, and the real depth data collected by the 3D sensing module 196 can be used to make the light effect fluctuate on the portrait, increasing the realism and stereoscopic effect of the picture.
  • If the light effect rendering process includes only the face light effect rendering, the face light effect rendering result output in S207 is the pixel value of each pixel of the first picture. That is to say, the first picture is obtained once the face light effect rendering is complete.
  • If the light effect rendering process includes both the face light effect rendering and the overall light effect rendering, then after the face light effect rendering is performed on the captured picture, the overall light effect rendering continues and calculates the pixel value of each pixel of the first picture. That is to say, the first picture is obtained once the overall light effect rendering is complete.
  • the display screen 194 displays the user interface 20.
  • the user interface 20 displays application icons of multiple applications, including the camera application icon 201.
  • the touch sensor 180K detects that the user clicks the camera application icon 201.
  • the touch sensor 180K reports the event that the user clicks the camera application icon 201 to the processor 110.
  • the processor 110 determines an event that the user clicks on the camera application icon 201, and issues an instruction to the display screen 194 to display the user interface 30.
  • the display screen 194 displays the user interface 30 in response to the instruction issued by the processor 110.
  • the processor 110 determines an event that the user clicks on the camera application icon 201, and issues an instruction to the camera 193 to turn on the camera 193.
  • the camera 193 turns on the rear camera in response to an instruction issued by the processor 110, and collects RGB data of the picture to be taken in real time.
  • the touch sensor 180K detects that the user clicks on the control 306.
  • the touch sensor 180K reports the event that the user clicks on the control 306 to the processor 110.
  • the processor 110 determines an event that the user clicks on the control 306 and issues an instruction to the camera 193 to turn on the front camera.
  • the camera 193 turns on the front camera in response to an instruction issued by the processor 110.
  • the front camera can collect RGB data of the picture to be taken in real time, and save the RGB data of the picture to be taken to the internal memory 121.
  • the RGB data of the picture to be captured collected in real time may carry a time stamp, so that the processor 110 performs time alignment processing on the RGB data and the depth data in subsequent processing.
  • the touch sensor 180K detects that the user clicks the icon 303A.
  • the touch sensor 180K reports the event that the user clicks the icon 303A to the processor 110.
  • the processor 110 determines the event that the user clicks the icon 303A, and starts the first shooting mode.
  • the processor 110 adjusts shooting parameters such as aperture size, shutter speed, and sensitivity.
  • The processor 110 can also determine whether there is a human face in the framing frame 301, and if so, further determine whether the human face meets the requirements, as described in S101, which will not be repeated here. The detection method of the face angle is introduced in detail below.
  • The above-mentioned face angle may be the angle of the face in three-dimensional space. The first threshold includes three values: the pitch angle (rotation around the x-axis), the yaw angle (rotation around the y-axis), and the roll angle (rotation around the z-axis) in a standard three-dimensional coordinate system. This coordinate system may take the tip of the nose of the face facing the electronic device as the origin, the horizontal rightward direction as the positive direction of the x-axis, the horizontal forward direction as the positive direction of the y-axis, and the vertical upward direction as the positive direction of the z-axis.
  • For example, a face that meets the requirements may be one whose pitch angle is less than or equal to 30°, whose yaw angle is less than or equal to 30°, and whose roll angle is less than or equal to 35°; in this case the first threshold is 30°, 30°, 35°.
  • Face angle detection can be achieved by building a three-dimensional model of the face. Specifically, a three-dimensional model of the face to be detected can be established from the depth data, and the standard three-dimensional model stored in the internal memory 121 is then rotated until it matches the three-dimensional model of the face to be detected; the angle by which the standard three-dimensional model has been rotated is the angle of the face to be detected.
  • the above method for detecting the face angle is only an exemplary description, and there may be other detection methods in a specific implementation, which is not limited in the embodiments of the present application.
  • the second threshold may be, for example, 4%, 10%, or the like.
  • the first threshold and the second threshold are not limited to the above listed values, and may be other values in a specific implementation, which is not limited in this embodiment of the present application.
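  • The threshold checks described above might look like the sketch below; the angle limits and the area-ratio limit follow the example values in the text, while the input values are assumed to come from the earlier face-detection and depth-based angle-estimation steps.

```python
def face_meets_requirements(pitch, yaw, roll, face_area, frame_area,
                            angle_limits=(30.0, 30.0, 35.0), area_ratio_min=0.04):
    """Return True if the detected face satisfies the first and second thresholds.

    pitch, yaw, roll: face angles in degrees (rotation about the x, y, z axes).
    face_area, frame_area: face area and total frame area, in pixels.
    """
    max_pitch, max_yaw, max_roll = angle_limits
    angles_ok = abs(pitch) <= max_pitch and abs(yaw) <= max_yaw and abs(roll) <= max_roll
    area_ok = (face_area / frame_area) >= area_ratio_min
    return angles_ok and area_ok

print(face_meets_requirements(10, 5, 20, face_area=90_000, frame_area=2_073_600))  # True
print(face_meets_requirements(40, 5, 20, face_area=90_000, frame_area=2_073_600))  # False (pitch too large)
```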
  • The processor 110 issues an instruction to the display screen 194 to update the display state of the icon 303A.
  • The display screen 194 updates the display state of the icon 303A in response to the instruction issued by the processor 110.
  • the processor 110 sends an instruction to collect depth data to the 3D sensing module 196.
  • the 3D sensing module 196 collects depth data in real time in response to instructions sent by the processor 110.
  • the 3D sensing module 196 saves the depth data to the internal memory 121.
  • the depth data collected in real time may carry a time stamp, so that the processor 110 performs time alignment processing on the RGB data and the depth data in subsequent processing.
  • the touch sensor 180K detects that the user clicks on the control 306.
  • the touch sensor 180K reports the event that the user clicks on the control 306 to the processor 110.
  • the processor 110 determines the event that the user clicks on the control 306 and sends an instruction to the display screen 194 to display the option bar 307 of the light effect template.
  • the display screen 194 displays the light effect template option bar 307 in response to the instruction sent by the processor 110.
  • the processor 110 reads the RGB data and depth data of the picture to be taken from the internal memory 121.
  • The recognition method of the shooting scene is described in S102.
  • The method of identifying the lighting direction is introduced in detail below.
  • the processor 110 may input the RGB data of the face part to the second model, and output the result of the lighting direction of the face.
  • the second model is trained from a large number of RGB data of face parts with known light directions.
  • The direction of the face lighting is described by three values in three-dimensional space: the angle α with the xoy plane, the angle β with the xoz plane, and the angle γ with the yoz plane, where the origin o is the position of the tip of the nose of the face, horizontally to the right is the positive direction of the x-axis, horizontally forward is the positive direction of the y-axis, and vertically upward is the positive direction of the z-axis; the output of the second model is therefore (α, β, γ).
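  • For intuition only, the sketch below converts a 3D lighting direction vector into the (α, β, γ) triple described above (its angles with the xoy, xoz, and yoz planes); this is one plausible reading of that representation, not the second model itself.

```python
import numpy as np

def direction_to_plane_angles(v):
    """Angles (degrees) between direction vector v and the xoy, xoz, and yoz planes.

    The angle between a vector and a plane equals arcsin(|component along the
    plane's normal| / |v|).
    """
    v = np.asarray(v, dtype=float)
    norm = np.linalg.norm(v)
    alpha = np.degrees(np.arcsin(abs(v[2]) / norm))  # vs xoy plane (normal = z-axis)
    beta = np.degrees(np.arcsin(abs(v[1]) / norm))   # vs xoz plane (normal = y-axis)
    gamma = np.degrees(np.arcsin(abs(v[0]) / norm))  # vs yoz plane (normal = x-axis)
    return alpha, beta, gamma

# Light arriving from above, to the left, and in front of the nose tip (origin o).
print(direction_to_plane_angles([-1.0, 1.0, 1.0]))  # ≈ (35.3, 35.3, 35.3)
```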
  • the virtual light source can be placed in the light source position determined according to the lighting direction during the subsequent rendering of the light effect, so that the light effect applied later does not conflict with the original lighting of the picture;
  • the position of the virtual light source can be displayed on the interface 40.
  • the user can change the picture effect by adjusting the position of the virtual light source to improve the interaction between the user and the electronic device 10;
  • the position of the virtual light source is displayed to enhance the fun during the photographing process.
  • the processor 110 saves the light direction recognition result to the internal memory 121, so that the result can be directly called in subsequent processing.
  • the processor 110 reads the mapping relationship table between the shooting scene and the matching light effect template from the internal memory 121.
  • the processor 110 determines a light effect template matching the shooting scene (assuming light effect template 4).
  • the processor 110 sends an instruction to update the display light effect template option bar 307 to the display screen 194.
  • the display state of the light effect template option matching the shooting scene is the first display state.
  • the display screen 194 updates the display light effect template option bar 307 in response to the instruction issued by the processor 110.
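  • A trivial illustration of the mapping-table lookup used in steps 28-30 is given below; the dictionary stands in for the mapping relationship table stored in the internal memory 121, and the scene/template identifiers are placeholders (the pairings loosely follow Table 1 of the description).

```python
# Hypothetical mapping table between recognized shooting scenes and matching templates.
SCENE_TO_TEMPLATE = {
    "scene_1": "light_effect_4",
    "scene_2": "light_effect_2",
    "scene_3": "light_effect_5",
}

def recommend_template(scene, default="light_effect_1"):
    """Return the light effect template matched to the recognized shooting scene."""
    return SCENE_TO_TEMPLATE.get(scene, default)

# The matched option would then be shown in the first display state in option bar 307.
print(recommend_template("scene_1"))  # light_effect_4
```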
  • the touch sensor 180K detects that the user clicks the first light effect template option.
  • the touch sensor 180K reports the event that the user clicks the first light effect template option to the processor 110.
  • The processor 110 determines the event that the user clicks the first light effect template option, and sends an instruction to the display screen 194 to update the display state of the first light effect template option.
  • the display screen 194 updates the display state of the first light effect template option in response to the instruction of the processor 110.
  • the touch sensor 180K detects that the user clicks on the shooting control 302.
  • the touch sensor 180K reports the event that the user clicks on the shooting control 302 to the processor 110.
  • the processor 110 determines the event that the user clicks on the shooting control 302, and reads the RGB data of the picture to be shot stored in the internal memory 121.
  • the processor 110 determines the event that the user clicks on the shooting control 302, and reads the depth data stored in the internal memory 121.
  • The time stamp of the depth data is consistent with the time stamp of the RGB data of the picture to be taken that was read in step 38, so as to ensure the time alignment of the RGB data and the depth data.
  • the processor 110 aligns the RGB data with the depth data coordinates to obtain RGBD data with time and coordinates aligned.
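  • A minimal sketch of the time-alignment step, assuming each frame is a (timestamp, data) pair: each RGB frame is paired with the depth frame whose time stamp is closest, and pairs whose skew exceeds a tolerance are dropped. The data structures and the tolerance value are assumptions made for the example.

```python
def align_rgb_and_depth(rgb_frames, depth_frames, max_skew_ms=10):
    """Pair each RGB frame with the depth frame whose time stamp is closest.

    rgb_frames, depth_frames: lists of (timestamp_ms, data) tuples.
    Returns a list of (timestamp_ms, rgb_data, depth_data) tuples.
    """
    pairs = []
    for t_rgb, rgb in rgb_frames:
        t_depth, depth = min(depth_frames, key=lambda f: abs(f[0] - t_rgb))
        if abs(t_depth - t_rgb) <= max_skew_ms:
            pairs.append((t_rgb, rgb, depth))
    return pairs

rgb = [(1000, "rgb_a"), (1033, "rgb_b")]
depth = [(998, "d_a"), (1035, "d_b"), (1100, "d_c")]
print(align_rgb_and_depth(rgb, depth))
# [(1000, 'rgb_a', 'd_a'), (1033, 'rgb_b', 'd_b')]
```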
  • FIG. 21 to FIG. 23 describe in detail the cooperation relationship of the various components in the electronic device 10 during light effect editing.
  • the display screen 194 displays the first picture in the control 305.
  • the touch sensor 180K detects that the user clicks on the control 305.
  • the touch sensor 180K reports the event that the user clicks the control 305 to the processor 110.
  • the processor 110 determines an event that the user clicks on the control 305, and sends an instruction to display the user interface 40 to the display screen 194.
  • the display screen 194 displays the user interface 40 in response to the instruction sent by the processor 110.
  • the touch sensor 180K detects that the user clicks on the control 402.
  • the touch sensor 180K reports the event that the user clicks on the control 402 to the processor 110.
  • The processor 110 determines an event that the user clicks on the control 402, and sends an instruction to the display screen 194 to update the displayed user interface 40.
  • the display screen 194 updates and displays the user interface 40 in response to the instruction sent by the processor 110.
  • the updated user interface 40 may include: a light source indicator 403, a light intensity adjuster 404, a light effect template option bar 405, a cancel control 406, a save control 407, and the like.
  • The touch sensor 180K detects that the user slides the light source indicator 403.
  • The touch sensor 180K reports the event that the user slides the light source indicator 403 to the processor 110.
  • the processor 110 determines an event that the user slides the light source indicator 403, and sends an instruction to update the display light source indicator 403 to the display screen 194.
  • the display screen 194 updates the display light source indicator 403 in response to the instruction sent by the processor 110.
  • the processor 110 determines a new lighting direction, and determines a picture in the picture content display area 401 according to the new lighting direction.
  • the processor 110 sends an instruction to the display screen 194 to update the picture displayed in the picture content display area 401.
  • the display screen 194 updates the pictures in the picture content display area 401 in response to the instruction sent by the processor 110.
  • the user inputs a sliding operation on the light source indicator 403 to move the light source indicator 403 from (x1, y1) to (x2, y2), as shown in FIG. 23.
  • the touch sensor 180K detects the user's sliding operation on the light source indicator 403, and reports an event (the user's sliding operation on the light source indicator 403) to the processor 110.
  • After confirming the event, the processor 110 calculates, according to the new lighting direction, the RGB data of the picture in the picture content display area 401, that is, the pixel value of each pixel included in the picture.
  • The processor 110 then causes the display screen 194 to display the light source indicator 403 at (x2, y2) and to update the picture displayed in the picture content display area 401. It can be seen that the user's sliding operation on the light source indicator 403 is a continuous action.
  • During the sliding process, the electronic device 10 can update the displayed light source indicator 403 and the picture in the picture content display area 401 in real time.
  • Calculating the RGB data of the picture in the picture content display area 401 according to the new illumination direction may involve recalculating, in the face light effect rendering stage, the occlusion relationship of each grid of the face, resetting the gray value of each grid according to the occlusion relationship, and outputting the shadow layer. The diffuse reflection layer, the highlight layer, and the shadow layer are then superimposed and fused, and the face light effect rendering result is output according to the fusion result and the RGB data.
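  • A simplified sketch of that recomputation, assuming the occlusion test and the other layers are supplied by earlier steps: each face grid is marked occluded or lit under the new light direction, the resulting shadow layer is re-fused with the diffuse and highlight layers using the template's fusion weights, and the fused gray values re-shade the RGB pixels.

```python
import numpy as np

def shadow_layer(occluded_mask):
    """Gray value 0 for occluded grids, 1 for lit grids (a 2-level gray scale)."""
    return np.where(occluded_mask, 0.0, 1.0)

def refresh_face_light_rendering(diffuse, highlight, occluded_mask, weights, rgb):
    """Re-fuse diffuse, highlight and recomputed shadow layers, then re-shade the RGB pixels."""
    shadow = shadow_layer(occluded_mask)
    w_d, w_h, w_s = weights  # hypothetical layer-fusion parameters from the light effect template
    fused_gray = w_d * diffuse + w_h * highlight + w_s * shadow
    return np.clip(fused_gray[..., None] * rgb, 0, 255).astype(np.uint8)

h, w = 4, 4
diffuse = np.random.rand(h, w)
highlight = np.random.rand(h, w)
occluded = np.zeros((h, w), dtype=bool)
occluded[2:, :] = True  # grids newly occluded under the moved light source
rgb = np.random.randint(0, 256, (h, w, 3))
updated = refresh_face_light_rendering(diffuse, highlight, occluded, (0.5, 0.2, 0.3), rgb)
```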
  • the touch sensor 180K detects that the user slides the light intensity adjuster 404.
  • the touch sensor 180K reports the event that the user slides the light intensity adjuster 404 to the processor 110.
  • the processor 110 determines the event that the user slides the light intensity adjuster 404, and sends an instruction to update the display light intensity adjuster 404 to the display screen 194.
  • The display screen 194 updates the displayed light intensity adjuster 404 in response to the instruction sent by the processor 110.
  • the processor 110 determines a new light intensity, and determines a picture in the picture content display area 401 according to the new light intensity.
  • The processor 110 sends an instruction to the display screen 194 to update the picture displayed in the picture content display area 401.
  • the display screen 194 updates the pictures in the picture content display area 401 in response to the instruction sent by the processor 110.
  • The user's sliding operation on the light intensity adjuster 404 is a continuous action.
  • During the sliding process, the electronic device 10 may update the displayed light intensity adjuster 404 and the picture in the picture content display area 401 in real time.
  • the touch sensor 180K detects that the user clicks on the second light effect template option.
  • the touch sensor 180K reports the event that the user clicks on the second light effect template option to the processor 110.
  • The processor 110 determines an event that the user clicks on the second light effect template option, and sends an instruction to the display screen 194 to update the display states of the first light effect template option and the second light effect template option.
  • The display screen 194 updates the display states of the first light effect template option and the second light effect template option in response to the instruction sent by the processor 110.
  • the processor 110 determines the pictures in the picture content display area 401 according to the light effect parameters corresponding to the second light effect template.
  • determining the picture in the picture content display area 401 is to calculate the RGB data of the picture in the picture content display area 401 according to the light effect parameter corresponding to the second light effect template.
  • the processor 110 sends an instruction to update the picture displayed in the picture content display area 401 to the display screen 194.
  • the display screen 194 updates the pictures in the picture content display area 401 in response to the instruction sent by the processor 110.
  • The order in which the above steps 26-27 and 28-30 are performed is not limited.
  • The order in which the above step groups 10-16, 17-23, and 24-30 are performed is not limited.
  • S106 may include some or all of steps 10-16, 17-23, and 24-30, which is not limited in the embodiments of the present application.
  • the touch sensor 180K detects that the user clicks the save control 407.
  • the touch sensor 180K reports the event that the user clicks the save control 407 to the processor 110.
  • the processor 110 determines the event that the user clicks the save control 407, saves the second picture to the internal memory 121, and deletes the first picture from the internal memory.
  • The pictures whose display is updated in steps 16, 23, and 30 are the second picture.
  • Saving the second picture to the internal memory 121 is to save the RGB data of the second picture to the internal memory 121.
  • Deleting the first picture from the internal memory 121 means deleting the RGB data of the first picture from the internal memory 121.
  • the image processing method provided in this embodiment of the present application can recommend a suitable light effect template to the user according to the identified shooting scene during the photographing process, which can enable the user to quickly select a suitable light effect template, reduce user operations, and improve the use efficiency of the mobile phone.
  • Recommending a suitable light effect template for the user according to the shooting scene may also be implemented during the light effect editing process.
  • the electronic device 10 may enable the first shooting mode during the photographing process, receive the user's operation to select the first light effect template, and then receive the user's photographing instruction to complete the photographing process and determine the picture to be photographed.
  • After receiving the photographing instruction from the user, the electronic device 10 renders the picture to be captured according to the light effect parameters corresponding to the first light effect template to generate the first picture. Before the light effect rendering, the electronic device 10 can recognize the lighting direction of the human face, and during the light effect rendering process perform face light effect rendering in combination with that lighting direction.
  • The specific face light effect rendering process can be implemented with reference to the description of the embodiment in FIG. 12.
  • In the light effect template option bar, the electronic device 10 may set the display state of the light effect template option matching the shooting scene to the first display state.
  • This reminds the user that the template corresponding to the option is a template suitable for the current shooting scene, which facilitates the user in quickly identifying and selecting the option, and effectively recommends the option to the user.
  • the electronic device 10 can identify the shooting scene before the light effect editing process and determine the light effect template matching the shooting scene.
  • the recognition process of the shooting scene is similar to the method described in S102 in the embodiment of FIG. 11, and the method of determining the light effect template matching the shooting scene is similar to the method described in S103 in the embodiment of FIG. 11, which is not repeated here.
  • the first display state is the same as the first display state in the embodiment of FIG. 6 and will not be repeated here.
  • the embodiments of the present application also provide a computer-readable storage medium. All or part of the processes in the above method embodiments may be completed by a computer program instructing relevant hardware.
  • the program may be stored in the above-mentioned computer storage medium. When the program is executed, the processes may include the processes of the above method embodiments.
  • the computer readable storage medium includes: read-only memory (read-only memory, ROM) or random access memory (random access memory, RAM), magnetic disk or optical disk, and other media that can store program codes.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a dedicated computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted through the computer-readable storage medium.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device including a server, a data center, and the like integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)), or the like.
  • the modules in the device of the embodiment of the present application may be combined, divided, and deleted according to actual needs.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

本申请公开了一种拍照方法,该方法应用于电子设备,可以使电子设备在采用第一拍摄模式进行拍照时,根据拍摄场景为用户推荐合适的光效模板,减少用户操作,提升电子设备的使用效率。该方法可包括:开启摄像头采集拍摄对象的图像;显示第一用户界面;其中,第一用户界面包括:第一显示区域、拍摄模式列表、光效模板选项栏;在第一显示区域中显示摄像头采集的图像;在光效模板选项栏中突出显示与拍摄场景匹配的光效模板的选项;其中,拍摄场景为第一显示区域中显示的图像对应的拍摄场景。

Description

图像处理方法及电子设备 技术领域
本申请涉及图像处理领域,尤其涉及一种图像处理方法及电子设备。
背景技术
随着社交网络的流行,市场对智能人像美化的需求越来越大,搭载人像美化功能的电子设备产品层出不穷。
在用户拍照时,手机可以提供多种拍摄模式:人像拍摄模式、大光圈拍摄模式、夜景拍摄模式等。其中,在人像拍摄模式这种拍摄模式下,手机可以提供多种光效模板。不同的光效模板代表(或对应)不同的光效参数,如光源位置、图层融合参数、纹理图案投影的位置、投影的方向等。用户可以选择不同的光效模板使得拍摄得到的照片呈现出不同的效果。但是,对于手机提供的多种光效模板,用户往往需要经过多次尝试才能找到合适的光效模板,用户操作繁琐,手机使用效率低。
发明内容
本申请实施例提供了一种图像处理方法及电子设备,可以使用户快速选择合适的光效模板,减少用户操作。
第一方面,本申请实施例提供了一种拍照方法,包括:电子设备开启摄像头,采集拍摄对象的图像;上述电子设备显示第一用户界面;其中,上述第一用户界面包括:第一显示区域、拍摄模式列表、光效模板选项栏;上述拍摄模式列表包括一个或多个拍摄模式的选项,上述一个或多个拍摄模式包括第一拍摄模式,上述第一拍摄模式已被选定,上述第一拍摄模式为突出显示拍摄的图片中包含的人物的拍摄模式,上述光效模板选项栏中包括两个或两个以上光效模板的选项;上述光效模板包括一个或多个光效参数,用于处理采用上述第一拍摄模式拍摄的图片;上述电子设备在上述第一显示区域中显示上述摄像头采集的图像;上述电子设备在上述光效模板选项栏中突出显示与拍摄场景匹配的光效模板的选项;其中,上述拍摄场景为上述第一显示区域中显示的图像对应的拍摄场景。
在一些实施例中,上述第一显示区域可称为取景框。
在一些实施例中,上述第一拍摄模式可称为人像拍摄模式。
在一些实施例中,上述光效模板包括可包括以下光效参数中的一个或多个:漫反射图层、高光图层以及阴影图层的融合参数,RGB图片的背景部分和整体光效渲染中背景的投影纹理图层的融合参数,投影纹理的颜色(像素值),投影纹理的拉伸值,纹理图案投影的位置,投影的方向,人像的投影纹理图层、人脸光效渲染结果和RGB图片中人脸部分的融合参数等。
实施第一方面提供的拍照方法,电子设备可以在采用第一拍摄模式进行拍照时,智能识别当前的拍摄场景,并根据拍摄场景为用户推荐与当前拍摄场景匹配的光效模板,可以使用户快速选择合适的光效模板,减少用户操作,提升电子设备的使用效率。
在一种可能的实现方式中,上述第一用户界面还包括拍摄控件和第一控件;上述电子 设备在上述光效模板选项栏中突出显示与拍摄场景匹配的光效模板的选项之后,上述方法还包括:上述电子设备在检测到作用于上述拍摄控件的用户操作后,采用已选定的光效模板对应的光效参数对拍摄的图片进行处理,生成第一图片;上述电子设备在上述第一控件中显示上述第一图片的缩略图;其中,上述第一图片的缩略图包含的像素点少于上述第一图片包含的像素点。
在一种可能的实现方式中,上述已选定的光效模板为上述与拍摄场景匹配的光效模板。
本申请实施例提供的技术方案,用户可以选择电子设备推荐的与拍摄场景匹配的光效模板,采用该光效模板对图片进处理可使获得图片的拍摄效果更好。
在一种可能的实现方式中,上述采用已选定的光效模板对应的光效参数对拍摄的图片进行处理,生成第一图片,包括:上述电子设备采用已选定的光效模板对应的光效参数、光照方向以及深度数据对拍摄的图片进行处理,生成第一图片;其中,上述光照方向为根据上述第一显示区域中显示的图片识别的光照方向,上述深度数据为上述拍摄对象的深度数据。
本申请实施例提供的技术方案,可以根据拍摄场景中真实的光照方向对拍摄的图片进行处理,使后期施加的光效与图片原始的光照不冲突,渲染因遮挡造成的阴影,尤其可以渲染眼窝和鼻子部分光照投射的阴影,极大增强面部的立体感。
在一种可能的实现方式中,上述采用已选定的光效模板对应的光效参数、光照方向以及深度数据对拍摄的图片进行处理之后,上述生成第一图片之前,上述方法还包括:根据上述已选定的光效模板对应的光效参数以及上述深度数据分别对人像部分和背景部分进行处理;其中,上述人像部分和上述背景部分为根据上述拍摄的图片分割得到。
本申请实施例提供的技术方案,可以将人像部分和背景部分分开渲染,使光效在人像上错落起伏,增加图片的真实感和立体感。
在一种可能的实现方式中,上述在上述光效模板选项栏中突出显示与拍摄场景匹配的光效模板的选项包括以下一项或多项:在上述光效模板选项栏中的第一显示位置显示上述与拍摄场景匹配的光效模板的选项;在上述光效模板选项栏中高亮显示上述与拍摄场景匹配的光效模板的选项;在上述光效模板选项栏中动态显示上述与拍摄场景匹配的光效模板的选项。
本申请实施例提供了多种突出显示与拍摄场景匹配的光效模板选项的方式,通过上述方式可使用户更加快速直观地发现适合当前拍摄场景的光效模板,减少用户操作,提升电子设备的使用效率。
在一种可能的实现方式中,上述电子设备在上述第一控件中显示上述第一图片的缩略图之后,上述方法还包括:上述电子设备检测到作用于上述第一控件的第一用户操作,响应于上述第一用户操作,上述电子设备显示用于查看上述第一图片的第二用户界面。
在一些实施例中,上述第一用户操作可以是点击操作。
本申请实施例提供的技术方案可以通过点击第一控件使电子设备显示用于查看第一图片的第二用户界面。
在一种可能的实现方式中,上述第二用户界面包括:第二显示区域和第二控件;其中:上述第二显示区域用于显示上述第一图片;上述方法还包括:上述电子设备检测到作用于 上述第二控件的第二用户操作,响应于上述第二用户操作,上述电子设备显示用于编辑上述第一图片的第二用户界面。
在一些实施例中,上述第二用户操作可以是点击操作。
本申请实施例提供的技术方案可以通过点击第二控件使电子设备显示用于编辑第一图片的第二用户界面,用户可对第一图片进行光效编辑。该技术方案可提升用户与电子设备的互动性。
在一种可能的实现方式中,上述第二用户界面还包括:光源指示符;其中,上述光源指示符用于指示上述拍摄场景中光源的光照方向;上述方法还包括:上述电子设备检测到作用于上述光源指示符的第三用户操作,响应于上述第三用户操作,更新上述光照方向,并重新执行上述电子设备采用上述已选定的光效模板对应的光效参数、光照方向以及深度数据对拍摄的图片进行处理的步骤。
在一些实施例中,上述第三用户操作可以是滑动操作。
本申请实施例提供的技术方案可以通过滑动光源指示符改变光源的光照方向,使电子设备根据新的光照方向对拍摄的图片进行处理。该技术方案可提升用户与电子设备的互动性。
在一种可能的实现方式中,上述第二用户界面还包括:光强指示符;其中,上述光强指示符用于指示上述光源的光照强度;上述方法还包括:上述电子设备检测到作用于上述光强指示符的第四用户操作,响应于上述第四用户操作,更新上述光源强度,并采用上述已选定的光效模板对应的光效参数、光照方向、光源强度以及深度数据对拍摄的图片进行处理。
在一些实施例中,上述第四用户操作可以是滑动操作,用于增大或减小光照强度。
在一些具体的实施例中,上述第四用户操作可以是左滑或者右滑的用户操作。
在一些具体的实施例中,上述第四用户操作可以是上滑或者下滑的用户操作。
在一些实施例中,上述第四用户操作可以是点击操作。
本申请实施例提供的技术方案可以通过对光强指示符的第四用户操作改变光源的光照强度,使电子设备根据新的光照强度对拍摄的图片进行处理。该技术方案可提升用户与电子设备的互动性。
在一种可能的实现方式中,上述第二用户界面还包括上述光效模板选项栏;上述方法还包括:上述电子设备检测到作用于上述光效模板选项栏的第五用户操作,响应于上述第五用户操作,更新上述已选定的光效模板,并重新执行上述电子设备采用上述已选定的光效模板对应的光效参数、光照方向以及深度数据对拍摄的图片进行处理的步骤。
在一些实施例中,上述第五用户操作可以是对光效模板选项栏中包含的一个光效模板选项的点击操作,使电子设备根据新的光效模板对应的光效参数对拍摄的图片进行处理。该技术方案可提升用户与电子设备的互动性。
第二方面,本申请实施例提供了一种电子设备,包括:一个或多个处理器、存储器、一个或多个摄像头、触摸屏;上述存储器、上述一个或多个摄像头以及上述触摸屏与上述一个或多个处理器耦合,上述存储器用于存储计算机程序代码,上述计算机程序代码包括计算机指令,上述一个或多个处理器调用上述计算机指令以执行:开启上述摄像头采集拍 摄对象的图像;显示第一用户界面;其中,上述第一用户界面包括:第一显示区域、拍摄模式列表、光效模板选项栏;上述拍摄模式列表包括一个或多个拍摄模式的选项,上述一个或多个拍摄模式包括第一拍摄模式,上述第一拍摄模式已被选定,上述第一拍摄模式为突出显示拍摄的图片中包含的人物的拍摄模式,上述光效模板选项栏中包括两个或两个以上光效模板的选项;上述光效模板包括一个或多个光效参数,用于处理采用上述第一拍摄模式拍摄的图片;在上述第一显示区域中显示上述摄像头采集的图像;在上述光效模板选项栏中突出显示与拍摄场景匹配的光效模板的选项;其中,上述拍摄场景为上述第一显示区域中显示的图像对应的拍摄场景。
在一种可能的实现方式中,上述第一用户界面还包括拍摄控件和第一控件;上述处理器在上述光效模板选项栏中突出显示与拍摄场景匹配的光效模板的选项之后,上述处理器还执行:在检测到作用于上述拍摄控件的用户操作后,采用已选定的光效模板对应的光效参数对拍摄的图片进行处理,生成第一图片;在上述第一控件中显示上述第一图片的缩略图;其中,上述第一图片的缩略图包含的像素点少于上述第一图片包含的像素点。
在一种可能的实现方式中,上述已选定的光效模板为上述与拍摄场景匹配的光效模板。
在一种可能的实现方式中,上述处理器采用已选定的光效模板对应的光效参数对拍摄的图片进行处理,生成第一图片时具体执行:上述处理器采用已选定的光效模板对应的光效参数、光照方向以及深度数据对拍摄的图片进行处理,生成第一图片;其中,上述光照方向为根据上述第一显示区域中显示的图片识别的光照方向,上述深度数据为上述拍摄对象的深度数据。
在一种可能的实现方式中,上述处理器采用已选定的光效模板对应的光效参数、光照方向以及深度数据对拍摄的图片进行处理之后,上述处理器生成第一图片之前,上述处理器还执行:根据上述已选定的光效模板对应的光效参数以及上述深度数据分别对人像部分和背景部分进行处理;其中,上述人像部分和上述背景部分为根据上述拍摄的图片分割得到。
在一种可能的实现方式中,上述在上述光效模板选项栏中突出显示与拍摄场景匹配的光效模板的选项包括以下一项或多项:在上述光效模板选项栏中的第一显示位置显示上述与拍摄场景匹配的光效模板的选项;在上述光效模板选项栏中高亮显示上述与拍摄场景匹配的光效模板的选项;在上述光效模板选项栏中动态显示上述与拍摄场景匹配的光效模板的选项。
在一种可能的实现方式中,上述在上述第一控件中显示上述第一图片的缩略图之后,上述处理器还执行:检测到作用于上述第一控件的第一用户操作,响应于上述第一用户操作,上述电子设备显示用于查看上述第一图片的第二用户界面。
在一种可能的实现方式中,上述第二用户界面包括:第二显示区域和第二控件;其中:上述第二显示区域用于显示上述第一图片;上述处理器还执行:检测到作用于上述第二控件的第二用户操作,响应于上述第二用户操作,上述电子设备显示用于编辑上述第一图片的第二用户界面。
在一种可能的实现方式中,上述第二用户界面还包括:光源指示符;其中,上述光源指示符用于指示上述拍摄场景中光源的光照方向;上述处理器还执行:检测到作用于上述 光源指示符的第三用户操作,响应于上述第三用户操作,更新上述光照方向,并重新执行上述采用上述已选定的光效模板对应的光效参数、光照方向以及深度数据对拍摄的图片进行处理。
在一种可能的实现方式中,上述第二用户界面还包括:光强指示符;其中,上述光强指示符用于指示上述光源的光照强度;上述处理器还执行:检测到作用于上述光强指示符的第四用户操作,响应于上述第四用户操作,更新上述光源强度,并采用上述已选定的光效模板对应的光效参数、光照方向、光源强度以及深度数据对拍摄的图片进行处理。
在一种可能的实现方式中,上述第二用户界面还包括上述光效模板选项栏;上述处理器还执行:检测到作用于上述光效模板选项栏的第五用户操作,响应于上述第五用户操作,更新上述已选定的光效模板,并重新执行上述电子设备采用上述已选定的光效模板对应的光效参数、光照方向以及深度数据对拍摄的图片进行处理。
第三方面,本申请实施例提供了一种电子设备上的图形用户界面,上述电子设备具有触摸屏、摄像头、存储器和用以执行存储于上述存储器中的程序的处理器,上述图形用户界面包括第一用户界面,上述第一用户界面包括:第一显示区域、拍摄模式列表、光效模板选项栏,上述拍摄模式列表包括一个或多个拍摄模式的选项,上述一个或多个拍摄模式包括第一拍摄模式,上述第一拍摄模式已被选定,上述第一拍摄模式为突出显示拍摄的图片中包含的人物的拍摄模式,上述光效模板选项栏中包括两个或两个以上光效模板的选项,上述光效模板包括一个或多个参数,用于处理采用上述第一拍摄模式拍摄的图片;其中:在上述第一显示区域中显示上述摄像头采集的图像;在上述光效模板选项栏中突出显示与拍摄场景匹配的光效模板的选项;其中,上述拍摄场景为上述第一显示区域中显示的图像对应的拍摄场景。在一种可能的实现方式中,上述在上述光效模板选项栏中突出显示与拍摄场景匹配的光效模板的选项包括以下一项或多项:在上述光效模板选项栏中的第一显示位置显示上述与拍摄场景匹配的光效模板的选项;在上述光效模板选项栏中高亮显示上述与拍摄场景匹配的光效模板的选项;在上述光效模板选项栏中动态显示上述与拍摄场景匹配的光效模板的选项。
在一种可能的实现方式中,上述第一用户界面还包括拍摄控件和第一控件;其中:响应于检测到的作用于上述拍摄控件的用户操作,将第一图片的缩略图显示在上述第一控件内;其中,上述第一图片为拍摄的图片,上述第一图片的缩略图包含的像素点少于上述第一图片包含的像素点;响应于检测到的作用于上述第一控件的用户操作,显示用于查看上述第一图片的第二用户界面。
在一种可能的实现方式中,上述第二用户界面包括:第二显示区域及第二控件;其中,上述第二显示区域用于显示上述第一图片;响应于检测到的作用于上述第二控件的用户操作,显示用于编辑上述第一图片的第二用户界面。
在一种可能的实现方式中,上述第二用户界面还包括:光源指示符、光强指示符、上述光效模板选项栏;其中,上述光源指示符用于指示上述拍摄场景中光源的光照方向,上述光强指示符用于指示上述光源的光照强度;响应于检测到的作用于上述光源指示符的用户操作,更新上述光源指示符的显示位置及上述第二显示区域内显示的图片;响应于检测到的作用于上述光强指示符的用户操作,更新显示上述光强指示符及上述第二显示区域内 显示的图片;响应于检测到的作用于上述光效模板选项栏的用户操作,更新显示上述光效模板选项栏及上述第二显示区域内显示的图片。
第四方面,本申请实施例提供了一种计算机存储介质,包括计算机指令,当该计算机指令在电子设备上运行时,使得该电子设备执行本申请实施例第一方面或第一方面的任意一种实现方式提供的拍照方法。
第五方面,本申请实施例提供了一种计算机程序产品,当该计算机程序产品在电子设备上运行时,使得该电子设备执行本申请实施例第一方面或第一方面的任意一种实现方式提供的拍照方法。
可以理解地,上述提供的第二方面提供的电子设备、第四方面提供的计算机存储介质,以及第五方面提供的计算机程序产品均用于执行第一方面所提供的拍照方法,因此,其所能达到的有益效果可参考第一方面所提供的拍照方法中的有益效果,此处不再赘述。
附图说明
图1A为本申请实施例提供的电子设备的结构示意图;
图1B为本申请实施例提供的3D感测模块结构示意图;
图1C为本申请实施例提供的电子设备的软件结构框图;
图2为本申请实施例提供的一种用户界面示意图;
图3为本申请实施例涉及的另一种用户界面示意图;
图4-图5为本申请实施例提供的一种用户界面实施例的示意图;
图6为本申请实施例提供的又一种用户界面实施例的示意图;
图7-图8为本申请实施例提供的另一种用户界面实施例的示意图;
图9为本申请实施例提供的另一种用户界面示意图;
图10为本申请实施例提供的另一种用户界面示意图;
图11为本申请实施例提供的图像处理方法流程示意图;
图12为本申请实施例提供的人脸光效渲染方法流程示意图;
图13为本申请实施例提供的人像分割结果示意图;
图14为本申请实施例提供的五官分割结果示意图;
图15为本申请实施例提供的整体光效渲染方法流程示意图;
图16-图22为电子设备内部的硬件驱动交互的流程示意图;
图23为本申请实施例提供的人机交互示意图;
图24为电子设备内部的硬件驱动交互的流程示意图。
具体实施方式
本申请实施例提供了一种图像处理方法,可以应用于电子设备对相机应用拍摄的图片进行处理。
本申请中,电子设备可以在开启第一拍摄模式的情况下,根据拍摄场景为用户推荐合适的光效模板,减少用户操作,提高手机的使用效率。进一步地,电子设备还可以结合深度数据对相机应用拍摄的图片进行光效渲染,提升图片的立体感。
本申请实施例中涉及的电子设备可以是手机、平板电脑、桌面型、膝上型、笔记本电脑、超级移动个人计算机(Ultra-mobile Personal Computer,UMPC)、手持计算机、上网本、个人数字助理(Personal Digital Assistant,PDA)、可穿戴电子设备、虚拟现实设备等。
首先,介绍本申请实施例中涉及的几个概念。
第一拍摄模式:针对拍摄对象为人物时设置的拍摄模式,以突出人物,提升拍摄图片中人物的美感。当电子设备开启第一拍摄模式时,电子设备可以采用较大的光圈保持景深较浅,以突出人物,通过特定的算法改善色彩效果,以优化人物肤色。在检测到环境光线强度低于一定阈值时,电子设备还可以开启闪光灯进行光照补偿。电子设备可以提供多种拍摄模式,不同拍摄模式下的光圈大小、快门速度以及感光度(International Standardization Organization,ISO)等拍摄参数各不相同,对于拍摄得到的图片的处理算法也各不相同。本申请的实施例中,第一拍摄模式可以称为人像拍摄模式。本申请对第一拍摄模式的命名不做限制。
光效模板:多个光效参数的集合,可用于处理用户选择第一拍摄模式所拍摄的图片。本申请实施例中,光效参数的集合可以包括以下参数中的一个或多个:漫反射图层、高光图层以及阴影图层的融合参数,RGB图片的背景部分和整体光效渲染中背景的投影纹理图层的融合参数,投影纹理的颜色(像素值),投影纹理的拉伸值,纹理图案投影的位置,投影的方向,人像的投影纹理图层、人脸光效渲染结果和RGB图片中人脸部分的融合参数等。以上列举的参数仅为示例性说明,在具体实现中光效参数的集合还可以包括其他参数,本申请实施例对此不作限定。
电子设备在第一拍摄模式下可以提供两个或两个以上的光效模板,不同光效模板对应不同的光效参数集合。采用不同的光效模板对图片进行处理,电子设备可以获得不同效果的图片。光效模板例如可以是柔光、剧场光、教堂光、树影光、窗影光、双色光等模板。
拍摄场景:在开启第一拍摄模式的情况下,电子设备通过相机应用拍摄的包含人物的图片中,人物所处的环境,即为拍摄场景。
光效渲染:对图片进行处理的方法,可以使图片呈现立体效果。本申请实施例中的光效渲染可以包括对人脸的光效渲染,或者本申请实施例中的光效渲染可以包括对人脸的光效渲染以及整体光效渲染。光效渲染的详细过程可见后续实施例中的描述。
下面介绍本申请以下实施例中提供的示例性电子设备10。
图1A示出了电子设备10的结构示意图。
电子设备10可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,用户标识模块(subscriber identification module,SIM)卡接口195,以及3D感测模块196等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感 器180L,骨传导传感器180M等。
可以理解的是,本发明实施例示意的结构并不构成对电子设备10的具体限定。在本申请另一些实施例中,电子设备10可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图片信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
其中,控制器可以是电子设备10的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了***的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备10的结构限定。在本申请另一些实施例中,电子设备10也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备10的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备10的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备10中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备10上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在电子设备10上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星***(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备10的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备10可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯***(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位***(global positioning system,GPS),全球导航卫星***(global navigation satellite system,GLONASS),北斗卫星导航***(beidou navigation satellite system,BDS),准天顶卫星***(quasi-zenith satellite system,QZSS)和/或星基增强***(satellite based augmentation systems,SBAS)。
电子设备10通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。本 申请中,GPU可以用于计算以下几个方面:人脸光效渲染过程中高光及漫反射模型的计算;光源与各网格面片的遮挡关系;高光图层、漫反射图层以及阴影图层的融合结果;在整体光效渲染的过程中对RGB图片的背景部分做高斯模糊;各网格顶点的投影纹理坐标;人像区域中人像纹理图层、人脸光效渲染结果、原始RGB图片的融合结果;背景区域中背景的纹理投影图层与高斯模糊后的背景的融合结果等。
显示屏194用于显示图片,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备10可以包括1个或N个显示屏194,N为大于1的正整数。本申请中,显示屏194可以用于显示待拍摄的图片、光效渲染后的图片等。
电子设备10可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图片。ISP还可以对图片的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图片或视频。物体通过镜头生成光学图片投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图片信号。ISP将数字图片信号输出到DSP加工处理。DSP将数字图片信号转换成标准的RGB,YUV等格式的图片信号。在一些实施例中,电子设备10可以包括1个或N个摄像头193,N为大于1的正整数。本申请中摄像头193分为两种,前置摄像头和后置摄像头。前置摄像头为位于电子设备10正面的摄像头,后置摄像头为位于电子设备10背面的摄像头。
数字信号处理器用于处理数字信号,除了可以处理数字图片信号,还可以处理其他数字信号。例如,当电子设备10在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备10可以支持一种或多种视频编解码器。这样,电子设备10可以播放或录制多种编码格式的视频,例如:动态图片专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备10的智能认知等应用,例如:图片识别,人脸识别,语音识别,文本理解等。本申请中,可以通过NPU实现电子设备10智能识别拍摄场景的功能,还可以通过NPU实现电子设备10智能识别光照方向的功能。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备10的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备10的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作***,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备10使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。本申请中,内部存储器121可以存储通过相机应用拍摄的图片,还可以用于存储拍摄场景与匹配的光效模板的映射关系表,还可以用于存储拍摄场景的识别结果、人脸光照方向的识别结果、人像分割的结果、生成的网格数据、五官分割的结果等。
电子设备10可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备10可以通过扬声器170A收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备10接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备10可以设置至少一个麦克风170C。在另一些实施例中,电子设备10可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备10还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。
陀螺仪传感器180B可以用于确定电子设备10的运动姿态。
气压传感器180C用于测量气压。
磁传感器180D包括霍尔传感器。
加速度传感器180E可检测电子设备10在各个方向上(一般为三轴)加速度的大小。
距离传感器180F,用于测量距离。电子设备10可以通过红外或激光测量距离。在一 些实施例中,拍摄场景,电子设备10可以利用距离传感器180F测距以实现快速对焦。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。
环境光传感器180L用于感知环境光亮度。电子设备10可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测电子设备10是否在口袋里,以防误触。
指纹传感器180H用于采集指纹。
温度传感器180J用于检测温度。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备10的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备10可以接收按键输入,产生与电子设备10的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏194不同区域的触摸操作,马达191也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口195用于连接SIM卡。SIM卡可以通过***SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备10的接触和分离。电子设备10可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口195可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口195可以同时***多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口195也可以兼容不同类型的SIM卡。SIM卡接口195也可以兼容外部存储卡。电子设备10通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备10采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备10中,不能和电子设备10分离。
3D感测模块196可以获取深度数据,在拍照过程中获取的深度数据可以传递给GPU对摄像头193获取的图片进行3D渲染。
以电子设备10是手机为例,结合图1B介绍安装在电子设备10上的3D感测模块196的结构。3D感测模块196可以是飞行时间(time of flight,TOF)3D感测模块或结构光3D感测模块,可设置于电子设备10的顶端,如电子设备10的“刘海”位置(即图1B中示出的区域AA)。可以知道,区域AA中除了包括3D感测模块196之外,还可以包括摄像头193、接近光传感器180G、受话器170B、麦克风170C等。本申请实施例以电子设备10中集成 有结构光3D感测模块196为例进行说明,结构光3D感测模块196在电子设备10中的布置形式为:结构光3D感测模块196包括红外光相机196-1及点阵投射器196-2等模组。其中,点阵投射器196-2包括高功率的激光器(如VCSEL)及衍射光学组件等,即结构光发射器,用于使用高功率的激光器发射出“结构”的红外光激光,投射在物体表面。
示例性的,上述结构光3D感测模块196获取深度数据的过程为:当处理器110检测到当前拍摄模式为人像模式时,控制点阵投射器196-2启动。点阵投射器196-2中的高功率的激光器发射红外光激光,这些红外光激光经由点阵投射器196-2中的衍射光学组件等结构的作用,产生形成许多(如大约3万个)“结构”光的光点投射到拍摄目标表面。利用这些结构光的光点所形成的阵列被拍摄目标表面不同位置反射,红外光相机196-1捕捉到拍摄目标表面反射的结构光的光点,从而获取到拍摄目标表面不同位置的深度数据,然后将所获取到的深度数据上传给处理器110。
此外,结构光3D感测模块196获取的深度数据还可以用于进行人脸识别,例如在电子设备10解锁时通过识别机主的人脸进行解锁。上述结构光3D感测模块196进行人脸识别时,除了包括上述红外光相机196-1及点阵投射器196-2外,还可以包括泛光照明器、红外影像传感器和前述接近光传感器180G等模组。其中,泛光照明器包括低功率的激光器(如VCSEL)及匀光片等,用于使用低功率的激光器发射出“非结构”的红外光激光,投射在物体表面。
当有物体(如人脸)靠近电子设备10时,接近光传感器180G感应到有物体靠近电子设备10,从而向电子设备10的处理器110发出有物体靠近的信号。处理器110接收该有物体靠近的信号,控制泛光照明器启动,泛光照明器中的低功率的激光器向物体表面投射红外光激光。物体表面反射泛光照明器所投射的红外光激光,红外光相机捕捉到物体表面所反射的红外光激光,从而获取到物体表面的影像信息,然后将所获取到的影像信息上传给处理器110。处理器110根据所上传的影像信息判断接近电子设备10的物体是否为人脸。当处理器110判断接近电子设备10的物体为人脸时,控制点阵投射器196-2启动。后续具体实现与前述检测到当前拍摄模式为人像模式时的具体实现类似,获取深度数据并上传给处理器110。处理器110将所上传的深度数据与预先存储在电子设备10中的用户脸部特征数据进行比对和计算,辨识该接近电子设备10的人脸是否为电子设备10的用户的脸部,如果是,则控制电子设备10解锁;如果否,控制电子设备10继续保持锁定状态。
电子设备10的软件***可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本发明实施例以分层架构的Android***为例,示例性说明电子设备10的软件结构。
图1C是本发明实施例的电子设备10的软件结构框图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android***分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和***库,以及内核层。
应用程序层可以包括一系列应用程序包。
如图1C所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN, 蓝牙,音乐,视频,短信息等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图1C所示,应用程序框架层可以包括窗口管理器,内容提供器,视图***,电话管理器,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图片,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图***包括可视控件,例如显示文字的控件,显示图片的控件等。视图***可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供电子设备10的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓***的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
***库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子***进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图片文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
下面结合捕获拍照的使用场景,示例性说明电子设备10软件以及硬件的工作流程。
当触摸传感器180K接收到触摸操作,相应的硬件中断被发给内核层。内核层将触摸操作加工成原始输入事件(包括触摸坐标,触摸操作的时间戳等信息)。原始输入事件被存储在内核层。应用程序框架层从内核层获取原始输入事件,识别该输入事件所对应的控件。 以该触摸操作是触摸单击操作,该单击操作所对应的控件为相机应用图标的控件为例,相机应用调用应用框架层的接口,启动相机应用,进而通过调用内核层启动摄像头驱动,通过摄像头193捕获静态图像或视频。
图2示例性示出了用于电子设备10上的应用程序菜单的用户界面。
图2中用户界面20可包括状态栏202、时间组件图标204和天气组件图标203、多个应用程序的图标例如相机图标201、微信图标208、设置图标209、相册图标207、微博图标206、支付宝图标205等,界面20中还可以包括页面指示符210、电话图标211、短信图标212及联系人图标213等。其中:
状态栏202可以包括:运营商指示符(例如运营商的名称“***”)、无线高保真(wireless fidelity,Wi-Fi)信号的一个或多个信号强度指示符、移动通信信号(又可称为蜂窝信号)的一个或多个信号强度指示符和电池状态指示符。
时间组件图标204可用于指示当前时间,例如日期、星期几、时分信息等。
天气组件图标203可用于指示天气类型,例如多云转晴、小雨等,还可以用于指示气温等信息。
页面指示符210可用于指示用户当前浏览的是哪一个页面中的应用程序。用户可以左右滑动多个应用程序图标的区域,来浏览其他页面中的应用程序图标。
可以理解的是,图2仅仅示例性示出了电子设备10上的用户界面,不应构成对本申请实施例的限定。
电子设备10可以检测到作用于相机图标201的用户操作,响应于该操作,电子设备10可以显示用于拍摄照片的用户界面。该用户界面可以是图3实施例涉及的用户界面30。也即是说,用户可以点击相机图标201打开用于拍摄照片的用户界面。
下面结合本申请涉及的应用场景:图像拍摄场景。
图3示例性示出了用于图像拍摄的用户界面。该用户界面可以是用户点击图2实施例中的相机图标201打开的用户界面,不限于此,用户也可以在其他应用程序中打开用于拍摄照片的用户界面,例如用户在微信中点击拍摄控件来打开用于拍摄照片的用户界面。
如图3所示,用于拍摄照片的用户界面30可包括:取景框301、拍摄控件302、拍摄模式列表303、控件304及控件305。其中:
取景框301可用于显示摄像头193获取的图片。电子设备可以实时刷新其中的显示内容。其中,用于获取图片的摄像头193可以是后置摄像头,或者是前置摄像头。
拍摄控件302可用于监听触发拍摄的用户操作。电子设备可以检测到的作用于拍摄控件302的用户操作(如在拍摄控件302上的点击操作),响应于该操作,电子设备10可以确定拍摄的图片,并且在305中显示该拍摄的图片。也即是说,用户可以点击拍摄控件302来触发拍摄。其中,拍摄控件302可以是按钮,或者其他形式的控件。
拍摄模式列表303中可以显示有一个或多个拍摄模式选项。电子设备10可以检测到作用于拍摄模式选项的用户操作,响应该操作,电子设备10可以开启用户选择的拍摄模式。电子设备还可以检测到在拍摄模式列表303中的滑动操作(如向左或向右的滑动操作),响 应于该操作,电子设备10可以切换显示在拍摄模式列表303中的拍摄模式选项,以便用户浏览更多拍摄模式选项。其中,拍摄模式选项可以是图标,或者其他形式的选项。拍摄模式列表303中可以包括:人像拍摄模式的图标303A、拍照拍摄模式的图标303B、录像拍摄模式的图标303C、大光圈拍摄模式的图标303D、夜景拍摄模式的图标303E、慢动作拍摄模式的图标303F。
控件304可用于监听触发切换摄像头的用户操作。电子设备10可以检测到作用于控件304的用户操作(如在控件304上的点击操作),响应于该操作,电子设备10可以切换摄像头(如将后置摄像头切换为前置摄像头,或者将前置摄像头切换为后置摄像头)。
控件305可用于监听触发打开相册的用户操作。电子设备10可以检测到作用于控件305的用户操作(如在控件305上的点击操作),响应于该操作,电子设备10可以打开相册,显示最新保存的图片。
基于前述图像拍摄场景,下面介绍电子设备10上实现的用户界面(user interface,UI)的一些实施例。
图4示例性示出了用户界面30用于用户选择人像拍摄模式的UI实施例。
如图4所示,电子设备10可以检测到作用于拍摄模式列表303中的人像拍摄模式选项的用户操作(如在人像拍摄模式图标303A上的点击操作),响应于该操作,电子设备10可以开启第一拍摄模式。
在一些实施例中,响应于该操作,电子设备10还可以更新人像拍摄模式选项的显示状态,更新后的显示状态可表示人像拍摄模式已被选定。例如,更新后的显示状态可以是高亮拍摄模式图标303A对应的文本信息“人像”。不限于此,更新后的显示状态还可以呈现其他界面表现形式,如该文本信息“人像”的字体变大、该文本信息“人像”被加框、该文本信息“人像”被加下划线、图标303A颜色加深等。
在一些实施例中,响应于该操作,电子设备10还可以在用户界面30中显示控件306。控件306可用于监听打开光效模板选项栏的用户操作。
电子设备10可以检测到作用于控件306的用户操作,响应于该操作,电子设备10可以在用户界面30中显示光效模板选项栏307,可参考图5。光效模板选项栏307中包括两个或两个以上的光效模板选项。
光效模板选项栏307中的光效模板选项可用于监听用户的选择操作。具体的,电子设备可以在光效模板选项栏307中检测到作用于光效模板选项的用户操作(如在“光效1”上的点击操作),响应于该操作,电子设备可以确定已选定的光效模板为用于处理拍摄图像的光效模板。这里,已选定的光效模板可以为该操作所作用于的光效模板选项对应的光效模板。例如,如果该操作为点击“光效1”的操作,则已选定的光效模板为“光效1”对应的光效模板1。
在一些实施例中,响应于该操作,电子设备10可以更新被选定的光效模板选项的显示状态,更新后的显示状态可表示该光效模板已被选定。
例如,若被选定的光效模板为光效模板1,更新后的显示状态可以是高亮被选定的光效模板图标对应的文本信息“光效1”。不限于此,更新后的显示状态还可以呈现其他界面表 现形式,如该文本信息“光效1”的字体变大、该文本信息“光效1”被加框、该文本信息“光效4”被加下划线、被选定的光效模板图标颜色加深等。后续实施例中将该被选定的光效模板称为第一光效模板。
电子设备10还可以检测到在光效模板选项栏307中的滑动操作(如向左或向右的滑动操作),响应于该操作,电子设备10可以切换显示在光效模板选项栏307中的光效模板选项,以便用户浏览更多光效模板选项。电子设备10可以根据当前拍摄场景显示光效模板选项栏中的光效模板选项,光效模板选项栏用于监听选择光效模板的用户操作,可参考图6。在一些实施例中,响应于该操作,电子设备10还可以直接在用户界面30中显示光效模板选项栏,无需显示控件306,并通过控件306监听打开光效模板选项栏的用户操作。
图6示例性示出了电子设备10根据当前拍摄场景,推荐光效模板选项的UI实施例。
如图6所示,电子设备10在拍摄内容308内没有识别到当前拍摄场景为白云的情况下,光效模板选项栏307内包含的光效模板选项的排列顺序从左至右依次为:光效1、光效2、光效3、光效4、光效5。电子设备10在拍摄内容309内识别到当前拍摄场景为白云的情况下,光效模板选项栏307内包含的光效模板选项的排列顺序从左至右依次为:光效4、光效1、光效2、光效3、光效5。可以看出,电子设备10可将与拍摄场景“白云”匹配的光效模板4作为推荐光效模板,将其对应的选项“光效4”(307A)显示在光效模板选项栏307左侧的第一个位置。示例仅为本申请提供的一种实施例,不限于此,还可以有其他的实施例。
也即是说,电子设备10可以根据当前拍摄场景推荐光效模板选项。与当前拍摄场景匹配的光效模板选项的显示状态为第一显示状态。第一显示状态可用于突出该选项,提示用户该选项对应的模板是适合于当前拍摄场景的模板,便于用户快速识别并选择该选项,可有效向用户推荐该选项。第一显示状态可以通过以下一种或多种方式实现:该选项在选项栏中的显示位置为第一位置(如左侧第一个显示位置、正中间的显示位置等)、该选项被高亮显示、该选项对应文本信息(如“光效1”)的字体为大字体、该选项对应的图标呈现动态变化(例如呈现出心跳效果)。
图7示例性示出了用户界面30用于用户拍照的UI实施例,图7示出的用户界面30中,选项“光效4”被高亮,表示电子设备10已确定选项“光效4”对应的光效模板4为第一光效模板。
如图7所示,在电子设备10已确定光效模板4为第一光效模板的情况下,电子设备10可以检测到作用于拍摄控件302的用户操作(如在拍摄控件302上的点击操作),响应于该操作,电子设备10拍摄图片,并采用第一光效模板对应的光效参数对该图片进行处理。其中,电子设备10拍摄的图片可以是电子设备10在检测到上述用户操作的时刻拍摄的图片。电子设备10拍摄的图片还可以是电子设备10在检测到上述用户操作的时刻之前的一段时间内拍摄的一系列图片。该一段时间例如可以是5ms、10ms等。
在一些实施例中,响应于该操作,电子设备10还可在控件305中显示该图片的缩略图,可参考图8。其中,该图片的缩略图包含的像素点少于该图片包含的像素点。电子设备10 可以检测到作用于控件305的用户操作(如在控件305上的点击操作)。响应于该操作,电子设备10可以显示经过第一光效模板的光效参数处理后的图片的用户界面40,用户界面40可参考图9。也即是说,用户可以点击控件305打开用于显示图片的用户界面40。不限于此,用户也可以在其他应用程序中打开用于显示图片的用户界面,例如用户在界面20中点击相册应用的图标207来打开用于显示图片的用户界面,又例如用户在微信中点击照片控件来打开用于显示图片的用户界面。
图9-图10示例性示出了用户界面40。
如图9所示,用户界面40可以包括:图片内容显示区域401及控件402。图片内容显示区域401用于显示上述在第一拍摄模式下,经过第一光效模板的光效参数处理后生成的图片,可以将该图片称为第一图片。
电子设备10可以检测到作用于控件402的用户操作(如在控件402上的点击操作),响应于该操作,电子设备10还可以在用户界面40中显示:光源指示符403、光强指示符404、光效模板选项栏405、取消控件406和保存控件407,可参考图10。其中:
光源指示符403为根据光照方向设置的虚拟的光源的指示符,可以用于指示拍摄场景中实际的光源的光照方向。
电子设备10可以在识别拍摄场景时,根据取景框301内显示的人脸的图像识别光照方向。其中,根据人脸的图像识别光照方向的具体方式将在后续图17实施例中详细介绍,此处暂不详述。
电子设备10可以检测到作用于光源指示符403的用户操作(如在光源指示符403上的滑动操作),响应于该操作,电子设备10可以更新显示光源指示符403。其中,电子设备10更新显示光源指示符403的详细过程可参考图23实施例中的描述,此处暂不详述。在一些实施例中,响应于该操作,电子设备10还可以根据更新显示的光源指示符403指示的光照方向,更新显示图片内容显示区域401内的图片。
光强调节符404可以用于指示光源的光照强度。
电子设备10可以检测到作用于光强调节符404的第一用户操作(如在光强调节符404上的左滑操作),响应于该操作,电子设备10可以更新显示光强调节符404,更新显示的光强调节符404指示的光照强度变弱。响应于该操作,电子设备10还可以根据变弱的光照强度,更新显示图片内容显示区域401内的图片。电子设备10还可以检测到作用于光强调节符404的第二用户操作(如在光强调节符404上的右滑操作),响应于该操作,电子设备10更新显示光强调节符404,更新显示的光强调节符404指示的光照强度变强。
在一些实施例中,响应于该操作,电子设备10还可以根据变强的光照强度,更新显示图片内容显示区域401内的图片。不限于404所示的水平的光强调节符,还可以有其他形式的光强调节符,如竖直的光强调节符,或者以加号和减号形式呈现的光强调节符,本申请实施例对此不作限定。不限于左滑的用户操作,第一用户操作还可以是下滑或者点击的用户操作。不限于右滑的用户操作,第二用户操作还可以是上滑或者点击的用户操作。本申请实施例对此均不作限定。
光效模板选项栏405可包括两个或两个以上的光效模板选项,光效模板选项栏405中 第一光效模板的选项被特殊标记(如图10中的选项“光效4”),表示图片内容显示区域401内显示的图片已被第一光效模板对应的光效参数处理过。
电设备10可以检测到作用于光效模板选项栏405中第二光效模板选项的用户操作(如在选项“光效3”上的点击操作),响应于该操作,电子设备10更新显示第二光效模板选项和第一光效模板选项的显示状态。
其中,第二光效模板为除第一光效模板之外的其他光效模板。更新后的第二光效模板选项的显示状态可表示第二光效模板已被选定,更新后的第一光效模板选项的显示状态可表示第一光效模板已被取消选定。更新后的第二光效模板选项的显示状态可以与更新前的第一光效模板选项的显示状态一致,具体可参考图6实施例中的描述,此处不赘述。
其中,更新后的第一光效模板选项的显示状态可以与更新前的第二光效模板选项的显示状态一致。此外,响应于该操作,电子设备10还可以根据第二光效模板对应的光效参数更新显示图片内容显示区域401内的图片。
上述通过光源指示符403对光照方向的调节,通过光强调节符404对光照强度的调节以及通过光效模板选项栏405对光效模板的切换,均可称为光效编辑。
取消控件406可用于监听触发取消光效编辑的用户操作。电子设备10可以检测到作用于取消控件406的用户操作(如在取消控件406上的点击操作),响应于该操作,电子设备10可以取消对第一图片的光效编辑,更新显示图片内容显示区域401内的图片,即为第一图片。也即是说,用户可以点击取消控件406来触发取消对第一图片的光效编辑。
保存控件407可用于监听触发保存第二图片的用户操作。第二图片为图片内容显示区域401内显示的图片。电子设备10可以检测到作用于保存控件407的用户操作(如在保存控件407上的点击操作),响应于该操作,电子设备10可以保存图片内容显示区域401内显示的图片。第二图片可以是对第一图片光效编辑后生成的图片。也即是说,用户可以点击保存控件407来触发保存对第一图片光效编辑后生成的第二图片。
基于前述图4-图10的UI实施例,下面介绍本申请实施例提供的图像处理方法。
参见图11,图11是本申请提供的一种图像处理方法的流程示意图。本申请提供的图像处理方法主要分为三大过程:拍照、光效渲染、光效编辑。下面以电子设备为执行主体,展开描述:
首先,拍照的过程主要包括以下S101-S105。
S101:电子设备开启第一拍摄模式。
具体地,电子设备10开启第一拍摄模式的方式可以包括但不限于下述几种:
第一种方式,电子设备10可以通过触摸传感器180K在用户界面30中检测到作用在人像拍摄模式选项的用户操作,以开启第一拍摄模式。具体实现过程可参考图4实施例的描述,此处不赘述。
第二种方式:电子设备10可以通过触摸传感器180K在用户界面30中检测到作用在控件304的用户操作,以开启第一拍摄模式。也即是说,当电子设备10从后置摄像头切换为前置摄像头时,电子设备10即可开启第一拍摄模式。
此外,电子设备10开启第一拍摄模式后,还可以判断取景框301内是否存在人脸,若 存在,进一步判断人脸是否符合要求。若不存在人脸,则在取景框301内显示提示信息,如“未检测到人脸”。若判断出人脸不符合要求,则在取景框301内显示提示信息,如“未检测到符合要求的人脸”。其中,符合要求的人脸可以是以下一项或任意组合:单张人脸、人脸的角度不超过第一阈值、取景框301内人脸的面积与待拍摄的图片的总面积的比值大于或者等于第二阈值。人脸角度的检测方法在后续实施例中描述,此处暂不详述。
S102:电子设备识别待拍摄图片的拍摄场景。
具体地,在第一拍摄模式下,电子设备可以获取待拍摄图片的RGB数据,将该待拍摄图片的RGB数据输入第一模型,输出识别的拍摄场景。该第一模型由大量的已知拍摄场景的图片的RGB数据训练得到。该第一模型输出的结果可以是二进制的字符串,字符串的数值代表一个拍摄场景,字符串与拍摄场景的对应关系可以以表格的形式保存在电子设备10的内部存储器121中。例如,001代表场景1,010代表场景2,011代表场景3,100代表场景4等,依次类推。电子设备10可以根据第一模型输出的字符串,在表格中查找与该字符串对应的拍摄场景。字符串的位数可以依据所有的拍摄场景种类决定。本申请实施例中第一模型的输出形式进行示例性说明,在具体实现中还可以有其他的输出形式,本申请实施例对此不作限定。
此外,在识别拍摄场景的同时,电子设备10还可以识别人脸的光照方向,光照方向的识别结果可以用于后续的光效渲染过程中以及光效编辑过程中。光照方向的识别过程见后续实施例的描述,此处暂不详述。
S103:电子设备根据待拍摄图片的拍摄场景显示光效模板选项栏。
具体地,电子设备10内可以保存拍摄场景与匹配的光效模板的映射关系表。电子设备10根据映射关系表查找出与当前拍摄场景匹配的光效模板后,可以将该匹配的光效模板的选项的显示状态设置为第一显示状态,具体可参考图6实施例中的描述,此处不赘述。
以下示例性示出几种映射关系表。
在一些实施例中,拍摄场景与匹配的光效模板为一一对应的关系。如表1所示。
表1拍摄场景与匹配的光效模板的映射关系表
拍摄场景 光效模板
场景1 光效模板4
场景2 光效模板2
场景3 光效模板5
场景4 光效模板1
场景5 光效模板3
具体地,光效模板1对应的选项在界面30中显示为“光效1”,光效模板2-光效模板5类似,不赘述。
在一些实施例中,映射关系表中一个拍摄场景可以与多个匹配程度不同的光效模板对应。以一个拍摄场景对应三个匹配程度(高、中、低)不同的光效模板为例,如表2所示。
表2拍摄场景与匹配的光效模板的映射关系表
拍摄场景 光效模板(高) 光效模板(中) 光效模板(低)
场景1 光效模板4 光效模板2 光效模板1
场景2 光效模板2 光效模板4 光效模板5
场景3 光效模板5 光效模板1 光效模板3
场景4 光效模板1 光效模板3 光效模板2
场景5 光效模板3 光效模板5 光效模板4
表2中光效模板(高)表示匹配程度高的光效模板,光效模板(中)表示匹配程度中的光效模板,光效模板(低)表示匹配程度低的光效模板。
具体地,电子设备10可以根据表2,查找与拍摄场景匹配的三个光效模板,按照匹配程度从高到低依次将匹配的光效模板对应的选项显示在光效模板选项栏的最前面。
上述拍摄场景与匹配的光效模板的关系以映射关系表的形式仅为示例性说明,在具体实现中还可以有其他的形式,本申请实施例对此不作限定。
上述拍摄场景与匹配的光效模板的对应关系仅为示例性说明,在具体实现中还可以有其他的对应关系,本申请实施例对此不作限定。
S104:电子设备接收用于选择第一光效模板的用户操作。
具体地,用于选择第一光效模板的用户操作可以是作用于光效模板选项栏中第一光效模板选项的用户操作,如图6实施例中在图标307A上的点击操作,此处不赘述。电子设备接收用于选择第一光效模板的用户操作之后,开启第一光效模板,以使电子设备10在接收到拍照指令后,电子设备10可以确定第一光效模板为用于处理拍摄图像的光效模板,具体可参考图5实施例的描述,此处不赘述。
S105:电子设备在第一光效模板已被选定的情况下接收拍照指令。
具体地,拍照指令可以是作用于拍摄控件302的用户操作产生的指令,具体可见图7实施例的描述,此处不赘述。
具体地,电子设备10在t1时刻接收拍照指令后,可以获取t1时刻的RGB数据以及深度数据。为了避免由于RGB数据采集器件和深度数据采集器件之间的位置误差,需将RGB数据和深度数据进行坐标对齐,获得时间和坐标均对齐的RGBD数据(RGB数据和深度数据),用于后续的光效渲染过程及光效编辑过程。
在一些实施例中,RGB数据采集器件可以是后置摄像头,深度数据采集器件可以是后置摄像头,电子设备10可以根据后置摄像头采集的RGB数据计算出深度数据。电子设备10开启第一拍摄模式后,可实时计算深度数据。
在一些实施例中,RGB数据采集器件可以是前置摄像头,深度数据采集器件可以是3D感测模块196。电子设备10开启第一拍摄模式后,可实时采集深度数据。
其次,光效渲染的过程主要包括S106。由前述S105中的描述可知深度数据可以由后置摄像头获取的RGB数据计算得到,也可以由3D感测模块196采集得到。本申请实施例中涉及的光效渲染过程中使用的深度数据以由3D感测模块196采集得到为例进行说明。
S106:电子设备采用第一光效模板对应的光效参数对拍摄图片进行处理,生成第一图 片。
具体地,采用第一光效模板对应的光效参数对拍摄的图片进行光效渲染。光效渲染的过程可以包括人脸光效渲染,或者包括人脸光效渲染以及整体光效渲染。其中,人脸光效渲染即对图片中人脸部分的光效渲染,整体光效渲染即对整张图片的光效渲染。关于人脸光效渲染、整体光效渲染的具体实现,后续内容中会详细介绍,这里先不赘述。
最后,光效编辑的过程主要包括以下S107-S108。
S107:电子设备接收用户对第一图片进行光效编辑的指令。
具体地,电子设备10接收用户对第一图片进行光效编辑的指令之前,电子设备10在用户界面40中显示S105中生成的第一图片。电子设备10显示第一图片的过程可参考图8实施例的描述,此处不赘述。
具体地,电子设备10显示第一图片的界面可参考图9实施例中的用户界面40。用户对第一图片进行光效编辑的指令可由电子设备10检测到作用于控件402的用户操作产生。电子设备10检测到作用于控件402的用户操作后,响应于该操作,电子设备10可在用户界面40中进一步显示光源指示符403、光强指示符404、光效模板选项栏405、取消控件406和保存控件407,具体可参考图10实施例的描述,此处不赘述。
此外,响应于作用于控件402的用户操作,电子设备10在用户界面40中,还可以显示纹理图案投影位置的指示符,用户可以手动调节该指示符,改变背景及人像上的投影图案,增强用户与电子设备10的互动性。
S108:电子设备生成第二图片,并保存第二图片。
具体地,电子设备10检测到作用于光源指示符403的用户操作,或者作用于光强调节符404的用户操作,或者作用于光效模板选项栏405中第二光效模板选项的用户操作后,响应于上述用户操作,电子设备10可生成第二图片,显示在图片内容显示区域401内。
电子设备10检测到作用于保存控件407的用户操作后,响应于该操作,电子设备10保存第二图片。
可以知道的是,上述S102中输出的场景识别的结果、光照方向的识别结果,S201中建立的三维模型,S302获得的人像分割结果,S303获得的五官分割结果等中间结果均可被保存至电子设备10的内部存储器121中,以便用户对第一图片进行光效编辑时直接调用上述中间结果,减少计算量。
本申请实施例中,用户可以手动调节第一图片中的光照方向、光源强度以及更换光效模板,可以增强用户与电子设备10的互动性,提升用户体验。
接下来详细介绍S106中提到的人脸光效渲染、整体光效渲染的具体实现过程。
图12示出了本申请实施例涉及的人脸光效渲染,具体可以包括以下几个步骤:
阶段一(S201):建立三维模型。
S201:电子设备根据RGBD数据建立三维模型。
具体地,建立三维模型的过程包括以下几个步骤:
S2011:去除RGBD数据中的异常值,即去除离群点。
S2012:对去除异常值的RGBD数据进行补洞操作,即插值,以使数据连续、平滑且无空洞。
S2013:对补洞后的RGBD数据进行滤波操作,以去除噪声。
S2014:使滤波后的RGBD数据以固定步长的像素点组成规则三角面片网格。
具体地,网格通常由三角形、四边形或者其他简单凸多边形组成,以简化渲染过程。本申请实施例以网格由三角形组成为例进行说明。
阶段二(S202-S203):对图片进行分割。
S202:电子设备将拍摄的图片分割成人像和背景两部分,获得人像分割结果。
具体地,拍摄的图片即为电子设备10通过前置摄像头193采集的RGB数据构成的图片(以下称为RGB图片)。RGB图片包括多个像素点,每个像素点的像素值即为RGB值。
具体地,电子设备10可以通过计算前置摄像头193获取的RGB数据,得到人像分割图。例如可以采用基于边缘的分割方法进行人像分割,即计算各像素点的灰度值,找出图片中两个不同区域的边界线上连续的像素点的集合,这些连续的像素点两边的像素点的灰度值存在明显差异或者位于灰度值上升或下降的转折处。除了采用基于边缘的分割方法进行人像分割外,还可以采用其他方法,例如基于阈值的分割方法、基于区域的分割方法、基于图论的分割方法、基于能量泛函的分割方法等。上述人像分割的方法仅为示例性说明,本申请实施例对此不作限定。
具体地,人像分割图如图13所示,白色部分为人像部分,黑色部分为背景部分。将人像和背景分割,获得人像部分和背景部分,还可用于后续对整张图片进行渲染时,将人像部分和背景部分分开渲染,具体渲染过程可见后续实施例的描述,此处暂不介绍。
S203:电子设备在人像部分进行五官分割,获得五官分割结果。
具体地,电子设备将人像部分中人脸部分的RGB数据输入第三模型,可以输出分割结果,分割结果包括五官(眼睛、鼻子、眉毛、嘴巴、耳朵)、皮肤、头发和其他部分。根据第三模型输出的分割结果可获得五官分割图,如图14所示,不同灰度的区域表示不同的部分。第三模型由大量的已知分割结果的人脸部分的RGB数据训练得到。第三模型输出的结果形式可以以特定的二进制数字表示某像素点所属的部分(例如000表示眼睛、001表示鼻子、010表示眉毛、011表示嘴巴、100表示耳朵、101表示皮肤、110表示头发、111表示其他),处理器110可以将属于同一部分的像素点以相同的灰度表示,将不同部分的像素点以不同的灰度表示。上述五官分割的方法仅为示例性说明,在具体实现中还可以有其他的五官分割方法,本申请实施例对此不作限定。
上述S202-S203,与S201实现的顺序不作限定。即可以先执行S201再执行S202-S203,也可以先执行S202-S203再执行S201。
阶段三(S204-S206):分别计算三个图层中各个像素点的灰度值。
S204:电子设备将网格数据输入漫反射模型,输出漫反射图层。
具体地,本申请实施例中漫反射模型采用Oren-Nayar反射模型,Oren-Nayar反射模型的输入数据包括网格数据、五官分割图、光源照射至各个像素点时的光源强度、光源方向,Oren-Nayar反射模型的输出数据为各个像素点的灰度值,称为漫反射图层。其中,Oren-Nayar 反射模型的参数属于第一光效模板对应的光效参数集合,由S104中选定的光效模板决定。光源照射至各个像素点时的光源强度可由线性变换余弦(Linearly Transformed Cosines,LTC)算法计算得到。
S205:电子设备将网格数据输入高光反射模型,输出高光图层。
具体地,本申请实施例中高光反射模型采用GGX反射模型,输入数据与漫反射模型的输入数据与输出数据一致,GGX反射模型的输出称为高光图层。GGX反射模型的参数属于第一光效模板对应的光效参数集合,由S104中选定的光效模板决定。
S206:电子设备计算每个网格是否被遮挡,若被遮挡,对该网格进行阴影渲染,输出阴影图层。
具体地,根据光源方向、网格数据可以分别计算每个网格是否被遮挡。若被遮挡,将该网格对应的像素点的灰度值设为最低,若未被遮挡,将该网格对应的像素点的灰度值设为最高,最终输出阴影渲染后每个像素点的灰度值,称为阴影图层。其中,最高灰度值可以由图片的灰度级决定。本申请实施例中,图片的灰度级为2,最高灰度值为1,最低灰度值为0。本申请实施例中根据识别的拍照的场景中真实的光照方向计算每个网格的遮挡关系,并根据遮挡关系设置该网格对应的像素点的灰度值,可以增加真实感较强的阴影效果。
上述S204、S205中输出的各个像素点的灰度级可以是256,输出的各个像素点的灰度值范围为[0,1],即将范围为[0,255]的灰度值归一化为[0,1]的灰度值,以使S204、S205中输出的各个像素点的灰度值范围与S206中输出的各个像素点的灰度值范围一致,便于在S207中将三个图层(漫反射图层、高光图层和阴影图层)进行叠加融合。上述S204、S205、S206实现的先后顺序不作限定。
阶段四(S207):图层融合。
S207:电子设备将漫反射图层、高光图层、阴影图层进行叠加融合,根据融合结果与RGB数据输出人脸光效渲染结果。
具体地,将S204中输出的漫反射图层、S205中输出的高光图层、S206中输出的阴影图层进行叠加融合,即将每个图层中相同位置的像素点的灰度值加权求和,得到叠加融合后该每个像素点的灰度值。各个图层的像素点的灰度值占的权重即为图层融合参数,该图层融合参数属于第一光效模板对应的光效参数集合,由S104中选定的光效模板决定。将叠加融合后各个像素点的灰度值与该像素点的像素值相乘,即可获得人脸光效渲染后各个像素点的像素值,即为人脸光效渲染结果。本申请实施例中各个像素点的像素值的范围可以是[0,255]。
本申请实施例在人脸光效渲染过程中,将虚拟光源置于根据光照方向确定的光源位置,使后期施加的光效与图片原始的光照不冲突,且根据智能识别的光照方向计算每个网格的遮挡关系,并根据遮挡关系设置该网格对应的像素点的灰度值,渲染因遮挡造成的阴影,尤其可以渲染眼窝和鼻子部分光照投射的阴影,极大增强面部的立体感。
接下来结合图15示介绍本申请实施例涉及的整体光效渲染,具体过程可以包括以下几个步骤:
阶段一(S301):高斯模糊。
S301:电子设备对RGB图片的背景部分做高斯模糊。
具体地,RGB图片为根据前置摄像头193获取的RGB数据得到的图片。将背景部分中每个像素点的像素值与周围相邻像素点的像素值加权平均,计算出高斯模糊后该像素点的像素值。
阶段二(S302-S306):分别计算人像的投影纹理图层和背景的投影纹理图层。
S302:电子设备根据纹理图案投影方向和人像网格,计算每个网格顶点的纹理坐标。
具体地,纹理图案投影的位置坐标已知,投影的方向已知,可计算投影矩阵,投影矩阵是人像网格所在的空间坐标系与纹理图案投影的位置所在的空间坐标系之间的联系矩阵。人像网格所在的空间坐标系可以以人像网格的中心为坐标系原点,水平向右为x轴正方向,水平向前为y轴正方向,竖直向上为z轴正方向。纹理图案投影的位置所在的空间坐标系以该位置为坐标系原点,x轴、y轴、z轴分别与人像网格的x轴、y轴、z轴平行。确定投影矩阵后,可以根据投影纹理在x轴和y轴上的拉伸倍数、投影纹理的像素值,确定投影在人像网格上的投影图案。网格顶点在投影图案中的坐标位置即为纹理坐标。其中,纹理图案投影方向、纹理图案投影的位置坐标、投影纹理在x轴和y轴上的拉伸倍数、投影纹理的像素值属于光效参数集合,由S104中选定的光效模板决定。
S303:电子设备根据每个网格顶点的纹理坐标提取对应的纹理图案的像素值,输出人像的投影纹理图层。
具体地,网格顶点在投影图案中的坐标位置已知,投影在人像网格上的投影图案已知,可以提取每个网格顶点对应的纹理图案的像素值,从而获得人像网格中所有网格对应的纹理图案的像素值,称为人像的投影纹理图层。
S304:电子设备在人像背景部分设定一个垂直于人像所在地面的投影平面。
具体地,为了获得更加真实的立体效果,在背景部分设置虚拟的投影平面。该虚拟的投影平面与人像所在的地面垂直。
S305:电子设备根据纹理图案投影方向和投影平面,计算投影平面中像素点的纹理坐标。
具体地,投影平面的投影图案的确定与人像网格上投影图案的确定类似,在此不赘述。投影平面中像素点的纹理坐标即为该像素点在投影图案中的坐标位置。
S306:电子设备根据投影平面的纹理坐标提取对应的纹理图案的像素值,输出背景的投影纹理图层。
具体地,像素点在投影图案中的坐标位置已知,投影在投影平面上的投影图案已知,可以提取每个像素点对于的纹理图案的像素值,从而获得投影平面上所有像素点对应的纹理图案的像素值,称为背景的投影纹理图层。
阶段三(S307-S308):叠加融合。
S307:电子设备将人像的投影纹理图层、人脸光效渲染结果和RGB图片进行叠加融合。
具体地,在人像部分,将S303中人像的投影纹理图层、S207中人脸光效渲染结果和 前置摄像头193获取的RGB图片中人像部分,相同位置的像素点的像素值加权求和,可获得叠加融合后的人像部分的各个像素点的像素值。其中,人像的投影纹理图层、人脸光效渲染结果和前置摄像头193获取的RGB图片中的像素点的像素值各自占的权重属于第一光效模板对应的光效参数集合,由S104中选定的光效模板决定。
S308:电子设备将背景的投影纹理图层和高斯模糊的背景进行叠加融合。
具体地,在背景部分,将S306中的背景的投影纹理图层和S301中高斯模糊后的背景部分,相同位置的像素点的像素值加权求和,可获得叠加融合后的背景部分各个像素点的像素值。其中,背景的投影纹理图层和高斯模糊后的背景中像素点的像素值各自占的权重属于第一光效模板对应的光效参数集合,由S104中选定的光效模板决定。本申请实施例中在背景部分将背景的投影纹理图层和高斯模糊后的背景进行叠加融合,可以使光效渲染后的图片有光效背景外,还保留原背景的痕迹,增加渲染后的图片的真实感。
阶段四(S309):图片后处理。
S309:电子设备对叠加融合后的图片进行后处理。
具体地,叠加融合后的图片包括S307中人像部分的融合结果和S308中背景部分的融合结果,组成了整张图片。后处理可以包括对整张图片的色调、对比度和滤镜等处理。色调处理主要是通过调节H值来调整整张图片的总体色彩倾向。对比度处理主要是调整整张图片中最亮部分与最暗部分亮度的比值。滤镜处理是通过一个矩阵与整张图片中的每个像素点的像素值进行运算,得到滤镜处理后的各个像素点的像素值,以调整整张图片的整体效果。上述色调处理中的H值、对比度处理中最亮部分与最暗部分亮度的比值以及滤镜处理中的矩阵均属于第一光效模板对应的光效参数集合,由S104中选定的光效模板决定。
本申请实施例中,可以将人像部分和背景部分分开渲染,使用3D感测模块196采集的真实的深度数据,使光效在人像上错落起伏,增加图片的真实感和立体感。
若光效渲染的过程只包括人脸光效渲染,则S207中输出的人脸光效渲染的结果即为第一图片各个像素点的像素值。也即是说,人脸光效渲染后即可获得第一图片。
若光效渲染的过程可包括人脸光效渲染和整体光效渲染,则对拍摄的图片进行人脸光效渲染后,继续进行整体光效渲染,计算第一图片各个像素点的像素值。也即是说,整体光效渲染后即可获得第一图片。
接下来详细说明电子设备10中的各个部件在上述图11实施例包括的拍照过程(S101-S105)和光效编辑过程(S107-S108)的各个步骤中的协作关系。下面以RGB数据采集器件为前置摄像头、深度数据采集器件为3D感测模块196为例,进行说明。
在介绍电子设备10中的各个部件在S101中的协作关系之前,先介绍电子设备10如何显示用户界面30,以及如何开启前置摄像头采集RGB数据。如图16所示:
1、显示屏194显示用户界面20。用户界面20中显示有多个应用的应用图标,其中包括相机应用图标201。
2、触摸传感器180K检测到用户点击相机应用图标201。
3、触摸传感180K将用户点击相机应用图标201的事件上报至处理器110。
4、处理器110确定用户点击相机应用图标201的事件,向显示屏194发出显示用户界面30的指令。
5、显示屏194响应于处理器110发出的指令,显示用户界面30。
6、处理器110确定用户点击相机应用图标201的事件,向摄像头193发出开启摄像头193的指令。
7、摄像头193响应于处理器110发出的指令,开启后置摄像头,实时采集待拍摄的图片的RGB数据。
8、将实时采集的待拍摄的图片的RGB数据保存至内部存储器121。
9、触摸传感器180K检测到用户点击控件306。
10、触摸传感器180K将用户点击控件306的事件上报至处理器110。
11、处理器110确定用户点击控件306的事件,向摄像头193发出开启前置摄像头的指令。
12、摄像头193响应于处理器110发出的指令,开启前置摄像头。前置摄像头可实时采集待拍摄的图片的RGB数据,并将待拍摄的图片的RGB数据保存至内部存储器121中。
具体地,实时采集的待拍摄图片的RGB数据可携带时间戳,以便处理器110在后续处理中将RGB数据与深度数据进行时间对齐处理。
接下来介绍电子设备10中的各个部件在S101中的协作关系,如图17所示:
13、触摸传感器180K检测到用户点击图标303A。
14、触摸传感器180K将用户点击图标303A的事件上报至处理器110。
15、处理器110确定用户点击图标303A的事件,开启第一拍摄模式。
具体地,处理器110开启第一拍摄模式即为处理器110调节光圈大小、快门速度、感光度等拍摄参数。
具体地,处理器110开启第一拍摄模式后,还可以判断取景框301内是否存在人脸,若存在,进一步判断人脸是否符合要求,如S101中的描述,此处不赘述。接下来将详细介绍人脸角度的检测方法。
具体地,上述人脸角度可以是三维空间上的人脸的角度,第一阈值包括三个数据,分别是在标准三维坐标系中绕x轴旋转的俯仰角(pitch),绕y轴旋转的偏航角(yaw),绕z轴旋转的横滚角(roll),该标准三维坐标系可以是以正对电子设备的人脸的鼻尖位置为原点,水平向右的方向为x轴的正方向,水平向前的方向为y轴的正方向,垂直向上的方向为z轴的正方向。示例性地,符合要求的人脸的角度可以是俯仰角小于或者等于30°,偏航角小于或者等于30°,横滚角小于或者等于35°,则第一阈值为30°、30°、35°。人脸角度的检测可以通过建立人脸的三维模型来实现。具体可以通过深度数据建立待检测的人脸的三维模型,再旋转内部存储器121中保存的标准三维模型直至该标准三维模型与待检测的人脸的三维模型匹配,则该标准三维模型旋转的角度为待检测的人脸的角度。以上人脸角度检测的方法仅为示例性说明,在具体实现中还可以有其他的检测方法,本申请实施例对此不作限定。上述第二阈值例如可以是4%、10%等。上述第一阈值、第二阈值不限于上述列举的值,在具体实现中还可以是其他的值,本申请实施例对此不作限定。
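示例性地，按上述第一阈值判断人脸角度是否符合要求的逻辑，可以用如下示意性代码表示（阈值(30°, 30°, 35°)仅为示例，实际取值本申请实施例不作限定）。

```python
def face_angle_ok(pitch, yaw, roll, thresholds=(30.0, 30.0, 35.0)):
    """判断检测到的人脸角度是否符合第一阈值要求的示意。

    pitch/yaw/roll: 人脸绕x、y、z轴旋转的俯仰角、偏航角、横滚角（单位：度）
    thresholds: 第一阈值，示例取(30°, 30°, 35°)
    """
    t_pitch, t_yaw, t_roll = thresholds
    return (abs(pitch) <= t_pitch and
            abs(yaw) <= t_yaw and
            abs(roll) <= t_roll)

# 示例：俯仰角10°、偏航角25°、横滚角40°的人脸不符合要求
# face_angle_ok(10, 25, 40) -> False
```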
16、处理器110向显示屏194发出更新图标303A的显示状态的指令。
17、显示屏194响应于处理器110发出的指令，更新图标303A的显示状态。
18、处理器110向3D感测模块196发送采集深度数据的指令。
19、3D感测模块196响应于处理器110发送的指令,实时采集深度数据。
20、3D感测模块196保存深度数据至内部存储器121。
具体地,实时采集的深度数据可携带时间戳,以便处理器110在后续处理中将RGB数据与深度数据进行时间对齐处理。
接下来介绍电子设备10中的各个部件在S102中的协作关系,如图18所示:
21、触摸传感器180K检测到用户点击控件306。
22、触摸传感器180K将用户点击控件306的事件上报至处理器110。
23、处理器110确定用户点击控件306的事件,向显示屏194发送显示光效模板选项栏307的指令。
24、显示屏194响应于处理器110发送的指令,显示光效模板选项栏307。
25、处理器110从内部存储器121中读取待拍摄图片的RGB数据及深度数据。
26、识别拍摄场景及光照方向。
具体地,拍摄场景的识别方法可见S102中的描述。此处将详细介绍光照方向的识别方法。
具体地,处理器110可以将人脸部分的RGB数据输入至第二模型,输出人脸光照方向的结果。第二模型由大量的已知光照方向的人脸部分的RGB数据训练得到。人脸光照的方向包括三维空间中的三个数据,与xoy平面的夹角α、与xoz平面的夹角β、与yoz平面的夹角γ,其中,原点o为人脸鼻尖所在的位置,水平向右为x轴的正方向,水平向前为y轴的正方向,竖直向上为z轴的正方向,则第二模型的输出结果为(α、β、γ)。
本申请实施例中通过智能识别人脸的光照方向，可以在后续光效渲染的过程中将虚拟光源置于根据光照方向确定的光源位置，使后期施加的光效与图片原始的光照不冲突；还可以在后续光效编辑过程中，在界面40中显示虚拟光源的位置，用户可以通过调节虚拟光源的位置改变图片效果，提升用户与电子设备10的互动性；还可以在拍摄界面30中实时显示虚拟光源的位置，提升拍照过程中的趣味性。
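示例性地，利用第二模型识别人脸光照方向的过程可以用如下示意性代码表示。其中model及其predict接口仅为假设的示意，并非特定框架的真实API；模型的训练过程（由大量已知光照方向的人脸RGB数据训练得到）不在此示出。

```python
import numpy as np

def estimate_light_direction(face_rgb, model):
    """用预先训练的第二模型估计人脸光照方向(α, β, γ)的示意。

    face_rgb: 人脸部分的RGB数据（HxWx3）
    model: 已训练好的第二模型（model.predict的名称与签名均为假设的接口）
    """
    x = face_rgb.astype(np.float32) / 255.0
    x = x[np.newaxis, ...]                     # 增加batch维度
    alpha, beta, gamma = model.predict(x)[0]   # 分别为与xoy、xoz、yoz平面的夹角
    return alpha, beta, gamma
```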
27、处理器110将光照方向识别结果保存至内部存储器121中,以便在后续处理中直接调用该结果。
接下来介绍电子设备10中的各个部件在S103-S104中的协作关系,如图19所示:
28、处理器110从内部存储器121中读取拍摄场景与匹配的光效模板的映射关系表。
29、处理器110确定与拍摄场景匹配的光效模板(假设为光效模板4)。
30、处理器110向显示屏194发送更新显示光效模板选项栏307的指令。
具体地,更新显示的光效模板选项栏307中,与拍摄场景匹配的光效模板选项的显示状态为第一显示状态。
31、显示屏194响应于处理器110发出的指令,更新显示光效模板选项栏307。
32、触摸传感器180K检测到用户点击第一光效模板选项。
33、触摸传感器180K将用户点击第一光效模板选项的事件上报至处理器110。
34、处理器110确定用户点击第一光效模板选项的事件，向显示屏194发送更新第一光效模板选项的显示状态的指令。
35、显示屏194响应于处理器110的指令,更新第一光效模板选项的显示状态。
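示例性地，上述28、29中"读取映射关系表、确定与拍摄场景匹配的光效模板"的过程，可以用如下示意性代码表示。其中的场景名称与模板编号均为假设的示例，实际的映射关系由内部存储器121中保存的映射关系表决定。

```python
# 拍摄场景与匹配的光效模板的映射关系表（示例内容，仅用于说明）
SCENE_TO_TEMPLATE = {
    "室内人像": "光效模板4",
    "夜景人像": "光效模板2",
    "逆光人像": "光效模板3",
}

def match_template(scene, default="光效模板1"):
    """根据识别出的拍摄场景查表，确定需要以第一显示状态突出显示的光效模板选项。"""
    return SCENE_TO_TEMPLATE.get(scene, default)
```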
接下来介绍电子设备10中的各个部件在S105中的协作关系,如图20所示:
36、触摸传感器180K检测到用户点击拍摄控件302。
37、触摸传感器180K将用户点击拍摄控件302的事件上报至处理器110。
38、处理器110确定用户点击拍摄控件302的事件,读取内部存储器121中保存的待拍摄图片的RGB数据。
39、处理器110确定用户点击拍摄控件302的事件,读取内部存储器121中保存的深度数据。
具体地,深度数据的时间戳与38中读取的待拍摄图片的RGB数据的时间戳一致,从而保证RGB数据与深度数据的时间对齐。
40、处理器110将RGB数据与深度数据坐标对齐,获得时间和坐标均对齐的RGBD数据。
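示例性地，上述按时间戳对齐RGB数据与深度数据、得到RGBD数据的过程，可以用如下示意性代码表示。其中坐标对齐以"把深度图按最近邻方式缩放到RGB分辨率"作简化示意，实际的坐标对齐通常还需结合摄像头的标定参数，此处不展开。

```python
import numpy as np

def align_rgbd(rgb_frames, depth_frames):
    """按时间戳对齐RGB数据与深度数据的示意。

    rgb_frames: [(timestamp, HxWx3 RGB数组), ...]
    depth_frames: [(timestamp, H'xW' 深度数组), ...]
    返回: (rgb, depth) 时间戳最接近的一对数据，深度图已缩放到RGB分辨率。
    """
    ts_rgb, rgb = rgb_frames[-1]                           # 取最新一帧RGB
    # 时间对齐：在深度数据中找时间戳最接近的一帧
    ts_d, depth = min(depth_frames, key=lambda f: abs(f[0] - ts_rgb))
    # 坐标对齐的简化示意：最近邻缩放到与RGB相同的分辨率
    h, w = rgb.shape[:2]
    rows = np.linspace(0, depth.shape[0] - 1, h).astype(int)
    cols = np.linspace(0, depth.shape[1] - 1, w).astype(int)
    depth_aligned = depth[np.ix_(rows, cols)]
    return rgb, depth_aligned
```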
请参考图21-图24，其详细介绍了电子设备10中的各个部件在光效编辑中的协作关系。
在介绍电子设备10中的各个部件在S106中的协作关系之前,先介绍电子设备10如何显示用户界面40。如图21所示:
1、显示屏194在控件305内显示第一图片。
2、触摸传感器180K检测到用户点击控件305。
3、触摸传感器180K将用户点击控件305的事件上报至处理器110。
4、处理器110确定用户点击控件305的事件,向显示屏194发送显示用户界面40的指令。
5、显示屏194响应于处理器110发送的指令,显示用户界面40。
6、触摸传感器180K检测到用户点击控件402。
7、触摸传感器180K将用户点击控件402的事件上报至处理器110。
8、处理器110确定用户点击控件402的事件,向显示屏194发送更新显示用户界面40的指令。
9、显示屏194响应于处理器110发送的指令,更新显示用户界面40。其中,更新显示的用户界面40可包括:光源指示符403、光强调节符404、光效模板选项栏405、取消控件406及保存控件407等。
接下来介绍电子设备10中的各个部件在S106中的协作关系,如图22所示:
10、触摸传感器180K检测到用户滑动光源指示符403。
11、触摸传感器180K将用户滑动光源指示符403的事件上报至处理器110。
12、处理器110确定用户滑动光源指示符403的事件,向显示屏194发送更新显示光源指示符403的指令。
13、显示屏194响应于处理器110发送的指令,更新显示光源指示符403。
14、处理器110确定新的光照方向,根据新的光照方向确定图片内容显示区域401内的图片。
15、处理器110向显示屏194发送更新显示图片内容显示区域401内的图片的指令。
16、显示屏194响应于处理器110发送的指令,更新显示图片内容显示区域401内的图片。
示例性地，用户对光源指示符403输入滑动操作，将光源指示符403从(x1,y1)移动至(x2,y2)，如图23所示。触摸传感器180K检测到用户对光源指示符403的滑动操作，上报事件（用户对光源指示符403的滑动操作）至处理器110，处理器110确认该事件后，根据新的光照方向计算图片内容显示区域401内图片的RGB数据（即为该图片包含的每个像素点的像素值），并使显示屏194在(x2,y2)处显示光源指示符403，更新显示图片内容显示区域401内的图片。可以知道的是，用户对光源指示符403的滑动操作是一个连续的动作，在滑动的过程中，电子设备10可以实时更新显示光源指示符403以及图片内容显示区域401内的图片。
上述根据新的光照方向计算图片内容显示区域401内图片的RGB数据,可以是在人脸光效渲染部分重新计算人脸各个网格的遮挡关系,根据遮挡关系重新设置各个网格的灰度值,输出阴影图层。进而将漫反射图层、高光图层、阴影图层进行叠加融合,根据融合结果与RGB数据输出人脸光效渲染结果。
上述12-13与14-16实现的先后顺序不做限定。
17、触摸传感器180K检测到用户滑动光强调节符404。
18、触摸传感器180K将用户滑动光强调节符404的事件上报至处理器110。
19、处理器110确定用户滑动光强调节符404的事件,向显示屏194发送更新显示光强调节符404的指令。
20、显示屏194响应于处理器110发送的指令，更新显示光强调节符404。
21、处理器110确定新的光照强度,根据新的光照强度确定图片内容显示区域401内的图片。
22、处理器110向显示屏194发送更新显示图片内容显示区域401内的图片的指令。
23、显示屏194响应于处理器110发送的指令,更新显示图片内容显示区域401内的图片。
具体地，用户对光强调节符404的滑动操作是一个连续的动作，在滑动的过程中，电子设备10可以实时更新显示光强调节符404以及图片内容显示区域401内的图片。
上述19-20与21-23实现的先后顺序不做限定。
24、触摸传感器180K检测到用户点击第二光效模板选项。
25、触摸传感器180K将用户点击第二光效模板选项的事件上报至处理器110。
26、处理器110确定用户点击第二光效模板选项的事件,向显示屏194发送更新显示第一光效模板选项和第二光效模板选项的指令。
27、显示屏194响应于处理器110发送的指令,更新显示第一光效模板选项和第二光效模板选项。
28、处理器110根据第二光效模板对应的光效参数确定图片内容显示区域401内的图片。
具体地,确定图片内容显示区域401内的图片即为根据第二光效模板对应的光效参数计算图片内容显示区域401内图片的RGB数据。
29、处理器110向显示屏194发送更新显示图片内容显示区域401内的图片的指令。
30、显示屏194响应于处理器110发送的指令,更新显示图片内容显示区域401内的图片。
上述26-27与28-30实现的先后顺序不作限定。上述10-16、17-23、24-30实现的先后顺序不作限定。S106可包含10-16、17-23、24-30中的部分或全部,本申请实施例对此不作限定。
接下来介绍电子设备10中的各个部件在S107中的协作关系,如图24所示:
31、触摸传感器180K检测到用户点击保存控件407。
32、触摸传感器180K将用户点击保存控件407的事件上报至处理器110。
33、处理器110确定用户点击保存控件407的事件,将第二图片保存至内部存储器121中,并将第一图片从内部存储器中删除。
具体地,16、23、30中更新显示的图片即为第二图片。上述将第二图片保存至内部存储器121中即为将第二图片的RGB数据保存至内部存储器121。上述将第一图片从内部存储器121中删除即为将第一图片的RGB数据从内部存储器121中删除。
本申请实施例提供的图像处理方法可以在拍照过程中根据识别的拍摄场景为用户推荐合适的光效模板,可以使用户快速选择合适的光效模板,减少用户操作,提高手机的使用效率。
在另外一种实施例中,根据拍摄场景为用户推荐合适的光效模板可以在光效编辑过程中实现。
具体地,电子设备10可以在拍照过程中开启第一拍摄模式,接收用户选择第一光效模板的操作后,再接收用户的拍照指令,完成拍照过程,确定待拍摄图片。
电子设备10接收用户的拍照指令后,根据第一光效模板对应的光效参数对待拍摄图片进行光效渲染,生成第一图片。在光效渲染之前,电子设备10可以识别人脸的光照方向,并在光效渲染过程中结合人脸的光照方向进行人脸光效渲染,具体的人脸光效渲染过程可参考图12实施例的描述。
电子设备10接收用户的光效编辑指令后（如对控件402的点击操作），可在光效模板选项栏中将与该拍摄场景匹配的光效模板选项的显示状态设置为第一显示状态，提示用户该选项对应的模板是适合于当前拍摄场景的模板，便于用户快速识别并选择该选项，可有效向用户推荐该选项。其中，电子设备10可以在光效编辑过程之前识别拍摄场景，并确定与该拍摄场景匹配的光效模板。拍摄场景的识别过程与图11实施例中S102描述的方法类似，确定与该拍摄场景匹配的光效模板的方法与图11实施例中S103描述的方法类似，在此不赘述。其中，第一显示状态与图6实施例中的第一显示状态一致，在此不赘述。
本申请实施例还提供了一种计算机可读存储介质。上述方法实施例中的全部或者部分流程可以由计算机程序来指令相关的硬件完成,该程序可存储于上述计算机存储介质中,该程序在执行时,可包括如上述各方法实施例的流程。该计算机可读存储介质包括:只读存储器(read-only memory,ROM)或随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可存储程序代码的介质。
在上述实施例中，可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时，可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时，全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中，或者通过所述计算机可读存储介质进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质（例如，软盘、硬盘、磁带）、光介质（例如，DVD）、或者半导体介质（例如，固态硬盘（solid state disk，SSD））等。
本申请实施例方法中的步骤可以根据实际需要进行顺序调整、合并和删减。
本申请实施例装置中的模块可以根据实际需要进行合并、划分和删减。
以上实施例仅用以说明本申请的技术方案，而非对其限制；尽管参照前述实施例对本申请进行了详细的说明，本领域的普通技术人员应当理解：其依然可以对前述各实施例所记载的技术方案进行修改，或者对其中部分技术特征进行等同替换；而这些修改或者替换，并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (23)

  1. 一种拍照方法,应用于电子设备,其特征在于,包括:
    所述电子设备开启摄像头,采集拍摄对象的图像;
    所述电子设备显示第一用户界面;其中,所述第一用户界面包括:第一显示区域、拍摄模式列表、光效模板选项栏;所述拍摄模式列表包括一个或多个拍摄模式的选项,所述一个或多个拍摄模式包括第一拍摄模式,所述第一拍摄模式已被选定,所述第一拍摄模式为突出显示拍摄的图片中包含的人物的拍摄模式,所述光效模板选项栏中包括两个或两个以上光效模板的选项;所述光效模板包括一个或多个光效参数,用于处理采用所述第一拍摄模式拍摄的图片;
    所述电子设备在所述第一显示区域中显示所述摄像头采集的图像;
    所述电子设备在所述光效模板选项栏中突出显示与拍摄场景匹配的光效模板的选项;其中,所述拍摄场景为所述第一显示区域中显示的图像对应的拍摄场景。
  2. 如权利要求1所述的拍照方法,其特征在于,所述第一用户界面还包括拍摄控件和第一控件;
    所述电子设备在所述光效模板选项栏中突出显示与拍摄场景匹配的光效模板的选项之后,所述方法还包括:
    所述电子设备在检测到作用于所述拍摄控件的用户操作后,采用已选定的光效模板对应的光效参数对拍摄的图片进行处理,生成第一图片;
    所述电子设备在所述第一控件中显示所述第一图片的缩略图;其中,所述第一图片的缩略图包含的像素点少于所述第一图片包含的像素点。
  3. 如权利要求2所述的拍照方法,其特征在于,所述已选定的光效模板为所述与拍摄场景匹配的光效模板。
  4. 如权利要求2或3所述的拍照方法,其特征在于,所述采用已选定的光效模板对应的光效参数对拍摄的图片进行处理,生成第一图片,包括:所述电子设备采用已选定的光效模板对应的光效参数、光照方向以及深度数据对拍摄的图片进行处理,生成第一图片;其中,所述光照方向为根据所述第一显示区域中显示的图片识别的光照方向,所述深度数据为所述拍摄对象的深度数据。
  5. 如权利要求4所述的拍照方法，其特征在于，所述采用已选定的光效模板对应的光效参数、光照方向以及深度数据对拍摄的图片进行处理之后，所述生成第一图片之前，所述方法还包括：根据所述已选定的光效模板对应的光效参数以及所述深度数据分别对人像部分和背景部分进行处理；其中，所述人像部分和所述背景部分为根据所述拍摄的图片分割得到。
  6. 如权利要求1-5任一项所述的拍照方法,其特征在于,所述在所述光效模板选项栏中突出显示与拍摄场景匹配的光效模板的选项包括以下一项或多项:在所述光效模板选项栏中的第一显示位置显示所述与拍摄场景匹配的光效模板的选项;在所述光效模板选项栏中高亮显示所述与拍摄场景匹配的光效模板的选项;在所述光效模板选项栏中动态显示所述与拍摄场景匹配的光效模板的选项。
  7. 如权利要求2所述的拍照方法,其特征在于,所述电子设备在所述第一控件中显示所述第一图片的缩略图之后,所述方法还包括:
    所述电子设备检测到作用于所述第一控件的第一用户操作,响应于所述第一用户操作,所述电子设备显示用于查看所述第一图片的第二用户界面。
  8. 如权利要求7所述的拍照方法,其特征在于,所述第二用户界面包括:第二显示区域和第二控件;其中:所述第二显示区域用于显示所述第一图片;
    所述方法还包括:所述电子设备检测到作用于所述第二控件的第二用户操作,响应于所述第二用户操作,所述电子设备显示用于编辑所述第一图片的第二用户界面。
  9. 如权利要求8所述的拍照方法,其特征在于,所述第二用户界面还包括:光源指示符;其中,所述光源指示符用于指示所述拍摄场景中光源的光照方向;
    所述方法还包括:所述电子设备检测到作用于所述光源指示符的第三用户操作,响应于所述第三用户操作,更新所述光照方向,并重新执行所述电子设备采用所述已选定的光效模板对应的光效参数、光照方向以及深度数据对拍摄的图片进行处理的步骤。
  10. 如权利要求8或9所述的拍照方法,其特征在于,所述第二用户界面还包括:光强指示符;其中,所述光强指示符用于指示所述光源的光照强度;
    所述方法还包括:所述电子设备检测到作用于所述光强指示符的第四用户操作,响应于所述第四用户操作,更新所述光源强度,并采用所述已选定的光效模板对应的光效参数、光照方向、光源强度以及深度数据对拍摄的图片进行处理。
  11. 如权利要求8-10任一项所述的拍照方法,其特征在于,所述第二用户界面还包括所述光效模板选项栏;
    所述方法还包括:所述电子设备检测到作用于所述光效模板选项栏的第五用户操作,响应于所述第五用户操作,更新所述已选定的光效模板,并重新执行所述电子设备采用所述已选定的光效模板对应的光效参数、光照方向以及深度数据对拍摄的图片进行处理的步骤。
  12. 一种电子设备,其特征在于,包括:一个或多个处理器、存储器、一个或多个摄像头、触摸屏;
    所述存储器、所述一个或多个摄像头以及所述触摸屏与所述一个或多个处理器耦合,所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,所述一个或多个处理器调用所述计算机指令以执行:
    开启所述摄像头采集拍摄对象的图像;
    显示第一用户界面;其中,所述第一用户界面包括:第一显示区域、拍摄模式列表、光效模板选项栏;所述拍摄模式列表包括一个或多个拍摄模式的选项,所述一个或多个拍摄模式包括第一拍摄模式,所述第一拍摄模式已被选定,所述第一拍摄模式为突出显示拍摄的图片中包含的人物的拍摄模式,所述光效模板选项栏中包括两个或两个以上光效模板的选项;所述光效模板包括一个或多个光效参数,用于处理采用所述第一拍摄模式拍摄的图片;
    在所述第一显示区域中显示所述摄像头采集的图像;
    在所述光效模板选项栏中突出显示与拍摄场景匹配的光效模板的选项；其中，所述拍摄场景为所述第一显示区域中显示的图像对应的拍摄场景。
  13. 如权利要求12所述的电子设备,其特征在于,所述第一用户界面还包括拍摄控件和第一控件;
    所述处理器在所述光效模板选项栏中突出显示与拍摄场景匹配的光效模板的选项之后,所述处理器还执行:
    在检测到作用于所述拍摄控件的用户操作后,采用已选定的光效模板对应的光效参数对拍摄的图片进行处理,生成第一图片;
    在所述第一控件中显示所述第一图片的缩略图;其中,所述第一图片的缩略图包含的像素点少于所述第一图片包含的像素点。
  14. 如权利要求13所述的电子设备,其特征在于,所述已选定的光效模板为所述与拍摄场景匹配的光效模板。
  15. 如权利要求13或14所述的电子设备,其特征在于,所述处理器采用已选定的光效模板对应的光效参数对拍摄的图片进行处理,生成第一图片时具体执行:所述处理器采用已选定的光效模板对应的光效参数、光照方向以及深度数据对拍摄的图片进行处理,生成第一图片;其中,所述光照方向为根据所述第一显示区域中显示的图片识别的光照方向,所述深度数据为所述拍摄对象的深度数据。
  16. 如权利要求15所述的电子设备,其特征在于,所述处理器采用已选定的光效模板对应的光效参数、光照方向以及深度数据对拍摄的图片进行处理之后,所述处理器生成第一图片之前,所述处理器还执行:根据所述已选定的光效模板对应的光效参数以及所述深度数据分别对人像部分和背景部分进行处理;其中,所述人像部分和所述背景部分为根据所述拍摄的图片分割得到。
  17. 如权利要求12-16任一项所述的电子设备,其特征在于,所述在所述光效模板选项栏中突出显示与拍摄场景匹配的光效模板的选项包括以下一项或多项:在所述光效模板选项栏中的第一显示位置显示所述与拍摄场景匹配的光效模板的选项;在所述光效模板选项栏中高亮显示所述与拍摄场景匹配的光效模板的选项;在所述光效模板选项栏中动态显示所述与拍摄场景匹配的光效模板的选项。
  18. 如权利要求13所述的电子设备,其特征在于,所述在所述第一控件中显示所述第一图片的缩略图之后,所述处理器还执行:检测到作用于所述第一控件的第一用户操作,响应于所述第一用户操作,所述电子设备显示用于查看所述第一图片的第二用户界面。
  19. 如权利要求18所述的电子设备,其特征在于,所述第二用户界面包括:第二显示区域和第二控件;其中:所述第二显示区域用于显示所述第一图片;
    所述处理器还执行:检测到作用于所述第二控件的第二用户操作,响应于所述第二用户操作,所述电子设备显示用于编辑所述第一图片的第二用户界面。
  20. 如权利要求19所述的电子设备,其特征在于,所述第二用户界面还包括:光源指示符;其中,所述光源指示符用于指示所述拍摄场景中光源的光照方向;
    所述处理器还执行:检测到作用于所述光源指示符的第三用户操作,响应于所述第三用户操作,更新所述光照方向,并重新执行所述采用所述已选定的光效模板对应的光效参数、光照方向以及深度数据对拍摄的图片进行处理。
  21. 如权利要求19或20所述的电子设备,其特征在于,所述第二用户界面还包括:光强指示符;其中,所述光强指示符用于指示所述光源的光照强度;
    所述处理器还执行:检测到作用于所述光强指示符的第四用户操作,响应于所述第四用户操作,更新所述光源强度,并采用所述已选定的光效模板对应的光效参数、光照方向、光源强度以及深度数据对拍摄的图片进行处理。
  22. 如权利要求19-21任一项所述的电子设备,其特征在于,所述第二用户界面还包括所述光效模板选项栏;
    所述处理器还执行:检测到作用于所述光效模板选项栏的第五用户操作,响应于所述第五用户操作,更新所述已选定的光效模板,并重新执行所述电子设备采用所述已选定的光效模板对应的光效参数、光照方向以及深度数据对拍摄的图片进行处理。
  23. 一种计算机存储介质,其特征在于,包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如权利要求1-11中任一项所述的拍照方法。
PCT/CN2018/116443 2018-11-20 2018-11-20 图像处理方法及电子设备 WO2020102978A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/116443 WO2020102978A1 (zh) 2018-11-20 2018-11-20 图像处理方法及电子设备
CN201880094372.1A CN112262563B (zh) 2018-11-20 2018-11-20 图像处理方法及电子设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/116443 WO2020102978A1 (zh) 2018-11-20 2018-11-20 图像处理方法及电子设备

Publications (1)

Publication Number Publication Date
WO2020102978A1 true WO2020102978A1 (zh) 2020-05-28

Family

ID=70773103

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/116443 WO2020102978A1 (zh) 2018-11-20 2018-11-20 图像处理方法及电子设备

Country Status (2)

Country Link
CN (1) CN112262563B (zh)
WO (1) WO2020102978A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113645408B (zh) * 2021-08-12 2023-04-14 荣耀终端有限公司 拍摄方法、设备及存储介质
CN114422736B (zh) * 2022-03-28 2022-08-16 荣耀终端有限公司 一种视频处理方法、电子设备及计算机存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1936685A (zh) * 2005-09-21 2007-03-28 索尼株式会社 摄影设备、处理信息的方法和程序
CN101945217A (zh) * 2009-07-07 2011-01-12 三星电子株式会社 拍摄设备和方法
US20120057051A1 (en) * 2010-09-03 2012-03-08 Olympus Imaging Corp. Imaging apparatus, imaging method and computer-readable recording medium
CN103533244A (zh) * 2013-10-21 2014-01-22 深圳市中兴移动通信有限公司 拍摄装置及其自动视效处理拍摄方法
CN104243822A (zh) * 2014-09-12 2014-12-24 广州三星通信技术研究有限公司 拍摄图像的方法及装置
CN104660908A (zh) * 2015-03-09 2015-05-27 深圳市中兴移动通信有限公司 拍摄装置及其拍摄模式的自动匹配方法
CN106027902A (zh) * 2016-06-24 2016-10-12 依偎科技(南昌)有限公司 一种拍照方法及移动终端
CN108734754A (zh) * 2018-05-28 2018-11-02 北京小米移动软件有限公司 图像处理方法及装置

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104580920B (zh) * 2013-10-21 2018-03-13 华为技术有限公司 一种成像处理的方法及用户终端
CN105578056A (zh) * 2016-01-27 2016-05-11 努比亚技术有限公司 拍摄的终端及方法
JP6702752B2 (ja) * 2016-02-16 2020-06-03 キヤノン株式会社 画像処理装置、撮像装置、制御方法及びプログラム
CN108540716A (zh) * 2018-03-29 2018-09-14 广东欧珀移动通信有限公司 图像处理方法、装置、电子设备及计算机可读存储介质

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112866773A (zh) * 2020-08-21 2021-05-28 海信视像科技股份有限公司 一种显示设备及多人场景下摄像头追踪方法
CN112866773B (zh) * 2020-08-21 2023-09-26 海信视像科技股份有限公司 一种显示设备及多人场景下摄像头追踪方法
CN112287790A (zh) * 2020-10-20 2021-01-29 北京字跳网络技术有限公司 影像处理方法、装置、存储介质及电子设备
CN114979457A (zh) * 2021-02-26 2022-08-30 华为技术有限公司 一种图像处理方法及相关装置
CN114979457B (zh) * 2021-02-26 2023-04-07 华为技术有限公司 一种图像处理方法及相关装置
WO2023142690A1 (zh) * 2022-01-25 2023-08-03 华为技术有限公司 一种复原拍摄的方法及电子设备
CN115334239A (zh) * 2022-08-10 2022-11-11 青岛海信移动通信技术股份有限公司 前后摄像头拍照融合的方法、终端设备和存储介质
CN115334239B (zh) * 2022-08-10 2023-12-15 青岛海信移动通信技术有限公司 前后摄像头拍照融合的方法、终端设备和存储介质
WO2024055823A1 (zh) * 2022-09-16 2024-03-21 荣耀终端有限公司 相机应用界面的交互方法及装置
WO2024082863A1 (zh) * 2022-10-21 2024-04-25 荣耀终端有限公司 图像处理方法及电子设备
CN115439616A (zh) * 2022-11-07 2022-12-06 成都索贝数码科技股份有限公司 基于多对象图像α叠加的异构对象表征方法
CN115439616B (zh) * 2022-11-07 2023-02-14 成都索贝数码科技股份有限公司 基于多对象图像α叠加的异构对象表征方法

Also Published As

Publication number Publication date
CN112262563B (zh) 2022-07-22
CN112262563A (zh) 2021-01-22

Similar Documents

Publication Publication Date Title
WO2020102978A1 (zh) 图像处理方法及电子设备
WO2020125410A1 (zh) 一种图像处理的方法及电子设备
KR102535607B1 (ko) 사진 촬영 중 이미지를 표시하는 방법 및 전자 장치
WO2020134891A1 (zh) 电子设备的拍照预览方法、图形用户界面及电子设备
WO2021169394A1 (zh) 基于深度的人体图像美化方法及电子设备
WO2020029306A1 (zh) 一种图像拍摄方法及电子设备
WO2022017261A1 (zh) 图像合成方法和电子设备
CN113935898A (zh) 图像处理方法、***、电子设备及计算机可读存储介质
CN113170037B (zh) 一种拍摄长曝光图像的方法和电子设备
CN112712470A (zh) 一种图像增强方法及装置
CN113973189B (zh) 显示内容的切换方法、装置、终端及存储介质
CN113810603B (zh) 点光源图像检测方法和电子设备
US20240153209A1 (en) Object Reconstruction Method and Related Device
CN113542580B (zh) 去除眼镜光斑的方法、装置及电子设备
CN110138999B (zh) 一种用于移动终端的证件扫描方法及装置
CN112150499A (zh) 图像处理方法及相关装置
CN115964231A (zh) 基于负载模型的评估方法和装置
CN114444000A (zh) 页面布局文件的生成方法、装置、电子设备以及可读存储介质
CN114756184A (zh) 协同显示方法、终端设备及计算机可读存储介质
WO2023000746A1 (zh) 增强现实视频的处理方法与电子设备
CN114283195B (zh) 生成动态图像的方法、电子设备及可读存储介质
WO2021204103A1 (zh) 照片预览方法、电子设备和存储介质
CN113495733A (zh) 主题包安装方法、装置、电子设备及计算机可读存储介质
CN115328592B (zh) 显示方法及相关装置
WO2024114257A1 (zh) 转场动效生成方法和电子设备

Legal Events

Date Code Title Description
     121  Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18940727; Country of ref document: EP; Kind code of ref document: A1)
     NENP Non-entry into the national phase (Ref country code: DE)
     122  Ep: pct application non-entry in european phase (Ref document number: 18940727; Country of ref document: EP; Kind code of ref document: A1)