CN117201954A - Method, device, electronic equipment and medium for acquiring beauty parameters


Info

Publication number: CN117201954A
Authority: CN (China)
Prior art keywords: image, beauty, face, type, user
Legal status: Pending
Application number: CN202210577515.2A
Other languages: Chinese (zh)
Inventor: 张帆 (Zhang Fan)
Current Assignee: Huawei Technologies Co Ltd
Original Assignee: Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Priority to CN202210577515.2A
Publication of CN117201954A

Landscapes

  • Image Processing (AREA)

Abstract

A method, a device, electronic equipment and a medium for acquiring beauty parameters relate to the technical field of image processing. They simplify the operation steps for obtaining custom beauty parameters, reduce the time consumed, and enable a beautified image, obtained by processing a face image with the custom beauty parameters, to achieve a beauty effect that satisfies the user. The specific scheme includes the following steps: first, acquisition of the custom beauty parameters is triggered by receiving a first operation of a user; in response to the first operation, the user is prompted to shoot a first type image, which is acquired in response to a second operation of the user; in the same prompting manner, the user is prompted to shoot a second type image containing the same face, which is acquired in response to a third operation of the user; finally, the custom beauty parameters are obtained from the first type image and the second type image.

Description

Method, device, electronic equipment and medium for acquiring beauty parameters
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, an electronic device, and a medium for obtaining a beauty parameter.
Background
With the wide application of electronic devices that have shooting functions, automatic face beautification can generally be selected while shooting photos or videos, that is, face images can be beautified according to beauty parameters. In particular, automatic face beautification may adjust the face image based on aesthetic theory (e.g., the average face, symmetry, the golden ratio, etc.). However, such generic beautification of face images according to aesthetic theory cannot meet the personalized requirements of users.
In the related art, the beauty treatment may be performed on a partial region (e.g., face shape, eyes, mouth, etc.) in the face image through different adjustment items. The above adjustment items may be further refined, for example, the adjustment items corresponding to the eyes may be split into adjustment items corresponding to the eye size, the eye brightness, and the eyeball color, so as to perform fine face-beautifying processing on the face image. The user can adjust the adjustment items one by one through the user interaction interface so as to obtain the beauty parameters corresponding to the adjustment results of each adjustment item, and then the face image is subjected to beauty treatment by adopting the beauty parameters to obtain the beauty image.
However, because the number of such adjustment items may range from a few to ten or more, or even several tens, and the adjustment items influence one another during the beautification process, the user must adjust them one by one and repeatedly re-adjust them according to the resulting beauty effect. Since the user is not a professional tuner, this way of obtaining beauty parameters is cumbersome and time-consuming, and the beautified image obtained by processing the face image with such beauty parameters can hardly achieve an effect that satisfies the user.
Disclosure of Invention
The embodiments of the present application provide a method, a device, electronic equipment and a medium for acquiring beauty parameters, which are intended to solve the problems that the existing way of acquiring beauty parameters is cumbersome to operate and time-consuming, and that a beautified image obtained by processing a face image with such beauty parameters can hardly achieve a beauty effect that satisfies the user.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical scheme:
in a first aspect, a method for obtaining beauty parameters is provided, the method comprising: first, acquisition of the custom beauty parameters is triggered by receiving and responding to a first operation of a user. Second, the user is prompted to shoot a first type image and guided to input a second operation, and the first type image is acquired in response to the second operation of the user. Third, the user is prompted to shoot a second type image and guided to input a third operation, and the second type image is acquired in response to the third operation of the user. Finally, the custom beauty parameters can be obtained from the first type image and the second type image without any further operation by the user. The second type image and the first type image contain the same face image, and the second type image differs from the first type image.
In one implementation, the first operation is received on a shooting preview interface, where the first operation may be a two-finger upward slide operation, so as to quickly trigger acquisition of the custom beauty parameters.
In another implementation, acquisition of the custom beauty parameters is triggered along a preset trigger path, through multiple interactions between the user and the interface.
In the present application, prompting the user to shoot the first type image and prompting the user to shoot the second type image may adopt at least one of the following prompting modes: displaying text prompt information, displaying image prompt information, and playing sound prompt information. For example, displaying "please shoot a plain face image", playing an animation of "how to aim the face at the face frame", playing the audio "please draw a satisfactory makeup for yourself", or playing a cue tone.
It should be understood that, in the shot preview interface, the text prompt information or the image prompt information may be displayed on the upper layer, and the preview image may be displayed on the lower layer. If the text or image cues partially overlap the preview image, the transparency of the text or image cues may be increased so that the user can see the complete preview image.
In the application, the first type image and the second type image can be stored in the cache, and after the custom beautifying parameters are obtained, the first type image and the second type image are temporarily reserved or deleted according to the use condition of the cache.
Thus, the custom beauty parameters are obtained by responding to the first operation, the second operation and the third operation in sequence, without repeatedly debugging the beauty parameters according to the beauty effect, so the operation steps are simple and little time is consumed. The face image with makeup can be understood as the facial effect the user expects to achieve, so a face image shot in daily use and processed with the custom beauty parameters can achieve a beauty effect that satisfies the user.
In another possible design manner of the first aspect, the user-defined beauty parameters are applied in the beauty treatment process, and the first image may be collected, and the second image may be obtained by processing the first image according to the user-defined beauty parameters.
It should be understood that after the camera acquires the first image, a second image is obtained in response to an image acquisition operation of the user on the first image, where the second image is an image obtained by processing the first image according to the custom beauty parameters.
In this way, the first image acquired is subjected to the beautifying processing in the process of obtaining the second image, namely the first image is processed according to the user-defined beautifying parameters, so that the speed of displaying the first image in response to the user operation is improved.
In another possible design manner of the first aspect, the custom beauty parameters are applied in the beauty treatment process, and before the second image is obtained by processing the first image according to the custom beauty parameters, the second image may be displayed on the shooting preview interface.
Therefore, the second image is displayed through the shot preview interface, namely, the second image obtained by processing the first image according to the user-defined beautifying parameters is displayed, and the user can observe the beautifying effect image (the second image) through the preview interface in real time, so that the shooting speed of the user is improved, and further, the system resources are saved.
In another possible design manner of the first aspect, the custom beauty parameters are applied in the beauty treatment process; before the first image is processed according to the custom beauty parameters to obtain the second image, the first image may be displayed on the shooting preview interface, which includes a custom beauty application control; the second image is then displayed on the shooting preview interface in response to a fourth operation of the user on the custom beauty application control, where the fourth operation is used to trigger enabling of the beauty effect.
In this way, the second image is displayed on the shooting preview interface in response to the fourth operation of the user on the custom beauty application control, so that the user can select to display the first image without beauty according to the instant requirement, or display the second image after the first image is processed according to the custom beauty parameter, so as to meet the diversified requirement of the user on the shot preview interface display. The user can observe the beauty effect image (second image) through the preview interface in real time so as to improve the shooting speed of the user and further save system resources.
In another possible design manner of the first aspect, for the second image, in a case of displaying the second image, a beautifying prompt message is sent to prompt the second image to be an image obtained by processing the first image according to the custom beautifying parameter.
It should be understood that the beauty prompt information may be text information, graphic information or sound information, and is delivered through the display screen or the speaker as a carrier, for example, by displaying or playing the beauty prompt information.
Therefore, by sending out the beautifying prompt information, the second image is prompted to be the beautifying image after the beautifying treatment so as to facilitate the user to determine whether to reprocess the second image.
In another possible design of the first aspect, after displaying the second image, a fifth operation input by the user on the second image may also be received, and an image editing interface may be displayed in response to the fifth operation. Wherein the image editing interface includes one or more controls for editing the second image.
In the present application, one or more controls may be used to adjust the hue, environmental enhancement, blurring, rotation, cropping, shooting parameters, filters, matting, graffiti, adding mosaics, adding special effects and text, etc. of the second image.
In this way, the second image can be reprocessed through one or more controls in the image editing interface so as to meet the diversified editing requirements of the user on the image effect.
In another possible design manner of the first aspect, the one or more controls include a beauty degree control bar, where the beauty degree control bar is used to adjust the strength with which the custom beauty parameters are applied to the second image. On this basis, in response to the user's adjustment operation on the beauty degree control bar, a third image is obtained and displayed. The third image is an image obtained through beauty treatment with the custom beauty parameters at the strength corresponding to the adjusted beauty degree control bar.
Therefore, the implementation strength of the custom beauty parameters is adjusted through the adjustment operation of the beauty degree control bar, so that the probability that the edited third image meets the diversified editing requirements is improved.
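As an illustrative sketch only (not part of the patent's disclosure), the strength adjustment described above can be modeled as a linear blend between the un-beautified image and the fully beautified image; the use of OpenCV's addWeighted and the 0-1 strength range are assumptions made for illustration.

```python
# Hypothetical sketch: blend the un-beautified image with the fully
# beautified image according to the beauty degree control bar strength.
import cv2

def apply_strength(first_image, second_image, strength):
    """Return a third image whose beautification intensity follows the
    control bar: strength 0.0 keeps the original, 1.0 keeps the full effect.
    Both images are assumed to have the same size and dtype."""
    strength = max(0.0, min(1.0, strength))
    # Linear blend between the original frame and the beautified frame.
    return cv2.addWeighted(second_image, strength, first_image, 1.0 - strength, 0)
```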
In another possible design manner of the first aspect, after prompting the user to shoot the second type image, a third operation of the user is received, and in response to the third operation, the second type image is acquired if a shooting trigger condition is satisfied. The shooting trigger condition includes at least one of the following: the background similarity of the second type image relative to the first type image is greater than a first preset threshold, the ambient light similarity of the second type image relative to the first type image is greater than a second preset threshold, and the shooting parameters of the second type image are the same as those of the first type image.
In this way, the shooting trigger conditions constrain the acquisition environment of the second type image and weaken the influence of that environment on its appearance, so that the difference between the first type image and the second type image lies mainly in the makeup effect. This increases the probability that an image beautified with the resulting custom beauty parameters achieves an effect that satisfies the user.
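The shooting trigger conditions can be illustrated with a minimal sketch; the specific similarity measures (grayscale histogram correlation for the background, mean brightness ratio for ambient light) and the threshold values are assumptions, since the patent only names the conditions, not how they are computed.

```python
# Illustrative sketch of the shooting trigger conditions; measures and
# thresholds are assumptions for illustration only.
import cv2
import numpy as np

def trigger_ok(first_img, second_img, first_params, second_params,
               bg_threshold=0.8, light_threshold=0.9):
    g1 = cv2.cvtColor(first_img, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(second_img, cv2.COLOR_BGR2GRAY)

    # Background similarity: correlation of grayscale histograms.
    h1 = cv2.calcHist([g1], [0], None, [64], [0, 256])
    h2 = cv2.calcHist([g2], [0], None, [64], [0, 256])
    cv2.normalize(h1, h1)
    cv2.normalize(h2, h2)
    bg_similarity = cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)

    # Ambient light similarity: ratio of mean brightness.
    m1, m2 = float(np.mean(g1)), float(np.mean(g2))
    light_similarity = min(m1, m2) / max(m1, m2) if max(m1, m2) > 0 else 1.0

    # Shooting parameters (e.g. ISO, exposure, focal length) must match.
    same_params = first_params == second_params

    return (bg_similarity > bg_threshold
            and light_similarity > light_threshold
            and same_params)
```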
In another possible design of the first aspect, an effect selection interface may also be displayed before the first operation of the user is received. The effect selection interface includes a custom beauty control, and the first operation is a click operation by the user on the custom beauty control.
In this way, the electronic device can receive the first operation of the user on the custom beauty control in the effect selection interface and trigger the guided flow for acquiring the beauty parameters, which improves the probability that the custom beauty parameters meet the user's requirements and thus the speed of acquiring them.
In another possible design manner of the first aspect, in the case that the first type image is a plain face image, the corresponding second type image is a makeup-carrying face image including the same face image as the first type image; or, in the case that the first type image is a face image with make-up, the corresponding second type image is a plain face image including the same face image as the first type image.
Therefore, the custom beauty parameters obtained from the first type image and the second type image can represent the change from the plain face image to the face image with makeup, so that an image processed according to the custom beauty parameters has a made-up beauty effect.
In another possible design manner of the first aspect, the custom beauty parameters are obtained from the first type image and the second type image by first dividing the two images into regions and then comparing the pixel differences of each pair of corresponding face regions. Specifically, this includes: dividing the plain face image among the first type image and the second type image into at least one first face region through a preset face region segmentation algorithm, and dividing the face image with makeup among the first type image and the second type image into at least one second face region through the preset face region segmentation algorithm; and, according to the region type of each first face region, comparing the pixel difference between the first face region and the corresponding second face region to obtain the custom beauty parameters, where the region types include a uniform color region type and a color difference region type.
In this way, the first face regions are classified according to makeup effect characteristics (the region types include the uniform color region type and the color difference region type), and a color mapping relationship between the plain face image and the face image with makeup is then established for each region type, which can improve the efficiency of acquiring the custom beauty parameters.
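The region-wise comparison can be sketched as follows, assuming a hypothetical segmentation step has already produced corresponding pixel sets per region; the representation of the result (a mean color offset for uniform color regions, a coarse per-quantile offset for color difference regions) is an assumption for illustration, not the patent's definition of the custom beauty parameters.

```python
# Hypothetical sketch of comparing corresponding face regions by region type.
import numpy as np

def region_color_offsets(plain_regions, makeup_regions):
    """plain_regions / makeup_regions: dicts mapping a region name
    (e.g. 'cheek', 'lips') to (pixels, region_type), where pixels is an
    (N, 3) array and region_type is 'uniform' or 'color_diff'."""
    params = {}
    for name, (plain_px, region_type) in plain_regions.items():
        makeup_px, _ = makeup_regions[name]
        if region_type == 'uniform':
            # Uniform color regions: a single mean color shift suffices.
            params[name] = np.mean(makeup_px, axis=0) - np.mean(plain_px, axis=0)
        else:
            # Color-difference regions: keep a coarse per-quantile mapping
            # so gradients (e.g. eye shadow) are preserved.
            qs = np.linspace(0, 100, 5)
            params[name] = (np.percentile(makeup_px, qs, axis=0)
                            - np.percentile(plain_px, qs, axis=0))
    return params
```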
In a second aspect, a first device is provided, the first device having functionality to implement the method of the first aspect described above. The functions can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes one or more modules corresponding to the functions described above.
In a third aspect, an electronic device is provided, comprising: one or more cameras, a processor, a display screen, memory and a communication module; the camera, the display screen, the memory, the communication module and the processor are coupled; wherein the memory is for storing computer program code comprising computer instructions which, when executed by the electronic device, cause the electronic device to perform the method of obtaining a cosmetic parameter as in any of the first aspects.
In a fourth aspect, there is provided a computer readable storage medium having stored therein computer instructions which, when run on an electronic device, enable the electronic device to perform the method of obtaining a beauty parameter of any one of the first aspects above.
In a fifth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of obtaining a cosmetic parameter of any one of the first aspects described above.
In a sixth aspect, a chip system is provided. The chip system comprises a processor and a communication interface, where the communication interface is used to communicate with a communication module outside the chip system, and the processor is used to run a computer program or instructions to implement the method for obtaining beauty parameters of any one of the above first aspects. The chip system may be composed of a chip, or may include a chip and other discrete devices.
The technical effects of any one of the design manners of the second aspect to the sixth aspect may be referred to the technical effects of the different design manners of the first aspect, and will not be repeated here.
Drawings
Fig. 1 is a schematic diagram of a method for acquiring beauty parameters according to an embodiment of the present application;
fig. 2 shows a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a device for acquiring beauty parameters according to an embodiment of the present application;
fig. 4 shows one of the flow diagrams of the method for acquiring the beauty parameters according to the embodiment of the present application;
FIG. 5 illustrates one of the schematic diagrams of the preview interface provided by the embodiments of the present application;
FIG. 6 is a schematic diagram illustrating an operation of a display effect selection interface according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an effect selection interface according to an embodiment of the present application;
Fig. 8 is a schematic diagram of a display interface of a drop-down menu of a custom beauty control according to an embodiment of the present application;
FIG. 9 is a second diagram illustrating an effect selection interface according to an embodiment of the application;
FIG. 10 is a schematic diagram of a display interface of a drop-down menu of a default beauty control provided by an embodiment of the present application;
FIG. 11 is a second diagram illustrating a preview interface provided by an embodiment of the present application;
FIG. 12 is a third diagram illustrating a preview interface provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of an image display interface according to an embodiment of the present application;
FIG. 14 shows a schematic diagram of a re-shoot operation provided by an embodiment of the present application;
FIG. 15 shows a fourth schematic diagram of a preview interface provided by an embodiment of the present application;
FIG. 16 is a diagram of a preview interface provided by an embodiment of the present application;
FIG. 17 is a diagram illustrating a preview interface provided by an embodiment of the present application;
FIG. 18 is a second flowchart of a method for obtaining beauty parameters according to an embodiment of the present application;
fig. 19 is a schematic diagram showing a front-to-back comparison of image beautification provided by the embodiment of the application;
FIG. 20 is a diagram of a preview interface provided by an embodiment of the present application;
FIG. 21 shows one of the schematic diagrams of an image editing interface provided by an embodiment of the present application;
FIG. 22 is a second schematic diagram of an image editing interface according to an embodiment of the present application;
fig. 23 shows a third flowchart of a method for obtaining a beauty parameter according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application. In the description of the present application, unless otherwise indicated, "at least one item(s)" below or the like means any combination of these items, including any combination of single item(s) or plural item(s). For example, at least one (one) of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or plural. In addition, in order to clearly describe the technical solution of the embodiments of the present application, it will be understood by those skilled in the art that "first", "second", etc. words of the present application distinguish identical items or similar items having substantially the same function and effect, and do not limit the number and execution order, and that "first", "second", etc. words of the present application do not necessarily differ. Meanwhile, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as examples, illustrations or explanations. Any embodiment or design described herein as "exemplary" or "e.g." in an embodiment should not be taken as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion that may be readily understood.
In a self-timer scene, the user is particularly concerned with the coordination of the facial features, skin smoothness, skin color, hair color and makeup effect in a face image. The user adjusts, one by one or in part, the adjustment items whose parameters can be adjusted; beauty parameters are obtained from these adjustment items, and the image to be beautified is processed according to the beauty parameters to obtain a beautified image. However, this way of acquiring beauty parameters is cumbersome and time-consuming, and because the user's understanding of how the adjustment items affect one another is limited, the beauty effect of the resulting image is difficult to make satisfying for the user.
The embodiments of the present application provide a method for obtaining beauty parameters, which can simplify the beauty processing flow for face images and obtain beautified images that meet the user's requirements. Specifically, please refer to fig. 1, which illustrates a schematic diagram of the method for obtaining beauty parameters according to an embodiment of the present application. First, the electronic device can acquire a plain face image and a face image with makeup of the same shooting subject. Then, an effect difference analysis is performed on the plain face image and the face image with makeup to obtain beauty parameters that reflect the user's preferred optimization direction for their own makeup. Finally, the beauty parameters can be applied to face images acquired when the electronic device shoots, and those face images are beautified to obtain beautified images.
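A high-level sketch of this three-step flow is given below; every helper callable is a hypothetical placeholder, since the patent defines the steps rather than an API.

```python
# High-level sketch of the fig. 1 flow; all helper callables are hypothetical.
def acquire_custom_beauty_parameters(capture_plain, capture_makeup, analyze_difference):
    plain_img = capture_plain()      # step 1a: plain face image of the subject
    makeup_img = capture_makeup()    # step 1b: made-up image of the same subject
    # step 2: effect difference analysis yields the custom beauty parameters
    return analyze_difference(plain_img, makeup_img)

def beautify_at_shooting_time(frame, params, apply_params):
    # step 3: reuse the stored parameters on every newly captured face image
    return apply_params(frame, params)
```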
The method for obtaining the beauty parameters provided by the embodiment of the application can be applied to electronic equipment, and refer to fig. 2, which shows a schematic structural diagram of the electronic equipment provided by the embodiment of the application.
As shown in fig. 2, the electronic device may include a processor 210, an external memory interface 220, an internal memory 221, a universal serial bus (universal serial bus, USB) interface 230, a charge management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an earphone interface 270D, a sensor module 280, keys 290, a motor 291, an indicator 292, a camera 293, a display 294, a user identification module (subscriber identification module, SIM) card interface 295, and the like. The sensor module 280 may include a pressure sensor 280A, a gyroscope sensor 280B, a barometric sensor 280C, a magnetic sensor 280D, an acceleration sensor 280E, a distance sensor 280F, a proximity sensor 280G, a fingerprint sensor 280H, a temperature sensor 280J, a touch sensor 280K, an ambient light sensor 280L, a bone conduction sensor 280M, and the like.
It should be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device. In other embodiments of the present application, the electronic device may include more or fewer components than illustrated, some components may be combined or split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 210 may include one or more processing units such as, for example: the processor 210 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can be a neural center and a command center of the electronic device. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 210 for storing instructions and data. In some embodiments, the memory in the processor 210 is a cache memory. The memory may hold instructions or data that the processor 210 has just used or recycled. If the processor 210 needs to reuse the instruction or data, it may be called directly from memory. Repeated accesses are avoided and the latency of the processor 210 is reduced, thereby improving the efficiency of the system.
The electronic device implements display functions through the GPU, the display screen 294, and the application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display screen 294 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
The display 294 is used to display images, videos, and the like. The display 294 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device may include 1 or N displays 294, N being a positive integer greater than 1.
The electronic device may implement shooting functions through an ISP, a camera 293, a video codec, a GPU, a display 294, an application processor, and the like.
The ISP is used to process the data fed back by the camera 293. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, so that the electrical signal is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 293.
The camera 293 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, the electronic device may include 1 or N cameras 293, N being a positive integer greater than 1.
The internal memory 221 may be used to store computer executable program code that includes instructions. The processor 210 executes various functional applications of the electronic device and data processing by executing instructions stored in the internal memory 221. The internal memory 221 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device (e.g., audio data, phonebook, etc.), and so forth. In addition, the internal memory 221 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device may implement audio functions through an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an ear-headphone interface 270D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 270 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 270 may also be used to encode and decode audio signals. In some embodiments, the audio module 270 may be disposed in the processor 210, or some functional modules of the audio module 270 may be disposed in the processor 210.
Speaker 270A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device may listen to music, or to hands-free conversations, through speaker 270A.
A receiver 270B, also referred to as a "earpiece", is used to convert the audio electrical signal into a sound signal. When the electronic device picks up a phone call or voice message, the voice can be picked up by placing the receiver 270B close to the human ear.
Microphone 270C, also referred to as a "microphone" or "microphone," is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can sound near the microphone 270C through the mouth, inputting a sound signal to the microphone 270C. The electronic device may be provided with at least one microphone 270C. In other embodiments, the electronic device may be provided with two microphones 270C, and may implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device may also be provided with three, four, or more microphones 270C to enable collection of sound signals, noise reduction, identification of sound sources, directional recording functions, etc.
The earphone interface 270D is for connecting a wired earphone. Earphone interface 270D may be USB interface 230 or a 3.5mm open mobile electronic device platform (open mobile terminal platform, OMTP) standard interface, american cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 280A is used to sense a pressure signal and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 280A may be disposed on the display 294. There are many types of pressure sensor 280A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates made of conductive material. When a force is applied to the pressure sensor 280A, the capacitance between the electrodes changes, and the electronic device determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display 294, the electronic device detects the intensity of the touch operation through the pressure sensor 280A. The electronic device may also calculate the location of the touch based on the detection signal of the pressure sensor 280A. In some embodiments, touch operations that act on the same touch location but with different intensities may correspond to different operation instructions. For example: when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
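The pressure-threshold dispatch in the example above can be sketched as follows; the threshold value and the function and instruction names are assumptions made purely for illustration.

```python
# Illustrative sketch: dispatch different instructions for touches at the
# same location based on touch strength; the threshold is an assumption.
FIRST_PRESSURE_THRESHOLD = 0.5  # normalized pressure, assumed value

def on_message_icon_touch(pressure):
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"      # light press: view messages
    return "create_short_message"        # firm press: create a new message
```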
A distance sensor 280F for measuring distance. The electronic device may measure the distance by infrared or laser. In some embodiments, the scene is photographed and the electronic device can range using the distance sensor 280F to achieve quick focus.
The ambient light sensor 280L is used to sense ambient light level. The electronic device may adaptively adjust the brightness of the display 294 based on the perceived ambient light level. The ambient light sensor 280L may also be used to automatically adjust white balance during photographing. Ambient light sensor 280L may also cooperate with proximity light sensor 280G to detect if the electronic device is in a pocket to prevent false touches.
The fingerprint sensor 280H is used to collect a fingerprint. The electronic equipment can utilize the collected fingerprint characteristics to realize fingerprint unlocking, access the application lock, fingerprint photographing, fingerprint incoming call answering and the like.
The touch sensor 280K, also referred to as a "touch panel". The touch sensor 280K may be disposed on the display screen 294, and the touch sensor 280K and the display screen 294 form a touch screen, which is also referred to as a "touch screen". The touch sensor 280K is used to detect a touch operation acting on or near it. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 294. In other embodiments, the touch sensor 280K may also be disposed on a surface of the electronic device at a different location than the display 294.
Keys 290 include a power on key, a volume key, etc. The keys 290 may be mechanical keys. Or may be a touch key. The electronic device may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device.
On the shooting preview interface of the camera application, the user can observe the preview image. The shooting preview interface may also provide an application control for applying beauty treatment to the preview image and a parameter adjustment control for adjusting the beauty parameters used in the beauty treatment; based on these controls, the user can choose whether to beautify the preview image and can adjust the beauty parameters used.
Referring to fig. 3, a schematic structural diagram of an apparatus for obtaining beauty parameters according to an embodiment of the present application is shown, based on the electronic device shown in fig. 2. The user is first guided to input the related operations through an interactive interface displayed on the display 294 of the electronic device. The plain face image and the face image with makeup of the same shooting subject (i.e., the image taken after the user completes makeup according to their own preference) are then acquired by the camera 293. The plain face image and the face image with makeup may be stored or cached in the internal memory 221. Then, the effect difference analysis is performed on the plain face image and the face image with makeup by the processor 210. Finally, the custom beauty parameters are obtained from the effect differences, where the programs implementing the effect difference analysis and the custom beauty parameters may be stored in the internal memory 221. After the custom beauty parameters are obtained, they can be used to beautify face images, so that the beautified images achieve a beauty effect that satisfies the user.
The method for acquiring beauty parameters provided by the embodiments of the present application is described below with reference to the accompanying drawings. Specifically, referring to fig. 4, one of the flow diagrams of the method for acquiring beauty parameters provided by the present application is shown; the method may be applied to an electronic device. As shown in fig. 4, the method includes:
step 401, the electronic device receives a first operation of a user.
The first operation is used to trigger acquisition of the custom beauty parameters. In the embodiment of the present application, the first operation may be a key trigger operation input through a mechanical key (for example, long-pressing the volume up key and the volume down key), a screen trigger operation of a preset operation or preset gesture input through the touch screen, an image recognition trigger operation of a specific image acquired through the camera, a voice trigger operation of a preset voice received through the audio module, or a wireless trigger operation sent by another electronic device.
In some embodiments, the electronic device may receive the first operation on the shooting preview interface. For example, the first operation may be a two-finger upward slide operation by the user on the shooting preview interface shown in fig. 5.
In other embodiments, the electronic device can display an effect selection interface that includes a custom beauty control. The first operation may be a click operation by the user on the custom beauty control. The effect selection interface is displayed in response to user operations input successively along a preset path.
For example, the electronic device displays the shooting parameter setting interface shown in fig. 6 (b) in response to a click operation by the user on the "More" control in the shooting preview interface shown in fig. 6 (a). Then, the electronic device displays the effect selection interface shown in fig. 6 (c) in response to a click operation by the user on the beauty effect control in the shooting parameter setting interface. Finally, the electronic device may receive the first operation on the custom beauty control in the effect selection interface.
In one example, if no custom beauty parameters exist, the electronic device sets the custom beauty control to the on state in response to a click operation by the user on the status switch on the right side of the custom beauty control shown in (c) of fig. 6; that is, it receives the first operation on the custom beauty control at the effect selection interface and triggers acquisition of the custom beauty parameters.
In another example, if custom beauty parameters already exist, the electronic device sets the custom beauty control to the on state in response to a click operation by the user on the status switch on the right side of the custom beauty control (as shown in (a) of fig. 7), and displays a drop-down menu of the custom beauty control, where the drop-down menu includes a beauty effect collection control and at least one custom beauty effect control. If the switch on the right side of the custom beauty control in the effect selection interface is already in the on state, the drop-down menu of the custom beauty control is displayed in response to a click operation by the user on the ">>" identifier (as shown in (b) of fig. 7).
As shown in fig. 8 (a), the electronic device may display the drop-down menu of the custom beauty control on the effect selection interface and divide the effect selection interface into a first display area and a second display area. Specifically, the electronic device sets the area corresponding to the custom beauty control as the first display area, sets the area corresponding to the drop-down menu of the custom beauty control as the second display area, and uses different fonts, font sizes, colors and shading for the controls and icons in the first display area and the second display area for differentiated display, for example, reducing the transparency of the first display area, setting its shading to gray, and setting the first display area as a non-operable area.
As shown in fig. 8 (b), after receiving an operation of displaying a drop-down menu of the custom beauty control, the electronic device opens the custom beauty effect interface and displays the drop-down menu of the custom beauty control. The user-defined beautifying effect interface and the effect selection interface are different interfaces.
It should be noted that, the electronic device receives the first operation of the user input on the beauty effect acquisition control in the effect selection interface, that is, triggers to acquire the custom beauty parameters in a multistage control manner, so that the probability that the acquired custom beauty parameters meet the user requirements can be improved, and the speed of acquiring the custom beauty parameters is further improved.
In the embodiment of the present application, the effect selection interface may further include a default beauty control; as shown in (a) of fig. 9, the effect selection interface includes a default beauty control and a custom beauty control. The default beauty control and the custom beauty control cannot both be in the on state at the same time.
In one example, as shown in (b) of fig. 9, if there is a unique default beauty effect and its beauty parameters cannot be modified, the electronic device sets the default beauty control to the on state in response to a click operation by the user on the status switch on the right side of the default beauty control; that is, the image displayed on the shooting preview interface is beautified according to the beauty parameters corresponding to the default beauty effect.
In another example, as shown in (b) of fig. 9, if there are at least two default beauty effects, and/or the beauty parameters can be modified, the electronic device sets the default beauty control to the on state in response to a click operation by the user on the status switch on the right side of the default beauty control, and displays a drop-down menu of the default beauty control. As shown in (c) of fig. 9, if the switch on the right side of the default beauty control is in the on state, the electronic device may display the drop-down menu of the default beauty control in response to a click operation by the user on the ">>" identifier.
It should be noted that, the drop-down menu of the default beauty control may include a default beauty effect, a personalized beauty effect, and a personalized setting. The default beautifying effect can be girl make-up effect, student make-up effect, banquet make-up effect and the like. The personalized beauty effect is to acquire personalized beauty parameters through the adjustment degree of the custom beauty adjustment items, and present the personalized beauty effect.
As shown in (a) of fig. 10, the electronic device may display the drop-down menu of the default beauty control on the effect selection interface and divide the effect selection interface into a first display area and a second display area. Specifically, the electronic device may set the area corresponding to the default beauty control and the custom beauty control as the first display area, set the display area corresponding to the drop-down menu of the default beauty control as the second display area, and use different fonts, font sizes, colors and shading for the controls and icons in the first display area and the second display area for differentiated display. For example, the transparency of the first display area is reduced, its shading is set to gray, and it is set as a non-operable area.
As shown in fig. 10 (b), after receiving an operation of displaying a drop-down menu of the default beauty control, the electronic device may further open a default beauty effect interface to display the drop-down menu of the default beauty control. The default beautifying effect interface and the effect selection interface are different interfaces.
In this way, the default beauty control and the custom beauty control are displayed on the effect selection interface, giving the user two ways of acquiring beauty parameters; this enables a diversity of beauty effects and improves the possibility that the beauty effect meets the user's actual needs.
Step 402, the electronic device prompts a user to shoot a first type image in response to a first operation, and acquires the first type image in response to a second operation of the user.
In the embodiment of the application, the electronic equipment can respond to the first operation to display a shot preview interface. The electronic device may display a prompt message prompting the user to capture the first type of image on the captured preview interface. And the electronic equipment can display the preview image acquired by the camera on the shot preview interface.
The first type of image can be a plain face image or a face image with makeup.
In some embodiments, the prompting manner for prompting the user to shoot the first type image may be: displaying text prompt information and/or displaying image prompt information. The image prompt information may be a statically displayed image or a dynamically played image (e.g., an animation or video). For example, the text prompt information may be "please shoot a plain face image", and the image prompt information may be an animation of how to aim the face at the face frame. The statically displayed text or image prompt information, and/or the dynamically played image prompt information (played once or in a loop), may be displayed after the first operation is responded to and before the second operation is responded to.
In other embodiments, the prompting manner for prompting the user to shoot the first type image may be: playing voice prompt information. Specifically, the electronic device may play the voice prompt information through the speaker, and the voice prompt information may be a cue tone or a prompt voice. For example, the voice prompt information may be the voice "please draw a satisfactory makeup for yourself". The electronic device may set the voice prompt information to be played once or in a loop.
It should be understood that, on the shooting preview interface, the electronic device may display the text prompt information or image prompt information on the upper layer and display the preview image on the lower layer. If the text or image prompt information partially overlaps the preview image, the electronic device may adjust the transparency of the prompt information so that the user can see the complete preview image.
In the present application, for the prompt information that prompts the user to shoot the first type image, the electronic device may stop prompting in response to an operation of the user, stop prompting when the prompt duration exceeds a preset time, or stop prompting after receiving the second operation of the user.
Taking the first type of image as a face image of a plain face as an example, as shown in (a) of fig. 11, the electronic device displays text prompt information on a shot preview interface: please shoot a face image of the plain face. The prompt information can be displayed by a message prompt box, bubbles, labels and the like. For the display mode of the message prompt box, the prompt information can be statically displayed in the message prompt box at the preset position, can be scrolled and displayed in the message prompt box at the preset position, can be statically displayed in the message prompt box which is displayed according to the preset movement rule, and can be scrolled and displayed in the message prompt box which is displayed according to the preset movement rule.
For example, as shown in (b) of fig. 11, the electronic device may further display a face frame on the shooting preview interface, together with the prompt message "place the face in the face frame". The boundary of the face frame is the outline of a face image (a front face image), and the user is prompted to provide a front face image so as to improve the probability that the image obtained after beauty treatment with the custom beauty parameters meets the user's requirements. It will be appreciated that the boundary of the face frame is a generic, approximate contour that can fit the face images of all users.
Optionally, in the embodiment of the present application, since the first type image may be a plain face image or a face image with makeup, the electronic device may set the face makeup attribute of the first type image to plain face image, may set it to face image with makeup, or may detect the face makeup attribute of the first type image and use the detection result as its attribute, where the detection result is either plain face image or face image with makeup.
It can be understood that, when detecting the face makeup attribute of the first type image, the electronic device can detect the color of the skin around the eyes, the smoothness of the skin, or the contour of the eyebrows through an image recognition algorithm, and thereby determine whether the image is a plain face image or a face image with makeup. Thereafter, the electronic device marks the first type image with the detection result as a tag. The detection result of the face makeup attribute can be identified by a preset code, a preset identifier or a preset file name: the preset code can be 00, 01 or 88; the preset identifier can be a triangle, a circle or an exclamation mark; the preset file name can fix a single character at the first position of the file name, fix several characters at the first few positions, fix a single character at the last position, or fix several characters at the last few positions. The fixed characters may be the same character or different characters, which is not limited in the present application.
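A sketch of tagging the first type image with the detected face makeup attribute is given below; detect_makeup() is a hypothetical stand-in for the image recognition step, and the file-name convention shown is only one of the preset conventions listed above, with the codes "00"/"01" taken from the preset-code example.

```python
# Sketch of tagging the acquired first type image with its makeup attribute.
from pathlib import Path

PLAIN_CODE, MAKEUP_CODE = "00", "01"

def tag_first_type_image(image_path, detect_makeup):
    has_makeup = detect_makeup(image_path)       # True if makeup is detected
    code = MAKEUP_CODE if has_makeup else PLAIN_CODE
    # Preset file-name convention: fix the first characters of the file name.
    p = Path(image_path)
    tagged = p.with_name(f"{code}_{p.name}")
    p.rename(tagged)                             # persist the tag in the name
    return tagged, code
```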
In an embodiment of the present application, as shown in fig. 12, after the preview image is displayed on the photographed preview interface, the electronic device obtains the first type image in response to a second operation (e.g., a click operation on the photographing operation control) of the user. In order to ensure the stability of the photographing light, the flash may be set to be turned off.
The second operation is used to acquire the first type image. The second operation can be a key trigger operation input through a mechanical key (such as simultaneously pressing the volume up key and the volume down key), a screen trigger operation of a preset operation or preset gesture input through the touch screen, an image recognition trigger operation of a specific image acquired through the camera, a voice trigger operation of a preset voice received through the audio module, or a wireless trigger operation sent by another electronic device.
In one example, after the first type of image is acquired, the electronic device may not display the first type of image, directly buffer the first type of image, and jump to the next step.
In another example, as shown in fig. 13, after the first type image is acquired, the electronic device may also display the first type image and display prompt operation controls including a re-shoot control and a continue control for jumping to the next step. The electronic device may prompt the user to shoot the first type image again in response to a click operation on the re-shoot control, and re-acquire the first type image when a second operation is received again. The electronic device may also jump to the next step in response to a click operation on the continue control. Through such repeated operations, a first type image that satisfies the user is acquired.
Optionally, in the embodiment of the present application, after the first type of image is acquired, the electronic device may further detect whether the first type of image is a front face image, and jump to the next step if the first type of image is a front face image; in the case where the first type image is not a front face image, the first type image is reacquired or the process jumps to the next step in response to an operation by the user.
For example, after the preview image shown in (a) of fig. 14 is displayed on the photographed preview interface, and the first type image is acquired and displayed in response to the user's click operation on the photographing operation control, if it is detected that the first type image is not a front face image, then as shown in (b) of fig. 14, a re-shooting prompt, a re-shoot control, and a continue control may be further displayed, where the re-shooting prompt is, for example, "please capture a front face image". Thereafter, as shown in (c) in fig. 14, the electronic device deletes the first type image in response to the user's click operation on the re-shoot control, and redisplays the photographed preview interface for capturing the first type image as shown in (d) in fig. 14. Alternatively, as shown in (e) of fig. 14, the electronic device jumps to the next step in response to the user's click operation on the continue control, and displays the photographed preview interface for capturing the second type image as shown in (f) in fig. 14.
Wherein the re-shoot control and the continue control can be operated by a single click, a double click, or a long press. After the first type image, the re-shooting prompt, the re-shoot control, and the continue control are displayed, if no click operation from the user is received within a preset time, the electronic device defaults to the behavior of the re-shoot control, deletes the first type image, and redisplays the photographed preview interface for capturing the first type image.
In this way, after the first type image is acquired and before the next step is executed, the first type image, the re-shooting prompt, the re-shoot control, and the continue control are displayed, so that a click operation on the re-shoot control or the continue control can be received and a first type image that satisfies the user can be acquired.
Optionally, in the embodiment of the present application, before responding to the second operation of the user, the electronic device may further detect whether the preview image displayed on the photographed preview interface is a front face image; if the detection result is no, displaying adjustment prompt information shown in (a) in fig. 15, wherein the adjustment prompt information is used for prompting a user to adjust the distance and the angle between the face and the camera so as to acquire a preview image meeting the requirements; if the detection result is yes, an operation prompt message as shown in (b) in fig. 15 is displayed.
It should be understood that the adjustment prompt is determined according to the actual display state of the preview image, for example, please move the face upward, please move the face downward, please turn the head right, please turn the head left, and so on. The operation prompt information is used for prompting the user to input a second operation.
In this way, by detecting whether the preview image in the photographed preview interface is a front face image, displaying the adjustment prompt when it is not, and displaying the prompt for inputting the second operation when it is, the probability that the first type image meets the user's requirement can be improved.
Further optionally, in an embodiment of the present application, the electronic device detects whether the preview image or the first type image is a front face image, specifically including: acquiring a face coordinate list corresponding to the preview image or the first type image, where the face coordinate list includes two sets of coordinates of the face rectangular frame, the left and right eye coordinates, the nose coordinates, and the left and right mouth corner coordinates; determining that the preview image or the first type image is not a front face image when a null value exists among the values in the face coordinate list; and determining that the preview image or the first type image is a front face image when no null value exists among the values in the face coordinate list.
Therefore, through the method, the first type image can be ensured to be the front face image, so that the probability that the custom beauty parameters obtained according to the first type image meet the requirements of users is improved.
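As an illustration, the front face check described above can be sketched in Python as follows. The landmark key names are assumptions of this sketch; the logic is simply the null-value test on the face coordinate list.

# Minimal sketch of the front-face check: the face coordinate list is treated
# as a dictionary, and the image counts as a front face image only if no
# required entry is missing (None stands for a null value).
from typing import Dict, Optional, Tuple

Point = Optional[Tuple[float, float]]

def is_front_face(face_coords: Dict[str, Point]) -> bool:
    required = [
        "rect_top_left", "rect_bottom_right",
        "left_eye", "right_eye", "nose",
        "left_mouth_corner", "right_mouth_corner",
    ]
    # A null value for any landmark means the detector could not find it,
    # so the image is not treated as a front face image.
    return all(face_coords.get(key) is not None for key in required)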
Optionally, in the embodiment of the present application, after prompting to capture the first type of image, the electronic device may further display the gallery interface in response to an image importing operation of the user, then select a target image from the gallery in response to a selection operation of the user for the image in the gallery interface, then display the target image on a captured preview interface, and finally obtain the first type of image in response to a second operation of the user.
Thus, by importing a saved image from the gallery to obtain the first type image, the custom beauty parameters can still be obtained when the environment where the user is located is not suitable for changing the face makeup attribute (that is, applying makeup to change from the plain state to the made-up state, or removing makeup to change from the made-up state to the plain state), which reduces the environmental limitation on obtaining the custom beauty parameters and improves the environmental adaptability of the custom beauty function. Meanwhile, the user can shoot a plurality of images in the plain state or the made-up state in advance and select the most satisfactory one as the first type image, which solves the problem that a photo shot on the spot may not be satisfactory, avoids repeated acquisition of the first type image in the process of obtaining the custom beauty parameters, and improves the efficiency of obtaining the custom beauty parameters.
It should be noted that, in order to obtain the custom beauty parameters, both a plain face image and a makeup-carrying face image of the face need to be obtained. In an actual scenario, for any user, the makeup process from the plain state to the made-up state, or the makeup-removal process from the made-up state to the plain state, may take a long time. During this period, the electronic device may exit the photographing application, or the photographing application may run in the background, which may interrupt the operation flow of acquiring the custom beauty parameters. Therefore, the electronic device may configure the beauty effect collection control to include an effect collection menu, which contains a capture-first-type-image control and a capture-second-type-image control. If an operation interruption occurring after the first type image is acquired is detected, the first type image is saved, and the capture-first-type-image control in the effect collection menu is marked to indicate that the first type image has been acquired. After the first type image is saved, if a click operation on the beauty effect collection control is received again, the effect collection menu is displayed in response to that click operation. The electronic device may then re-prompt the shooting of the first type image in response to the user's click operation on the capture-first-type-image control and acquire the first type image in response to the second operation, or jump to the next step in response to the user's click operation on the capture-second-type-image control.
Step 403, the electronic device prompts the user to shoot the second type of image, and the second type of image is acquired in response to the third operation of the user.
In the embodiment of the present application, the prompting mode for prompting the user to shoot the second type image is similar to the prompting mode for prompting the user to shoot the first type image, and is not described herein again. The second type image includes the same face image as the first type image, and the second type image is different from the first type image.
In the embodiment of the application, when the first type image is a face image of a plain face, the second type image is a face image with makeup, which comprises the same face image as the first type image; alternatively, in the case where the first type image is a makeup-carrying face image, the second type image is a plain face image that includes the same face image as the first type image.
In one example, if the detection result of the first type image is a plain face image, the second type image should be a makeup-carrying face image, and the image capture requirement may prompt the user to apply makeup or show similar information, for example, as shown in fig. 16, "please apply a makeup look you are satisfied with".
In another example, if the detection result of the first type image is a makeup-carrying face image, the second type image should be a plain face image, and the image capture requirement may prompt the user to remove makeup or show similar information.
In the embodiment of the present application, after the first type image is acquired, as shown in fig. 16, the electronic device displays the photographed preview interface again. And the electronic equipment displays prompt information for prompting the user to shoot the second type of image on the shot preview interface. And the electronic equipment can also display the preview image acquired by the camera on a shot preview interface. The electronic device may receive a third operation of the user at the photographed preview interface, and acquire the second type image in response to the third operation.
Wherein the third operation is used for acquiring the second type image. In the embodiment of the present application, the third operation may be a key trigger operation input through mechanical keys (for example, long-pressing the volume-up key and the volume-down key), a screen trigger operation of a preset operation or a preset gesture input through the touch screen, an image recognition trigger operation of a specific image captured by the camera, a voice trigger operation of a preset voice received through the audio module, or a wireless trigger operation sent by another electronic device.
Optionally, in the embodiment of the present application, the electronic device may further return to the previous step to reacquire the first type image through an interface control on the photographed preview interface or through screen gesture recognition. The user can tell, from the displayed prompt information, whether the currently photographed preview interface is used for acquiring the first type image or the second type image.
Illustratively, as shown in (a) of fig. 17, the user adjusts the distance and angle between the electronic device and himself according to the prompt information shown in fig. 16, so that the target image is displayed on the photographed preview interface. The electronic device also displays prompt information on the photographed preview interface, for example, "please apply a makeup look you are satisfied with", "please shoot a makeup-carrying face image", or "please place the face within the face frame". Then, the electronic device acquires the second type image in response to the user's click operation on the shooting control. After the second type image is acquired, the electronic device may not display it, but directly buffer the second type image and jump to the next step. Alternatively, after the second type image is acquired, as shown in (b) of fig. 17, the electronic device may display the second type image and display prompt operation controls, including a re-shoot control and an acquire-beauty-parameter control, before jumping to the next step. The electronic device may prompt the user to capture the second type image again in response to a click operation on the re-shoot control, and re-acquire the second type image when the third operation is received again. The electronic device may also jump to the next step in response to a click operation on the acquire-beauty-parameter control. Through such repeated operations, a second type image that satisfies the user is acquired.
In the embodiment of the present application, the electronic device acquires the second type image in response to the third operation of the user when a shooting trigger condition is satisfied, where the shooting trigger condition includes at least one of the following: the background similarity of the second type image to the first type image is greater than a first preset threshold, the ambient light similarity of the second type image to the first type image is greater than a second preset threshold, and the shooting parameters of the second type image are the same as those of the first type image. The closer the first preset threshold and the second preset threshold are to 100%, the better the obtained custom beauty parameters can meet the user's requirements. The flash state in the shooting parameters corresponding to the first type image and the second type image is the off state. Ambient light may include light intensity and light color.
Further optionally, in the embodiment of the present application, after receiving the third operation of the user, if the shooting trigger condition is not satisfied, the electronic device repeatedly detects whether the shooting trigger condition is satisfied, and acquires the second type image once it detects that the shooting trigger condition is satisfied. If the third operation is received again during the period after the user's third operation is received and while the shooting trigger condition is still not satisfied, the electronic device responds to the third operation only once after the shooting trigger condition is satisfied.
It should be further noted that capturing the first type image and the second type image under indoor light may be chosen to increase the probability that the ambient light similarity of the second type image to the first type image is greater than the second preset threshold.
Therefore, the acquisition environment of the second type image is constrained by the shooting trigger condition, which weakens the influence of the acquisition environment on the display effect of the second type image, so that the difference between the first type image and the second type image is limited to the makeup effect, and the probability that an image processed with the custom beauty parameters achieves an effect satisfactory to the user is improved.
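A minimal sketch of the shooting trigger condition is given below. The similarity functions are placeholders supplied by the caller, and the 0.9 thresholds are illustrative only, since the embodiment only requires the thresholds to be preset.

# Hedged sketch: the second type image is captured only when background
# similarity, ambient-light similarity and the shooting parameters all match
# the first type image (with the flash off in both cases).
def shooting_trigger_met(first_img, preview_img,
                         background_similarity, ambient_light_similarity,
                         first_params: dict, preview_params: dict,
                         bg_threshold: float = 0.9,
                         light_threshold: float = 0.9) -> bool:
    if background_similarity(first_img, preview_img) <= bg_threshold:
        return False
    if ambient_light_similarity(first_img, preview_img) <= light_threshold:
        return False
    # Shooting parameters (including the off-state flash) must be identical.
    return first_params == preview_params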
Generally, in order to meet the user's personalized requirement for beauty images, the custom beauty parameters need to be obtained and used to perform beauty processing on the user's face image, and the custom beauty parameters are generally applied to the owner of the electronic device. Therefore, the electronic device may set the permission for acquiring the custom beauty parameters by means of face recognition, specifically including: performing face recognition on the face image in the photographed preview interface, and acquiring the first type image and the second type image only when the face image matches the owner's face.
Optionally, in the embodiment of the present application, after prompting to capture the second type of image, the electronic device may further display a gallery interface in response to an image importing operation of the user, and then, the electronic device selects the target image from the gallery in response to a selection operation of the user for the image in the gallery interface. And finally, the electronic equipment responds to a third operation of the user to acquire a second type image.
Thus, by importing a saved image from the gallery to obtain the second type image, the custom beauty parameters can still be obtained when the environment where the user is located is not suitable for changing the face makeup attribute (that is, applying makeup to change from the plain state to the made-up state, or removing makeup to change from the made-up state to the plain state), which reduces the environmental limitation on obtaining the custom beauty parameters and improves the environmental adaptability of the custom beauty function. Meanwhile, the user can shoot a plurality of images in the plain state or the made-up state in advance and select the most satisfactory one as the second type image, which solves the problem that a photo shot on the spot may not be satisfactory, avoids repeated acquisition of the second type image in the process of obtaining the custom beauty parameters, and improves the efficiency of obtaining the custom beauty parameters.
It should be noted that the electronic device may configure the beauty effect collection control to include an effect collection menu, which contains a capture-first-type-image control and a capture-second-type-image control. If an operation interruption occurring before the second type image is acquired is detected, the electronic device saves the first type image and marks the capture-first-type-image control in the effect collection menu to indicate that the first type image has been acquired. If an operation interruption occurring after the second type image is acquired is detected, the electronic device saves both the first type image and the second type image, and marks the capture-first-type-image control and the capture-second-type-image control in the effect collection menu to indicate that both images have been acquired. Then, after the first type image, or the first type image and the second type image, are saved, if a click operation on the beauty effect collection control is received again, the electronic device displays the effect collection menu in response to that click operation. The electronic device may re-prompt the shooting of the first type image in response to the user's click operation on the capture-first-type-image control and acquire the first type image in response to the second operation, or re-prompt the shooting of the second type image in response to the user's click operation on the capture-second-type-image control and acquire the second type image in response to the third operation.
Step 404, the electronic device obtains the custom beauty parameters according to the first type image and the second type image.
Optionally, in the embodiment of the present application, the electronic device determines the face image of the plain face (as shown in fig. 13) and the face image of the makeup face (as shown in fig. 17 (b)) according to a default setting manner or an image identifier, and obtains the custom beauty parameters according to the pixel difference between the face image of the plain face and the face image of the makeup face. The process of acquiring the beauty parameters may be implemented by an image processing chip or by a processor in the electronic device (in which the application program having the function of determining the beauty parameters is installed). It should be noted that if the first type image is a face image of a plain face, the second type image is a face image of a makeup-carrying face, and conversely, if the first type image is a face image of a makeup-carrying face, the second type image is a face image of a plain face.
In the embodiment of the present application, the custom beauty parameter is a Look-Up Table (LUT), which may also be called a LUT file or LUT parameter, and is a mapping Table of Red Green Blue (RGB). Specifically, an image includes a plurality of pixels, each pixel being represented by an RGB value. The display screen of the electronic device may display the image according to the RGB values for each pixel in the image. That is, these RGB values tell the display how to illuminate to mix out the various colors for presentation to the user. If it is desired to change the color (or style, effect) of the image, these RGB values may be adjusted. The LUT is an RGB mapping table, which is used to characterize the correspondence between RGB values before and after adjustment. For example, please refer to table 1, which shows an example of a LUT.
TABLE 1

Original RGB value      Output RGB value
(14, 22, 24)            (6, 9, 4)
(61, 34, 67)            (66, 17, 47)
(94, 14, 171)           (117, 82, 187)
(241, 216, 222)         (255, 247, 243)
When the original RGB value is (14,22,24), the output RGB value is (6,9,4) through the mapping of the LUT shown in table 1. When the original RGB value is (61,34,67), the output RGB value is (66,17,47) through the mapping of the LUT shown in table 1. When the original RGB value is (94,14,171), the output RGB value is (117,82,187) through the mapping of the LUT shown in table 1. When the original RGB value is (241,216,222), the output RGB value is (255,247,243) through the mapping of the LUT shown in table 1.
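As an illustration, the following Python sketch (NumPy is assumed to be available) applies an RGB look-up table such as Table 1 to an image; in this simplified version, colors not listed in the table are left unchanged.

# Minimal sketch of LUT application: each pixel whose original RGB triple
# appears in the table is replaced by the mapped triple.
import numpy as np

LUT_TABLE = {
    (14, 22, 24): (6, 9, 4),
    (61, 34, 67): (66, 17, 47),
    (94, 14, 171): (117, 82, 187),
    (241, 216, 222): (255, 247, 243),
}

def apply_lut(image: np.ndarray) -> np.ndarray:
    # image: H x W x 3 uint8 array; returns a mapped copy.
    out = image.copy()
    for (r, g, b), (nr, ng, nb) in LUT_TABLE.items():
        mask = (image[..., 0] == r) & (image[..., 1] == g) & (image[..., 2] == b)
        out[mask] = (nr, ng, nb)
    return out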
For the same image, the display effect of the image processed by the LUT is different from the display effect of the image not processed by the LUT; and processing the same image with different LUTs can produce display effects of different styles. The "display effect" of an image in the embodiment of the present application refers to the image effect that can be observed by human eyes after the image is displayed on the display screen.
In the embodiment of the present application, because the face images contained in the plain face image and the makeup-carrying face image are highly similar, differing only in effect dimensions such as color and brightness, the colors of the face regions in the plain face image and in the makeup-carrying face image can be extracted pixel by pixel, and a pixel-level mapping relationship can be established. This mapping relationship characterizes the difference between the plain state and the made-up state, that is, it is equivalent to the custom beauty parameters.
Optionally, in the embodiment of the present application, the electronic device may first detect the face key points of the plain face image or the makeup-carrying face image, divide the face into a plurality of regions based on the key points, then extract the color information of each region, and finally establish a pixel-level correspondence between the corresponding face regions of the plain face image and the makeup-carrying face image and generate the LUT, so as to improve the speed of obtaining the custom beauty parameters.
Further optionally, in the embodiment of the present application, the general implementation idea is as follows: first, the first type image and the second type image are divided into regions, and then the pixel differences of each pair of corresponding face regions are compared to obtain the custom beauty parameters. As shown in fig. 18, step 404 may also be implemented by the following steps 1801 to 1802.
Step 1801, the electronic device divides the face image of the plain face into at least one first face region through a preset face region division algorithm, and divides the face image of the makeup face into at least one second face region through the preset face region division algorithm.
In step 1802, the electronic device obtains the custom beauty parameters according to the region type of the first face region and the second face region corresponding to the first face region.
In the embodiment of the application, the face image with the face is a first type image or a second type image, and the face image with the makeup is a second type image or a first type image. The first face area corresponds to the second face area one by one, and the area types comprise uniform color area types and color difference area types. It can be understood that, because the face image of the plain face and the face image of the makeup face both conform to the boundary limit of the face frame, the size of the image as a whole, the face position and the facial feature position in the face are similar, so that the first face region and the second face region divided according to the same preset face region division algorithm correspond one by one.
Further optionally, in the embodiment of the present application, in step 1801, dividing the plain face image into at least one first face region through the preset face region segmentation algorithm is specifically implemented as follows: training an active shape model (Active Shape Model, ASM) according to preset samples, where the preset samples are face images annotated with facial feature points; inputting the plain face image into the trained ASM to acquire first search feature points of the plain face image; and constructing shape vectors of the face regions according to the first search feature points to obtain at least one first face region.
In the embodiment of the present application, the ASM is a method for extracting shapes based on a point distribution model (Point Distribution Model, PDM). For objects with similar shapes, such as a face, a hand, a heart, or a lung, the coordinates of several key feature points can be connected in sequence to form a shape vector. In the ASM training process, training samples are collected, the facial feature points are annotated, shape vectors are constructed, all face images are normalized and aligned, and local features are constructed for each feature point, thereby completing the ASM training. After training is completed, feature points can be searched in the face of the plain face image according to the trained ASM to obtain a region segmentation result based on the facial feature points. In this way, the first face regions constructed from the first search feature points obtained by the ASM have a high similarity to the contours of the facial features and correspond to the actual makeup regions, so the accuracy of judging the region type of each first face region can be improved, and the satisfaction with the custom beauty parameters can be further improved.
It can be understood that, similar to the above process of dividing the plain face image, in step 1801, dividing the makeup-carrying face image into at least one second face region through the preset face region division algorithm specifically includes: training an ASM according to preset samples, where the preset samples are face images annotated with facial feature points; inputting the makeup-carrying face image into the trained ASM to acquire search feature points of the makeup-carrying face image; and constructing shape vectors of the face regions according to these search feature points to obtain at least one second face region. The division process of the second face regions corresponding to the makeup-carrying face image is not described herein again.
In the embodiment of the present application, other dividing methods may be adopted for the dividing manner of the face area, which is not limited herein.
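For illustration only, the region division step can be sketched as follows, with dlib's pretrained 68-point landmark model standing in for the trained ASM search; this substitution, the model file path, and the region index ranges are assumptions of the sketch, since the embodiment itself trains an ASM on annotated samples.

# Sketch: locate 68 landmarks and group them into named face regions.
import dlib

detector = dlib.get_frontal_face_detector()
# Path to the pretrained landmark model is an assumption of this sketch.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# Landmark index ranges of the standard 68-point layout.
REGION_INDICES = {
    "jaw": range(0, 17),
    "nose": range(27, 36),
    "left_eye": range(36, 42),
    "right_eye": range(42, 48),
    "lips": range(48, 68),
}

def divide_face_regions(gray_image):
    # Return {region name: list of (x, y) landmark points} for the first face.
    faces = detector(gray_image)
    if not faces:
        return {}
    shape = predictor(gray_image, faces[0])
    points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    return {name: [points[i] for i in idx] for name, idx in REGION_INDICES.items()}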
In this way, the first face regions are classified according to makeup effect characteristics (the region types include the uniform color region type and the color difference region type), and the color mapping relationship between the plain face image and the makeup-carrying face image is then established according to the different region types, which can improve the efficiency of obtaining the custom beauty parameters.
Further optionally, in the embodiment of the present application, before obtaining the custom beauty parameters, the electronic device may further determine the region type of each of the at least one first face region. Specifically, the electronic device may determine a first face region whose image features are relatively uniform and smooth as a uniform color region type. The method for judging the region type specifically includes: acquiring a first color component mean value of the pixels within a first face region and a second color component mean value of the pixels of the whole plain face image, where the first color component mean value includes a first R component mean, a first G component mean, and a first B component mean, and the second color component mean value includes a second R component mean, a second G component mean, and a second B component mean; comparing the first color component mean value with the second color component mean value; determining that the region type of the first face region is the uniform color type when the average difference between the first color component mean value and the second color component mean value is smaller than a third preset threshold; and determining that the region type of the first face region is the color difference type when the average difference between the first color component mean value and the second color component mean value is greater than or equal to the third preset threshold.
It can be understood that the face area corresponding to the color uniformity type is uniform and flat, and the face area corresponding to the color difference type is rich in detail texture. The third preset threshold may be set to 10% by way of example.
In this way, since color is the main difference between the plain face image and the makeup-carrying face image, judging the region type of each first face region according to the color component mean values makes the pixel distribution characteristics within the same region type similar, which improves the accuracy of applying different mapping modes to different region types and increases the speed of obtaining the custom beauty parameters.
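A minimal sketch of the region type judgment is given below, assuming 8-bit RGB values and the 10% example threshold mentioned above.

# Sketch: compare the mean R/G/B of a face region against the mean R/G/B of
# the whole plain face image; a small average difference means a uniform
# color region.
import numpy as np

def region_type(region_pixels: np.ndarray, whole_image: np.ndarray,
                threshold: float = 0.10) -> str:
    # region_pixels: N x 3 array; whole_image: H x W x 3 array; values 0-255.
    region_mean = region_pixels.reshape(-1, 3).mean(axis=0)
    image_mean = whole_image.reshape(-1, 3).mean(axis=0)
    # Average relative difference over the three color components.
    diff = np.abs(region_mean - image_mean) / 255.0
    return "uniform_color" if diff.mean() < threshold else "color_difference"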
In one example, when the region type of the first face region is the uniform color type, the electronic device establishes the mapping relationship between the first face region and the corresponding second face region in a pixel-by-pixel mapping mode; when the region type of the first face region is the color difference type, the electronic device establishes the mapping relationship between the first face region and the corresponding second face region in a region-averaged overall pixel mapping mode; and the electronic device determines the parameters in the mapping relationship as the custom beauty parameters.
In another example, when the region type of the first face region is the color difference type, the electronic device divides the first face region into at least one first rectangular region of a preset region size, starting from a first pixel point in the first face region, and acquires the pixel average value of each first rectangular region; the electronic device divides the second face region into at least one second rectangular region of the preset region size, starting from a second pixel point in the second face region that corresponds to the first pixel point, and acquires the pixel average value of each second rectangular region; and the electronic device establishes the mapping relationship between the pixel average values of the first rectangular regions and the pixel average values of the corresponding second rectangular regions. The preset region size may, for example, be set to 10000 pixels.
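The region-averaged mapping for color difference regions can be sketched as follows. The 100 x 100 block corresponds to the 10000-pixel preset region size mentioned above, and the inputs are assumed to be aligned regions of equal size.

# Sketch: tile both regions into fixed-size blocks and map the mean color of
# each block of the plain region to the mean color of the corresponding block
# of the makeup-carrying region.
import numpy as np

def block_average_mapping(plain_region: np.ndarray, makeup_region: np.ndarray,
                          block: int = 100) -> dict:
    # Both inputs: H x W x 3 arrays of the same shape (aligned regions).
    mapping = {}
    h, w = plain_region.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            src = plain_region[y:y + block, x:x + block].reshape(-1, 3).mean(axis=0)
            dst = makeup_region[y:y + block, x:x + block].reshape(-1, 3).mean(axis=0)
            mapping[tuple(np.round(src).astype(int))] = tuple(np.round(dst).astype(int))
    return mapping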
It should be noted that the mapping relationships in the above examples are implemented by the LUT. In image processing, the LUT can be used to achieve filter-like effects: given a mapping relationship, an input color (R, G, B) is looked up through the LUT and mapped to a new color (R, G, B), completing one mapping operation. The LUT essentially replaces individual pixels, so that each pixel corresponds to a new color value.
It can be understood that, because the face image content in the plain face image is highly similar to that in the makeup-carrying face image, differing only in effect dimensions such as color and brightness, the colors of the first face region and of the corresponding second face region are extracted pixel by pixel and a pixel-level mapping relationship is established. This mapping relationship characterizes the difference between the plain state and the made-up state of the same face, that is, it is equivalent to the custom beauty parameters.
Illustratively, a target first face region containing the eye region is selected from the at least one first face region, together with the target second face region corresponding to it. The pixel value (R, G, B) of a target pixel point in the target first face region is extracted as original value 1, the pixel value (R, G, B) of the corresponding pixel point in the target second face region is extracted as output value 1, and so on, until the complete set of original values and output values of the eye region is acquired. All of this information is stored as an LUT representing the color mapping relationship, and this LUT is the custom beauty parameter for the eye region. When a face image captured by the camera includes an eye region, the LUT is applied to the eye region of the currently captured face image, and the mapping is performed according to the correspondence in the look-up table, thereby achieving the beautified effect for the eye region.
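For illustration, the pixel-by-pixel construction of such a region LUT can be sketched as follows; the inputs are assumed to be aligned arrays covering the same face region.

# Sketch: record (original RGB -> output RGB) for each corresponding pixel
# pair; the resulting dictionary plays the role of the region LUT.
import numpy as np

def build_region_lut(plain_region: np.ndarray, makeup_region: np.ndarray) -> dict:
    # Both inputs: H x W x 3 arrays covering the same, aligned face region.
    lut = {}
    src = plain_region.reshape(-1, 3)
    dst = makeup_region.reshape(-1, 3)
    for original, output in zip(src, dst):
        lut[tuple(int(c) for c in original)] = tuple(int(c) for c in output)
    return lut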
Therefore, the user-defined beautifying parameters are determined by adopting different mapping modes according to different region types, and the speed of obtaining the user-defined beautifying parameters can be improved.
Optionally, in the embodiment of the present application, after obtaining the custom beauty parameters, the electronic device may detect whether custom beauty parameters already exist. If not, the custom beauty parameters are stored directly; if they exist, a prompt control is displayed to prompt the user to choose between replacing the existing custom beauty parameters and adding a new data item to store the custom beauty parameters. The electronic device then stores the custom beauty parameters in the corresponding storage mode in response to the user's operation of selecting a storage mode on the prompt control. In this way, multiple sets of custom beauty parameters can be saved to meet the user's need for diversified custom beauty parameters. It can be understood that, in order to balance system resources and user needs, an upper limit on the number of stored custom beauty parameter sets can be set; once the upper limit is reached, new custom beauty parameters can no longer be stored as additional items.
In the embodiment of the present application, because different users have different requirements for beauty processing and different daily looks, the first type image may be a plain face image, a makeup-carrying face image, or an imitation-makeup face image, and the second type image may be a plain face image, a makeup-carrying face image, or an imitation-makeup face image that is different from the first type image. If the first type image is a plain face image and the second type image is a makeup-carrying face image, processing a plain face image according to the custom beauty parameters yields a makeup-carrying face image. If the first type image is a plain face image and the second type image is an imitation-makeup face image, processing a plain face image according to the custom beauty parameters yields an imitation-makeup face image. If the first type image is a makeup-carrying face image and the second type image is an imitation-makeup face image, processing a makeup-carrying face image according to the custom beauty parameters yields an imitation-makeup face image.
According to the method for acquiring beauty parameters provided by the embodiment of the present application, the custom beauty parameters are obtained by responding to the first operation, the second operation, and the third operation in sequence, without repeatedly debugging the beauty parameters according to the beauty effect, so the operation steps are simple and the time consumption is short. The makeup-carrying face image can be understood as the facial effect the user expects to achieve, so a face image shot in daily life and processed with the custom beauty parameters can achieve a beauty effect that satisfies the user.
Optionally, in the method for acquiring beauty parameters provided by the embodiment of the present application, the custom beauty parameters are applied in the beauty processing process, and the electronic device may: acquire a captured image from the gallery, and in response to the user's beauty operation on the captured image, obtain a beauty image processed from the captured image according to the custom beauty parameters.
In this way, the custom beauty parameters can also be applied to already captured images, which expands the application scope of the custom beauty parameters and improves their utilization.
Optionally, in the method for obtaining the beauty parameters provided by the embodiment of the present application, the custom beauty parameters are applied in the beauty treatment process, and the electronic device may further: and acquiring a first image, and processing the first image according to the user-defined beautifying parameters to obtain a second image.
It should be understood that after the camera acquires the first image, a second image is obtained in response to an image acquisition operation of the user on the first image, where the second image is an image obtained by processing the first image according to the custom beauty parameters. The second image obtained may be stored directly, or applied.
In this way, the acquired first image is beautified in the process of obtaining the second image, that is, the first image is processed according to the custom beauty parameters, which improves the speed of displaying the beautified image in response to the user's operation.
Further optionally, in the method for acquiring beauty parameters provided by the embodiment of the present application, the custom beauty parameters are applied in the beauty processing process, and before obtaining the second image, the electronic device may further: display the second image on the photographed preview interface.
Therefore, the second image is displayed through the shot preview interface, namely, the second image obtained by processing the first image according to the user-defined beautifying parameters is displayed, and the user can observe the beautifying effect image (the second image) through the preview interface in real time, so that the shooting speed of the user is improved, and further, the system resources are saved.
Further optionally, in the method for acquiring beauty parameters provided by the embodiment of the present application, the custom beauty parameters are applied in the beauty processing process, and before obtaining the second image, the electronic device may further: display the first image on the photographed preview interface, where the photographed preview interface includes a custom beauty application control, and then, in response to the user's fourth operation on the custom beauty application control, display the second image on the photographed preview interface, where the fourth operation is used to trigger enabling of the beauty effect.
In the embodiment of the present application, the fourth operation may be a key trigger operation input through mechanical keys (for example, simultaneously pressing the volume-up key and the volume-down key), a screen trigger operation of a preset operation or a preset gesture input through the touch screen, an image recognition trigger operation of a specific image captured by the camera, a voice trigger operation of a preset voice received through the audio module, or a wireless trigger operation sent by another electronic device.
In the embodiment of the application, the face image in the first image and the face image contained in the first type image or the second type image are the same face image. Assuming that an image acquired by a camera of the electronic device is a first image as shown in (a) of fig. 19, processing the first image according to the custom beauty parameters to obtain a second image as shown in (b) of fig. 19.
It should be noted that, after obtaining the custom beauty parameters, the electronic device may by default set the switch on the right side of the "custom beauty effect" control in the effect selection interface to the on state, and display on the photographed preview interface the second image obtained by processing the first image according to the custom beauty parameters.
It should be further noted that, after obtaining the custom beautifying parameters, the electronic device may default to adjust the switch on the right side of the "custom beautifying effect" control in the effect selection interface to the off state. The electronic equipment displays the acquired first image on a shot preview interface; the electronic equipment responds to a fourth operation of a user and displays a second image obtained by processing the first image according to the user-defined beautifying parameters; the fourth operation is used for triggering and starting the beautifying effect.
In this way, by displaying the second image on the photographed preview interface in response to the user's fourth operation on the custom beauty application control, the user can choose, according to the immediate need, to display the unbeautified first image or the second image obtained by processing the first image according to the custom beauty parameters, which meets the user's diversified display needs for the photographed preview interface. The user can observe the beauty effect image (the second image) through the preview interface in real time, which increases the user's shooting speed and further saves system resources such as battery power, the remaining number of available presses, and the number of system processes.
Further optionally, the method for acquiring beauty parameters provided by the embodiment of the present application further includes: the electronic device sends out beauty prompt information, which is used to prompt that the second image is an image obtained by processing the first image according to the custom beauty parameters.
The beauty prompt information may be text prompt information, image prompt information, or voice prompt information. It should be understood that, similar to other prompt information, the beauty prompt information may be text, graphics, or sound, and is output using the display screen or the speaker as the carrier, that is, the beauty prompt information is displayed or played.
For example, as shown in fig. 20, while the second image is displayed on the photographed preview interface, the "beautified" beautification prompt information is displayed on the top of the second image.
Therefore, by sending out the beautifying prompt information, the second image is prompted to be the beautifying image after the beautifying treatment so as to facilitate the user to determine whether to reprocess the second image.
Further optionally, in an embodiment of the present application, after displaying the second image, a method for obtaining a beauty parameter provided in the embodiment of the present application further includes: the electronic device displays an image editing interface in response to a fifth operation on the second image, the image editing interface including one or more controls for editing the second image. And the second image can be reprocessed through one or more controls in the image editing interface so as to meet the diversified editing requirements of users on the image.
In the embodiment of the present application, the one or more controls may be used to adjust the hue, environment enhancement, blurring, rotation, cropping, shooting parameters, filter, matting, doodling, mosaic, special effects, text addition, and the like of the second image.
Wherein the fifth operation is used for editing the second image. In the embodiment of the present application, the fifth operation may be a key trigger operation input through mechanical keys (for example, simultaneously pressing the volume-up key and the volume-down key), a screen trigger operation of a preset operation or a preset gesture input through the touch screen, an image recognition trigger operation of a specific image captured by the camera, a voice trigger operation of a preset voice received through the audio module, or a wireless trigger operation sent by another electronic device.
In some embodiments, as shown in fig. 21 (a), the image editing interface may be configured to display one or more controls on the basis of a photographed preview interface. The one or more controls include a hue control, an environment enhancement control, a blurring control, and the like.
In other embodiments, as shown in fig. 21 (b), the image editing interface is configured to display the second image and one or more controls after the second image is acquired. The one or more controls include a hue control, an environment enhancement control, a blurring control, and the like.
It should be noted that the one or more controls include a beauty degree control bar, which is used to adjust the strength with which the custom beauty parameters are applied to the second image. After displaying the image editing interface, the method for acquiring beauty parameters provided by the embodiment of the present application further includes: the electronic device displays a third image in response to the user's adjustment operation on the beauty degree control bar, where the third image is an image obtained by processing the second image according to the custom beauty parameters at the beauty strength corresponding to the adjusted beauty degree control bar.
Illustratively, the beauty processing strength can be changed by changing the marker position of the beauty degree control bar shown in fig. 22 through a drag operation or a click operation. Taking lip color as an example, the lip color in the plain face image is light pink and the lip color in the makeup-carrying face image is bright red, that is, the lip color in the acquired first image is light pink and the lip color in the second image is bright red. Because the bright red is rather striking and may not match the hair color in the second image, the user's adjustment operation on the beauty degree control bar may be received and the marker position adjusted to the middle, that is, the beauty strength corresponding to the beauty degree control bar is 50%. Assuming that the RGB of the lip pixel in the first image is (255, 192, 203) and the RGB of the lip pixel in the second image is (255, 0, 0), the RGB of the lip pixel in the third image is (255, 96, 101).
Therefore, the implementation strength of the custom beauty parameters is adjusted through the adjustment operation of the beauty degree control bar, so that the probability that the edited third image meets the diversified editing requirements is improved.
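A minimal sketch of this strength adjustment is given below, treating the third-image pixel as a blend of the first-image pixel and the fully beautified second-image pixel; truncating the fractional part reproduces the (255, 96, 101) lip example above.

# Sketch: linear blend between the original pixel and the fully beautified
# pixel; strength 1.0 reproduces the full custom beauty effect.
def blend_pixel(original, beautified, strength):
    return tuple(
        int(o + (b - o) * strength)  # truncate, matching the example above
        for o, b in zip(original, beautified)
    )

# Example: blend_pixel((255, 192, 203), (255, 0, 0), 0.5) -> (255, 96, 101)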
In one example, if the second image is an image displayed at the photographed preview interface, the electronic device may display the third image at the photographed preview interface or obtain the third image after the image editing interface displays the third image in response to the user's operation.
In another example, if the second image is a captured image, the electronic device may capture the third image after the image editing interface displays the third image in response to a user operation.
Therefore, the second image can be reprocessed through the editing effect control in the image editing interface so as to meet the diversified editing requirements of users on the image effect.
Further optionally, in an embodiment of the present application, the electronic device may set the first image to be an image acquired by the front camera. When the user enables the front camera, the electronic device automatically applies the custom beauty parameters to perform beauty processing on the captured preview image, where the preview image contains the face image included in the first type image or the second type image.
Further optionally, in an embodiment of the present application, the electronic device may further set the first image as an image acquired by the rear camera. Because the rear camera is generally used for shooting scenes or group images, in the case that a user starts the rear camera, the electronic device sets whether to apply custom beauty parameters to perform beauty treatment on the captured preview image according to user operation, wherein the preview image comprises a face image. It can be understood that, in order to ensure the adaptability of the custom beauty parameters and the face image, the electronic device may further set to acquire a target partial image in the preview image, and perform the beauty treatment on the target partial image according to the custom beauty parameters, where the target partial image is a partial image of the face image in the preview image including the first type image or the second type image.
Alternatively, in the embodiment of the present application, as shown in fig. 23, the above step 404 may also be implemented by the following steps 2301 and 2302.
In step 2301, when it is detected that the first type image and the second type image need to be subjected to a concealing process, the electronic device performs the concealing process on the first type image and the second type image according to a preset concealing algorithm, and updates the first type image and the second type image.
And 2302, the electronic device compares the pixel difference between the updated first type image and the updated second type image to obtain the custom beauty parameters.
Taking the first type image as an example, the electronic device may divide the first type image into a skin area and a facial features area, then determine whether a blemish exists in the skin area of the first type image and locate the blemish, and finally perform blemish-removal processing according to the blemish location.
It should be noted that, similarly, for the second type image the electronic device may divide out the skin area, determine whether a blemish exists in the skin area of the second type image and locate it, and finally perform blemish-removal processing according to the blemish location.
It can be understood that, when performing blemish-removal processing on the second type image, the electronic device may also determine, according to the blemish locations in the skin area of the first type image, whether blemishes exist at the corresponding locations in the second type image, and if so, perform blemish-removal processing on them.
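As an illustration, the blemish-removal step can be sketched with OpenCV inpainting standing in for the preset concealing algorithm; this substitution is an assumption of the sketch, and the blemish mask is assumed to come from whatever blemish detection is used.

# Sketch: fill the masked blemish pixels in the skin area from their
# surroundings using OpenCV's inpainting.
import cv2
import numpy as np

def remove_blemishes(image: np.ndarray, blemish_mask: np.ndarray) -> np.ndarray:
    # image: H x W x 3 BGR array; blemish_mask: H x W uint8, 255 at blemishes.
    return cv2.inpaint(image, blemish_mask, 3, cv2.INPAINT_TELEA)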
Further optionally, in the embodiment of the present application, the blemishes that the electronic device can detect may be skin imperfections such as acne marks, moles, or wrinkles, which are effects that could also be covered by makeup. Therefore, in order to obtain the user's actual blemish-removal intention, the electronic device may be configured to determine, after the first type image and the second type image are obtained and in response to a blemish-removal selection operation input by the user, whether blemish removal is to be performed, and then detect whether the first type image and the second type image need blemish-removal processing.
It can be understood that, for the choice of whether to perform blemish-removal processing, the electronic device may further record the blemish-removal selection operation instruction input by the user and use it as the default processing mode, so as to reduce the steps required for obtaining the custom beauty parameters and improve the efficiency of obtaining them. Alternatively, a default processing mode (either performing or not performing blemish removal) may be preset for this choice, and if a blemish-removal selection operation by the user is subsequently received, that selection is recorded as the new default processing mode.
The embodiment of the application provides electronic equipment, which can comprise: one or more cameras, a processor, a display screen, memory and a communication module; the camera, the display screen, the memory, the communication module and the processor are coupled; wherein the memory is for storing computer program code comprising computer instructions which, when executed by the electronic device, cause the electronic device to perform the method of obtaining a cosmetic parameter as in any of the first aspects. The electronic device, when executing computer instructions, can perform the functions or steps of the method embodiments described above. The structure of the electronic device may refer to the structure of the electronic device 200 shown in fig. 2.
Embodiments of the present application also provide a computer storage medium including computer instructions which, when executed on an electronic device, cause the electronic device to perform the functions or steps of the method embodiments described above.
Embodiments of the present application also provide a computer program product which, when run on a computer, causes the computer to perform the functions or steps of the method embodiments described above.
The embodiment of the present application also provides a chip, which includes a processor and a communication interface, where the communication interface is used to communicate with a communication module connected to the chip, and the processor is used to run a computer program or instructions to perform the functions or steps in the above method embodiments.
It will be apparent to those skilled in the art from this description that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is merely illustrative of specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A method for obtaining a cosmetic parameter, comprising:
receiving a first operation of a user, wherein the first operation is used for triggering acquisition of custom beauty parameters;
prompting the user to shoot a first type image in response to the first operation, and acquiring the first type image in response to a second operation of the user;
prompting the user to shoot a second type image, and acquiring the second type image in response to a third operation of the user, wherein the second type image and the first type image comprise the same face image, and the second type image is different from the first type image;
and obtaining custom beauty parameters according to the first type image and the second type image.
2. The method according to claim 1, wherein the method further comprises:
acquiring a first image, and processing the first image according to the custom beauty parameters to obtain a second image.
3. The method of claim 2, wherein before processing the first image according to the custom beauty parameters to obtain a second image, the method further comprises:
displaying the second image on a shooting preview interface.
4. The method of claim 2, wherein before processing the first image according to the custom beauty parameters to obtain a second image, the method further comprises:
displaying the first image on a shooting preview interface, wherein the shooting preview interface comprises a custom beauty application control;
and in response to a fourth operation of the user on the custom beauty application control, displaying the second image on the shooting preview interface, wherein the fourth operation is used for triggering enabling of a beauty effect.
5. The method according to claim 3 or 4, characterized in that the method further comprises:
sending out beauty prompt information, wherein the beauty prompt information is used for prompting that the second image is an image obtained by processing the first image according to the custom beauty parameters.
6. The method according to claim 3 or 4, characterized in that the method further comprises:
in response to a fifth operation of the user on the second image, an image editing interface is displayed, the image editing interface including one or more controls for editing the second image.
7. The method of claim 6, wherein the one or more controls include a beauty degree control bar for adjusting the strength with which the second image is beautified using the custom beauty parameters;
displaying a third image in response to an adjustment operation of the user on the beauty degree control bar;
wherein the third image is an image obtained by processing the second image according to the custom beauty parameters at the beauty processing strength corresponding to the adjusted beauty degree control bar.
8. The method of any one of claims 1-7, wherein the acquiring the second type image in response to a third operation of the user comprises:
acquiring the second type image in response to a third operation of the user under the condition that a shooting trigger condition is met;
wherein the shooting trigger condition includes at least one of the following: a background similarity between the second type image and the first type image is greater than a first preset threshold, an ambient light similarity between the second type image and the first type image is greater than a second preset threshold, and the shooting parameters of the second type image are the same as those of the first type image.
9. The method according to any one of claims 1-8, wherein before the receiving of the first operation of the user, the method further comprises:
displaying an effect selection interface, wherein the effect selection interface comprises a custom beauty control;
the first operation is a click operation performed by the user on the custom beauty control.
10. The method of claim 1, wherein the first type image is a plain face image and the second type image is a face image with makeup that includes the same face image as the first type image; or the first type image is a face image with makeup, and the second type image is a plain face image that includes the same face image as the first type image.
11. The method of claim 10, wherein the obtaining custom beauty parameters from the first type of image and the second type of image comprises:
dividing the plain face image into at least one first face region through a preset face region segmentation algorithm, and dividing the face image with makeup into at least one second face region through the preset face region segmentation algorithm;
and comparing pixel differences between the first face region and a second face region corresponding to the first face region according to the region type of the first face region, to obtain the custom beauty parameters, wherein the region type comprises a uniform color region type and a color difference region type.
12. An electronic device, the electronic device comprising: one or more cameras, one or more processors, a display screen, a memory, and a communication module; the camera, the display screen, the memory, the communication module and the processor are coupled;
wherein the memory is for storing computer program code comprising computer instructions which, when executed by the electronic device, cause the electronic device to perform the method of any of claims 1-11.
13. A computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any of claims 1-11.
14. A computer program product, characterized in that the computer program product, when run on a computer, causes the computer to perform the method according to any of claims 1-11.
15. A chip comprising a processor and a communication interface for communicating with a module external to the chip, the processor for executing a computer program or instructions to implement the method of any of claims 1-11.
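For readability only, the following Python sketch (not part of the patent text) illustrates one possible realization of the shooting trigger check of claim 8, the region-based comparison of claim 11, and the strength adjustment of claim 7; the segmentation stand-in, region names, thresholds, and derived parameters are all assumptions.

```python
# Hedged sketch of claims 7, 8 and 11; all helpers, thresholds and parameter
# names are hypothetical and not taken from the patent.
import numpy as np


def backgrounds_similar(img_a: np.ndarray, img_b: np.ndarray, thr: float = 0.8) -> bool:
    """Crude background-similarity check via histogram overlap (claim 8 condition)."""
    hist_a, _ = np.histogram(img_a, bins=64, range=(0, 255))
    hist_b, _ = np.histogram(img_b, bins=64, range=(0, 255))
    overlap = np.minimum(hist_a, hist_b).sum() / max(hist_a.sum(), 1)
    return float(overlap) > thr


def segment_face_regions(img: np.ndarray) -> dict:
    """Stand-in for the preset face-region segmentation algorithm of claim 11.
    Uses fixed horizontal bands on an aligned, cropped face; a real system would
    use a face-parsing model. Returns {region name: boolean mask}."""
    h, w = img.shape[:2]
    masks = {name: np.zeros((h, w), dtype=bool)
             for name in ("forehead", "eyes", "cheek", "lips")}
    masks["forehead"][: h // 4, :] = True
    masks["eyes"][h // 4: h // 2, :] = True
    masks["cheek"][h // 2: 3 * h // 4, :] = True
    masks["lips"][3 * h // 4:, :] = True
    return masks


# Assumed mapping of regions to the two region types named in claim 11.
REGION_TYPES = {"forehead": "uniform_color", "cheek": "uniform_color",
                "eyes": "color_difference", "lips": "color_difference"}


def custom_beauty_parameters(plain: np.ndarray, made_up: np.ndarray) -> dict:
    """Derive one illustrative parameter per region from the pixel difference
    between corresponding regions of the plain-face and made-up images
    (claim 11, simplified; the two images are assumed to be spatially aligned)."""
    params = {}
    for name, mask in segment_face_regions(plain).items():
        diff = made_up[mask].astype(np.float32) - plain[mask].astype(np.float32)
        if REGION_TYPES[name] == "uniform_color":
            params[name] = float(diff.mean())            # e.g. overall brightening
        else:
            params[name] = float(np.abs(diff).mean())    # e.g. color-shift strength
    return params


def apply_beauty(image: np.ndarray, params: dict, strength: float = 1.0) -> np.ndarray:
    """Toy application of the derived parameters at an adjustable strength
    (cf. the beauty degree control bar of claim 7): uniform-color regions are
    offset by their per-region parameter scaled by `strength`."""
    out = image.astype(np.float32).copy()
    for name, mask in segment_face_regions(image).items():
        if REGION_TYPES[name] == "uniform_color":
            out[mask] += strength * params[name]
    return np.clip(out, 0, 255).astype(np.uint8)
```

In this sketch, backgrounds_similar would gate acquisition of the second type image, custom_beauty_parameters would be computed once both images are available, and apply_beauty would be driven by the value of the beauty degree control bar.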
CN202210577515.2A 2022-05-25 2022-05-25 Method, device, electronic equipment and medium for acquiring beauty parameters Pending CN117201954A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210577515.2A CN117201954A (en) 2022-05-25 2022-05-25 Method, device, electronic equipment and medium for acquiring beauty parameters

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210577515.2A CN117201954A (en) 2022-05-25 2022-05-25 Method, device, electronic equipment and medium for acquiring beauty parameters

Publications (1)

Publication Number Publication Date
CN117201954A (en) 2023-12-08

Family

ID=88992859

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210577515.2A Pending CN117201954A (en) 2022-05-25 2022-05-25 Method, device, electronic equipment and medium for acquiring beauty parameters

Country Status (1)

Country Link
CN (1) CN117201954A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination