CN106056533A - Photographing method and terminal - Google Patents
- Publication number
- CN106056533A (application CN201610362789.4A)
- Authority
- CN
- China
- Prior art keywords
- expression features
- feature data
- beautification parameter
- expression
- facial image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
Embodiments of the invention provide a photographing method and a terminal. The photographing method comprises the following steps: obtaining a face image previewed by a camera of a terminal; obtaining first expression feature data according to the face image; obtaining, according to the obtained first expression feature data, a beautification parameter corresponding to the first expression feature data; and taking a photo according to the beautification parameter. This solves the prior-art problem that beautification effects are not rich enough when taking photos, and improves the user experience.
Description
Technical field
The present invention relates to the field of terminals, and in particular to a photographing method and a terminal.
Background technology
With the development of intelligent terminals, more and more intelligent terminals can be used for taking photos, such as mobile phones, digital cameras and tablet computers. To help users take satisfactory photos with these terminals, more and more manufacturers build beautification functions into their devices, and application markets offer a growing number of beautification applications. In the prior art, whether for a built-in beautification function or a beautification application, a beautification effect is loaded automatically by recognizing a face, by recognizing the background light intensity, or by recognizing the skin color of the face.
However, the existing beautification effects are not rich enough and for the most part cannot be customized intelligently, so the user experience is poor.
Summary of the invention
It is an object of the present invention to provide a photographing method and a terminal, so as to solve the prior-art problems that beautification effects are not rich enough when taking photos and that the user experience is poor.
To solve the above technical problem, in a first aspect the invention provides a photographing method, the method comprising:
obtaining a face image previewed by a camera of a terminal;
obtaining first expression feature data according to the face image;
obtaining, according to the obtained first expression feature data, a beautification parameter corresponding to the first expression feature data;
taking a photo according to the beautification parameter.
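The four claimed steps can be pictured in code. The following Python fragment is an illustrative sketch only, with stub implementations standing in for the camera preview, the expression classifier and the parameter lookup; every function name, dictionary key and preset value here is an assumption for illustration and is not specified by the patent.

```python
def extract_expression_features(face_image):
    """Step 2 (stub): derive first expression feature data from the face image.
    A real terminal would run an expression classifier here."""
    return face_image.get("expression", "neutral")

def get_beautification_params(features):
    """Step 3 (stub): map expression feature data to a beautification parameter."""
    presets = {"laugh": {"exposure_ev": +0.5}, "cry": {"exposure_ev": -0.5}}
    return presets.get(features, {})

def photograph(preview_face_image):
    """Steps 1-4 chained: preview image -> features -> parameters -> capture."""
    features = extract_expression_features(preview_face_image)      # step 2
    params = get_beautification_params(features)                    # step 3
    return {"image": preview_face_image, "beautification": params}  # step 4

# Step 1's preview image is represented here as a plain dict.
photo = photograph({"expression": "laugh"})
```

A laughing preview image thus yields a raised-exposure parameter set, while an unrecognized expression yields an empty one.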
In a second aspect, the invention provides a terminal, the terminal comprising:
a first acquisition module, configured to obtain a face image previewed by a camera of the terminal;
a second acquisition module, configured to obtain first expression feature data according to the face image obtained by the first acquisition module;
a third acquisition module, configured to obtain, according to the first expression feature data obtained by the second acquisition module, a beautification parameter corresponding to the first expression feature data;
a photographing module, configured to take a photo according to the beautification parameter obtained by the third acquisition module.
Thus, in the embodiments of the present invention the terminal obtains a face image previewed by its camera, obtains first expression feature data according to the face image, obtains a beautification parameter corresponding to the first expression feature data, and finally takes a photo according to that parameter. The terminal can therefore select a beautification effect matching the user's facial expression, so that the user can capture effect pictures for a variety of expressions. This solves the prior-art problem that beautification effects are not rich enough when taking photos, and improves the user experience.
Brief description of the drawings
Fig. 1 is a flow chart of the steps of the photographing method in the first embodiment of the present invention;
Fig. 2 is a flow chart of the steps of the photographing method in the second embodiment of the present invention;
Fig. 3 is a first structural block diagram of the terminal in the third embodiment of the present invention;
Fig. 4 is a second structural block diagram of the terminal in the third embodiment of the present invention;
Fig. 5 is a structural block diagram of the terminal in the fourth embodiment of the present invention;
Fig. 6 is a structural block diagram of the terminal in the fifth embodiment of the present invention.
Detailed description of the invention
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the disclosure will be understood more thoroughly and its scope can be conveyed completely to those skilled in the art.
First embodiment:
As shown in Fig. 1, the photographing method in the first embodiment of the present invention includes the following steps:
Step 101: obtain a face image previewed by the camera of the terminal.
Specifically, when a photographing application on the terminal is opened, the user's face image appears in the preview area of the terminal screen; at this point the face image previewed by the terminal camera is obtained.
Step 102: obtain first expression feature data according to the face image.
In this step the terminal obtains first expression feature data according to the obtained face image; in particular, the terminal may obtain the first expression feature data periodically. The first expression feature data is explained as follows: when taking a photo the user may make various expressions, such as a smile, a laugh, a crying expression or a surprised expression, and the feature data corresponding to each of these expressions can serve as the first expression feature data.
Step 103: obtain, according to the obtained first expression feature data, a beautification parameter corresponding to the first expression feature data.
Specifically, the terminal obtains the beautification parameter that corresponds to the obtained first expression feature data. The beautification parameter may include the skin color of the face image, the exposure used when taking the photo, or parameters for facial regions such as the face shape, eyes and mouth. For example, when the first expression feature data corresponds to a first expression, the obtained skin brightness may be greater than or equal to a predetermined brightness; when it corresponds to a second expression, the skin brightness may be less than that predetermined brightness. Similarly, when the first expression feature data corresponds to the first expression, the obtained exposure may be greater than or equal to a preset exposure, and when it corresponds to the second expression the exposure may be less than that preset exposure. This is illustrated below with concrete facial expressions.
For example, when the first expression feature data corresponds to a laughing expression, the beautification parameter may enlarge the mouth in the face image, make the lip color more ruddy, narrow the eyes, make the face shape rounder and smoother, adjust the skin brightness to be more attractive, and raise the exposure of the photographing device in the terminal. As another example, when the first expression feature data corresponds to a crying expression, the mouth of the face image may be curved downward, tears may be synthesized below the eyes, the skin brightness may be made more somber, and the exposure of the photographing device may be lowered so that the overall tone of the photo is darker. As a further example, when the expression is surprised, the eyes of the face image may be made brighter, the preview area on the terminal screen may be enlarged, and the face image may be moved to the center of the preview area.
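The per-expression adjustments described above can be pictured as a lookup table. The encoding below is hypothetical: the parameter names and values are invented for illustration and are not specified by the patent.

```python
# Illustrative per-expression beautification presets (all names/values invented).
BEAUTIFICATION_PRESETS = {
    "laugh": {
        "mouth_scale": 1.1,       # enlarge the mouth slightly
        "lip_redness": 1.2,       # make the lips more ruddy
        "eye_scale": 0.95,        # narrow the eyes
        "skin_brightness": 1.15,  # brighten the skin tone
        "exposure_ev": +0.5,      # raise the camera exposure
    },
    "cry": {
        "mouth_curve": -0.3,      # curve the mouth downward
        "tear_overlay": True,     # composite tears below the eyes
        "skin_brightness": 0.85,  # make the skin tone more somber
        "exposure_ev": -0.5,      # lower the exposure for a darker tone
    },
    "surprise": {
        "eye_brightness": 1.2,    # make the eyes stand out
        "preview_zoom": 1.3,      # enlarge the preview area
        "recenter_face": True,    # move the face to the preview center
    },
}

def lookup_beautification(expression: str) -> dict:
    """Return the preset for a recognized expression, or an empty dict."""
    return BEAUTIFICATION_PRESETS.get(expression, {})
```

In this sketch an unrecognized expression simply returns an empty parameter set, leaving the image unmodified.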
Step 104: take a photo according to the beautification parameter.
Specifically, after the beautification parameter corresponding to the first expression feature data has been obtained, the beautification effect corresponding to that parameter may be loaded into the preview area of the terminal screen, and the photo is then taken according to that beautification parameter.
Thus, in this embodiment the terminal obtains a face image previewed by its camera, obtains first expression feature data from the face image, obtains the corresponding beautification parameter, and finally takes a photo according to that parameter, so that a beautification parameter matching the user's facial expression can be selected and effect pictures for a variety of expressions can be captured. This solves the prior-art problem that beautification effects are not rich enough when taking photos, makes the beautification effects richer, and improves the user experience.
Second embodiment:
The first embodiment may be implemented by the terminal alone or by the terminal in combination with a server.
Specifically, as shown in Fig. 2, the photographing method in the second embodiment of the present invention includes:
Step 201: obtain a face image previewed by the camera of the terminal.
Step 202: the terminal either extracts the first expression feature data directly from the face image, or periodically uploads the face image to a server and receives from the server the first expression feature data obtained from the face image.
In other words, the terminal can obtain the first expression feature data locally or over the network. When obtaining it locally, the terminal extracts the first expression feature data directly from the face image; when obtaining it over the network, the terminal periodically uploads the face image to the server and receives the first expression feature data that the server derives from it.
Step 203: obtain the beautification parameter corresponding to the first expression feature data according to the obtained first expression feature data, a preset expression feature database and a beautification parameter database.
Specifically, the terminal is preset with an expression feature database and a beautification parameter database, and the preset expression feature data in the former corresponds to the beautification parameters in the latter. The terminal can therefore obtain the beautification parameter corresponding to the first expression feature data locally: it first matches the first expression feature data against the preset expression feature data in the expression feature database to find the preset expression feature data that matches; it then uses the correspondence between the preset expression feature data and the preset beautification parameters to obtain the preset beautification parameter corresponding to the matched data, that parameter being the beautification parameter corresponding to the first expression feature data.
Of course, to guarantee the accuracy and richness of the beautification parameter obtained, the terminal can also combine local lookup with the network. In that case, while matching the first expression feature data against the preset expression feature data in the expression feature database, the terminal first computes the matching degree between them. If, after the best-matching preset expression feature data has been found, the matching degree is greater than or equal to a predetermined threshold, the preset expression feature database can satisfy the user's demand, and the beautification parameter corresponding to the first expression feature data is obtained locally from the correspondence between the preset expression feature data and the preset beautification parameters. If the matching degree is less than the threshold, the preset expression feature database contains no expression feature data close to that of the face image and cannot satisfy the user's demand; the terminal then obtains the beautification parameter over the network, i.e. it uploads the face image to the server and receives from the server the first expression feature data derived from the face image together with the corresponding beautification parameter. In this way the beautification effect the user needs can be achieved to the greatest extent, improving the user experience.
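The local-match-or-network-fallback logic of this step can be sketched as follows. The similarity measure, the threshold value and all function names are assumptions; the patent only requires some matching degree compared against a predetermined threshold, with a server fallback below it.

```python
MATCH_THRESHOLD = 0.8  # hypothetical predetermined threshold

def match_degree(features_a, features_b):
    """Toy matching degree: fraction of feature entries on which the
    two feature dicts agree."""
    keys = set(features_a) | set(features_b)
    if not keys:
        return 0.0
    agree = sum(1 for k in keys if features_a.get(k) == features_b.get(k))
    return agree / len(keys)

def get_params(first_features, expression_db, param_db, upload_to_server):
    """Return a beautification parameter set: locally when a preset matches
    well enough, otherwise via the server callback."""
    best_key, best_score = None, 0.0
    for key, preset_features in expression_db.items():
        score = match_degree(first_features, preset_features)
        if score > best_score:
            best_key, best_score = key, score
    if best_score >= MATCH_THRESHOLD:
        return param_db[best_key]  # local database satisfies the demand
    return upload_to_server()      # fall back to the network
```

Here `upload_to_server` stands in for the upload-and-receive exchange with the server; in a real terminal it would return the server-derived parameter set.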
Step 204: take a photo according to the beautification parameter.
Thus, when the beautification parameter is obtained according to the first expression feature data, the parameter the user needs, or a real-time parameter, can be obtained; the user's demand can be met and more beautification parameters can be provided. The user can also preset expression feature data as desired, so as to actively select preferred beautification parameters, which improves the user experience.
In addition, the terminal can update the preset expression feature database and beautification parameter database, and can do so either locally or over the network.
For a local update, the terminal directly adds second expression feature data, extracted from a saved face image, to the expression feature database, and establishes a correspondence between the second expression feature data and a preset beautification parameter in the beautification parameter database. Because the second expression feature data is extracted from a face image the terminal has saved, the user has a certain preference for it, so the terminal can assign the second expression feature data a matching priority higher than that of the other preset expression feature data in the database. That is, when the terminal obtains first expression feature data, it matches it preferentially against the second expression feature data in the updated expression feature database; if the match succeeds, the preset beautification parameter corresponding to the second expression feature data is the parameter to be obtained.
For a network update, the terminal uploads the saved face image to the server, receives the second expression feature data the server derives from it, and establishes a correspondence between that data and a preset beautification parameter in the beautification parameter database. The terminal can also download from the server third expression feature data and the corresponding beautification parameter data, loading the third expression feature data into the expression feature database and the beautification parameter data into the beautification parameter database. After a network update, the matching priority of the second expression feature data can again be set higher than that of the other preset expression feature data. Optionally, a matching priority can also be set for the third expression feature data, so that the second expression feature data has a higher priority than the third, and the third expression feature data has a higher priority than the remaining preset expression feature data.
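The priority-ordered matching described above can be sketched as below. Representing priority as a numeric field and iterating in descending order is an assumption; the patent fixes only the relative order (second, then third, then the remaining preset expression feature data).

```python
def match_with_priority(first_features, entries, matches):
    """entries: list of (priority, preset_features, params); higher numbers
    are tried first. `matches` is any predicate deciding whether the first
    expression feature data matches a preset entry."""
    for _priority, preset_features, params in sorted(entries, key=lambda e: -e[0]):
        if matches(first_features, preset_features):
            return params  # first hit in priority order wins
    return None  # no entry matched
```

With user-derived entries at the highest priority, a face the user has photographed before is matched against the user's own presets before any downloaded or factory-preset data.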
In this embodiment the first expression feature data can be obtained locally or over the network, the corresponding beautification parameter can be obtained locally or by combining local lookup with the network, and the preset expression feature database and beautification parameter database can be updated locally, over the network, or both. The user can therefore obtain more beautification parameters when taking photos and can update the expression feature database according to personal preference, choosing the beautification effects he or she likes. This solves the prior-art problem that beautification effects are not rich enough when taking photos, enriches the beautification effects, and improves the user experience.
3rd embodiment:
As shown in Fig. 3, a first structural block diagram of the terminal in the third embodiment of the present invention, the terminal includes:
a first acquisition module 301, configured to obtain a face image previewed by the camera of the terminal;
a second acquisition module 302, configured to obtain first expression feature data according to the face image obtained by the first acquisition module 301;
a third acquisition module 303, configured to obtain, according to the first expression feature data obtained by the second acquisition module 302, the beautification parameter corresponding to the first expression feature data;
a photographing module 304, configured to take a photo according to the beautification parameter obtained by the third acquisition module 303.
Thus, in this embodiment the first acquisition module 301 obtains the face image previewed by the terminal camera; the second acquisition module 302 obtains the first expression feature data from that image; the third acquisition module 303 obtains the corresponding beautification parameter; and the photographing module 304 takes a photo according to it. The terminal can therefore select a beautification parameter matching the user's facial expression, so that effect pictures of various expressions can be taken. This solves the prior-art problem that beautification effects are not rich enough when taking photos, makes the beautification effects richer, and improves the user experience.
Specifically, Fig. 4 is a second structural block diagram of the terminal in the third embodiment of the present invention.
Optionally, the second acquisition module 302 includes: a first acquiring unit 3021, configured to extract the first expression feature data directly from the face image; or a second acquiring unit 3022, configured to periodically upload the face image to a server and to receive the first expression feature data the server derives from it.
Optionally, the third acquisition module 303 is configured to obtain the beautification parameter corresponding to the first expression feature data according to the obtained first expression feature data, a preset expression feature database and a beautification parameter database.
Optionally, the third acquisition module 303 includes: a matching unit 3031, configured to match the first expression feature data against the preset expression feature data in the expression feature database and obtain the preset expression feature data that matches; and a third acquiring unit 3032, configured to obtain, from the correspondence between the preset expression feature data in the expression feature database and the preset beautification parameters in the beautification parameter database, the preset beautification parameter corresponding to the matched data, that parameter being the beautification parameter corresponding to the first expression feature data.
Optionally, the matching unit 3031 is further configured to obtain the matching degree between the first expression feature data and the preset expression feature data and, after the matching preset expression feature data has been found, to trigger the third acquiring unit 3032 when the matching degree is greater than or equal to a predetermined threshold. The third acquisition module 303 further includes a transceiver unit 3033, configured to upload the face image to the server when the matching degree is below the threshold, and to receive from the server the first expression feature data derived from the face image together with the corresponding beautification parameter.
Optionally, the beautification parameter includes the skin brightness of the face image, and the third acquisition module 303 is configured so that when the first expression feature data corresponds to a first expression the obtained skin brightness is greater than or equal to a predetermined brightness, and when it corresponds to a second expression the skin brightness is less than that predetermined brightness.
Optionally, the beautification parameter includes the exposure used when taking the photo, and the third acquisition module 303 is configured so that when the first expression feature data corresponds to the first expression the exposure is greater than or equal to a preset exposure, and when it corresponds to the second expression the exposure is less than that preset exposure.
Thus, by obtaining the beautification parameter corresponding to the obtained first expression feature data, this embodiment lets the user obtain more beautification parameters when taking photos and select the beautification effects he or she likes. This solves the prior-art problem that beautification effects are not rich enough when taking photos, enriches the beautification effects, and improves the user experience.
4th embodiment:
As shown in Fig. 5, a structural block diagram of the terminal in the fourth embodiment of the present invention, the terminal 500 includes: at least one processor 501, a memory 502, at least one network interface 504 and a user interface 503. The components of the terminal 500 are coupled together by a bus system 505. It will be appreciated that the bus system 505 implements the connections and communication between these components; in addition to a data bus it includes a power bus, a control bus and a status signal bus, but for clarity all the buses are labeled as the bus system 505 in Fig. 5. The user interface 503 may include a display, a keyboard or a pointing device (for example a mouse, a trackball, a touch pad or a touch screen).
It will be appreciated that the memory 502 in the embodiments of the present invention may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM) or flash memory. The volatile memory may be a random access memory (RAM), used as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM) and direct Rambus RAM (DRRAM). The memory 502 of the systems and methods described in the embodiments of the present invention is intended to include, without being limited to, these and any other suitable types of memory.
In some embodiments the memory 502 stores the following elements, executable modules or data structures, or a subset or superset of them: an operating system 5021 and application programs 5022. Specifically, the memory 502 stores setting modules and configuration files. The operating system 5021 contains various system programs, such as a framework layer, a core library layer and a driver layer, for implementing various basic services and processing hardware-based tasks. The application programs 5022 contain various applications, such as a media player and a browser, for implementing various application services. The program implementing the method of the embodiments of the present invention may be contained in the application programs 5022.
In the embodiments of the present invention, by calling a program or instructions stored in the memory 502 (specifically, a program or instructions stored in the application programs 5022), the processor 501 is configured to: obtain a face image previewed by the camera of the terminal; obtain first expression feature data according to the face image; obtain, according to the obtained first expression feature data, the beautification parameter corresponding to the first expression feature data; and take a photo according to the beautification parameter.
The method disclosed in the above embodiments of the present invention may be applied in, or implemented by, the processor 501. The processor 501 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 501 or by instructions in the form of software. The processor 501 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and can implement or execute the methods, steps and block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in the embodiments of the present invention may be embodied directly as being executed by a hardware decoding processor, or executed by a combination of the hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory or a register. The storage medium is located in the memory 502; the processor 501 reads the information in the memory 502 and completes the steps of the above method in combination with its hardware. In addition, the memory 502 stores the correspondences used in the above method.
It can be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof. For a hardware implementation, the processing unit may be implemented in one or more application-specific integrated circuits (Application Specific Integrated Circuit, ASIC), digital signal processors (Digital Signal Processor, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field programmable gate arrays (Field Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions of this application, or a combination thereof.
For a software implementation, the techniques of the embodiments of the present invention may be implemented by modules (for example, procedures, functions, and the like) that perform the functions of the embodiments of the present invention. The software code may be stored in memory and executed by the processor. The memory may be implemented within the processor or external to the processor.
Optionally, the processor 501 is further configured to: extract the first expression feature data directly from the facial image; or, periodically upload the facial image to a server, and receive the first expression feature data, obtained according to the facial image, sent back by the server.
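The two alternatives — on-device extraction versus periodic upload to a server — can be structured as a single feature source with two backends. This is a hedged sketch: the class, the upload interval, and both extraction stubs are assumptions, not the patent's code:

```python
import time

class ExpressionFeatureSource:
    """Choose between on-device extraction and periodic server-side
    extraction of expression features (both backends are stubs)."""

    def __init__(self, use_server=False, upload_interval_s=1.0):
        self.use_server = use_server
        self.upload_interval_s = upload_interval_s
        self._last_upload = 0.0
        self._cached = None

    def _extract_locally(self, face_image):
        return {"source": "local", "size": len(face_image)}

    def _extract_on_server(self, face_image):
        # Stand-in for uploading the image and receiving features back.
        return {"source": "server", "size": len(face_image)}

    def get_features(self, face_image, now=None):
        if not self.use_server:
            return self._extract_locally(face_image)
        now = time.monotonic() if now is None else now
        # Upload only when the interval has elapsed; otherwise reuse
        # the most recent server response.
        if self._cached is None or now - self._last_upload >= self.upload_interval_s:
            self._cached = self._extract_on_server(face_image)
            self._last_upload = now
        return self._cached
```

The caching mirrors the "periodically uploads" wording: between uploads the terminal keeps using the last features the server returned.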
Optionally, as another embodiment, the processor 501 is further configured to: obtain the beautification parameter corresponding to the first expression feature data according to the obtained first expression feature data, a preset expression feature database, and a beautification parameter database.
Optionally, as another embodiment, the processor 501 is further configured to: match the first expression feature data against the preset expression feature data in the expression feature database, and obtain the preset expression feature data that matches the first expression feature data; and obtain, according to the correspondence between the preset expression feature data in the expression feature database and the preset beautification parameters in the beautification parameter database, the preset beautification parameter corresponding to the matched preset expression feature data, where the preset beautification parameter is the beautification parameter corresponding to the first expression feature data.
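The two-database lookup described above can be sketched with toy data: match incoming features against a preset expression-feature database, then follow the stored correspondence into the beautification-parameter database. The databases, feature vectors, and distance metric are all illustrative assumptions:

```python
# Sketch of the two-database lookup. The correspondence between the
# two preset databases is modeled here simply as a shared key.

EXPRESSION_DB = {
    "smile": [0.9, 0.1],   # toy feature vectors per preset expression
    "neutral": [0.5, 0.5],
    "frown": [0.1, 0.9],
}

BEAUTY_PARAM_DB = {
    "smile": {"skin_brightness": 1.2, "exposure": 1.1},
    "neutral": {"skin_brightness": 1.0, "exposure": 1.0},
    "frown": {"skin_brightness": 0.9, "exposure": 0.95},
}

def match_expression(features):
    """Return the preset expression whose feature vector is closest."""
    def dist(v):
        return sum((a - b) ** 2 for a, b in zip(features, v))
    return min(EXPRESSION_DB, key=lambda name: dist(EXPRESSION_DB[name]))

def beauty_params_for(features):
    matched = match_expression(features)
    return BEAUTY_PARAM_DB[matched]

print(beauty_params_for([0.85, 0.2]))  # closest preset is "smile"
```

A real implementation would use genuine expression descriptors and a learned or calibrated similarity measure, but the lookup structure is the same.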
Optionally, as another embodiment, the processor 501 is further configured to: after matching the first expression feature data against the preset expression feature data in the expression feature database, obtain the matching degree between the first expression feature data and the preset expression feature data; after obtaining the preset expression feature data that matches the first expression feature data, when the matching degree is greater than or equal to a preset threshold, proceed to the step of obtaining, according to the correspondence between the preset expression feature data in the expression feature database and the preset beautification parameters in the beautification parameter database, the preset beautification parameter corresponding to the preset expression feature data; and when the matching degree is less than the preset threshold, upload the facial image to the server, and receive the first expression feature data obtained according to the facial image and the beautification parameter corresponding to the first expression feature data, sent back by the server.
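The matching-degree branch above is a confidence gate: a local match at or above the preset threshold uses the on-device databases, and anything below it falls back to the server. In this sketch the threshold value, the similarity measure, and the server stub are all invented for illustration:

```python
# Confidence-gated parameter resolution: local databases when the
# match is good enough, server fallback otherwise.

PRESET_THRESHOLD = 0.8

def matching_degree(features, preset):
    # Toy similarity: 1 minus the mean absolute difference.
    diff = sum(abs(a - b) for a, b in zip(features, preset)) / len(preset)
    return max(0.0, 1.0 - diff)

def server_lookup(face_image):
    # Stand-in for uploading the image and receiving features + params.
    return {"skin_brightness": 1.0, "exposure": 1.0, "from_server": True}

def resolve_params(features, preset, local_params, face_image):
    degree = matching_degree(features, preset)
    if degree >= PRESET_THRESHOLD:
        return dict(local_params, from_server=False)
    return server_lookup(face_image)

local = {"skin_brightness": 1.2, "exposure": 1.1}
print(resolve_params([0.9, 0.1], [0.85, 0.15], local, face_image=None))
```

The design choice is the usual local-first trade-off: the device answers cheaply when it is confident, and spends a network round trip only on ambiguous expressions.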
Optionally, as another embodiment, the beautification parameter includes the skin brightness of the facial image, and the processor 501 is further configured so that: when the first expression feature data corresponds to a first expression, the skin brightness is greater than or equal to a preset brightness; and when the first expression feature data corresponds to a second expression, the skin brightness is less than the preset brightness.
Optionally, as another embodiment, the beautification parameter includes the exposure used when taking the picture, and the processor 501 is further configured so that: when the first expression feature data corresponds to the first expression, the exposure is greater than or equal to a preset exposure; and when the first expression feature data corresponds to the second expression, the exposure is less than the preset exposure.
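The two rules above (brightness and exposure keyed to the first or second expression) amount to a small per-expression parameter table: the first expression gets values at or above the presets, the second gets values below them. The numeric presets and multipliers here are invented for illustration; the patent only fixes the inequalities:

```python
# Per-expression parameter table implementing the two claimed rules:
# first expression  -> brightness/exposure >= presets
# second expression -> brightness/exposure <  presets

PRESET_BRIGHTNESS = 1.0
PRESET_EXPOSURE = 1.0

def params_for_expression(expression):
    if expression == "first":       # e.g. smiling
        return {"skin_brightness": PRESET_BRIGHTNESS * 1.15,
                "exposure": PRESET_EXPOSURE * 1.10}
    if expression == "second":      # e.g. sad
        return {"skin_brightness": PRESET_BRIGHTNESS * 0.90,
                "exposure": PRESET_EXPOSURE * 0.85}
    # Unrecognized expressions fall back to the presets themselves.
    return {"skin_brightness": PRESET_BRIGHTNESS,
            "exposure": PRESET_EXPOSURE}
```

Intuitively, a bright, slightly overexposed rendering flatters a smiling face, while a dimmer one suits a sad expression — which is exactly the effect-per-expression behavior the embodiment claims.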
The terminal 500 can implement each process implemented by the terminal in the foregoing embodiments; to avoid repetition, the details are not described here again.
The terminal provided in the above embodiment of the present invention obtains the facial image previewed by the terminal camera, obtains the first expression feature data according to the obtained facial image, then obtains, according to the obtained first expression feature data, the beautification parameter corresponding to the first expression feature data, and finally takes a picture according to the obtained beautification parameter. A corresponding beautification effect can thus be selected according to the user's facial expression, so that effect pictures of various expressions can be taken, which solves the problem in the prior art that beautification effects when taking pictures are not rich enough, and improves the user experience.
Fifth embodiment:
Fig. 6 shows a structural block diagram of a terminal in the fifth embodiment of the present invention. Specifically, the terminal 600 in Fig. 6 may be a mobile phone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA), an in-vehicle computer, or the like.
The terminal 600 in Fig. 6 includes a radio frequency (Radio Frequency, RF) circuit 610, a memory 620, an input unit 630, a display unit 640, a processor 660, an audio circuit 670, a WiFi (Wireless Fidelity) module 680, and a power supply 690. The input unit 630 may be used to receive numeric or character information entered by the user and to generate signal input related to the user settings and function control of the terminal 600. Specifically, in the embodiments of the present invention, the input unit 630 may include a touch panel 631. The touch panel 631, also called a touch screen, can collect touch operations by the user on or near it (for example, operations performed on the touch panel 631 by the user with a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connected devices according to a preset program. Optionally, the touch panel 631 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch orientation, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 660, and can receive and execute the commands sent by the processor 660. In addition, the touch panel 631 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 631, the input unit 630 may also include other input devices 632, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 640 may be used to display the information entered by the user or the information provided to the user, and the various menu interfaces of the terminal 600. The display unit 640 may include a display panel 641; optionally, the display panel 641 may be configured in the form of an LCD, an organic light-emitting diode (Organic Light-Emitting Diode, OLED), or the like.
It should be noted that the touch panel 631 may cover the display panel 641 to form a touch display screen. When the touch display screen detects a touch operation on or near it, it transmits the operation to the processor 660 to determine the type of the touch event, and the processor 660 then provides the corresponding visual output on the touch display screen according to the type of the touch event.
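The event flow just described — detection device → touch controller → contact coordinates → processor → visual output — can be modeled in a few lines. This is a toy model only; the classes, the raw-value range, and the event-to-response mapping are all invented for illustration:

```python
# Toy model of the touch event flow: raw sensor readings are scaled to
# screen coordinates by the controller, and the processor maps the
# event type to a visual response.

class TouchController:
    def __init__(self, screen_w, screen_h, raw_max=4095):
        self.screen_w, self.screen_h, self.raw_max = screen_w, screen_h, raw_max

    def to_coordinates(self, raw_x, raw_y):
        # Scale raw readings (0..raw_max) to screen pixels.
        return (raw_x * self.screen_w // self.raw_max,
                raw_y * self.screen_h // self.raw_max)

class Processor:
    RESPONSES = {"tap": "open", "long_press": "context_menu"}

    def handle(self, event_type, coords):
        # Decide the visual output from the touch event type.
        return (self.RESPONSES.get(event_type, "ignore"), coords)

ctrl = TouchController(1080, 1920)
cpu = Processor()
print(cpu.handle("tap", ctrl.to_coordinates(2048, 1024)))
```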
The touch display screen includes an application interface display area and a commonly-used-controls display area. The arrangement of the application interface display area and the commonly-used-controls display area is not limited; they may be arranged one above the other, side by side, or in any other arrangement that distinguishes the two display areas. The application interface display area may be used to display the interfaces of applications. Each interface may contain interface elements such as the icon of at least one application and/or a widget desktop control. The application interface display area may also be an empty interface containing no content. The commonly-used-controls display area is used to display controls with a high usage rate, for example, application icons such as a settings button, interface numbers, a scroll bar, and a phonebook icon.
The processor 660 is the control center of the terminal 600. Using various interfaces and lines connecting the various parts of the whole mobile phone, it runs or executes the software programs and/or modules stored in a first memory 621 and calls the data stored in a second memory 622 to perform the various functions of the terminal 600 and process data, thereby performing overall monitoring of the terminal 600. Optionally, the processor 660 may include one or more processing units.
In the embodiments of the present invention, by calling the software programs and/or modules stored in the first memory 621 and/or the data in the second memory 622 — where the first memory 621 includes a settings module and configuration files — the processor 660 is configured to: obtain the facial image previewed by the terminal camera; obtain the first expression feature data according to the facial image; obtain, according to the obtained first expression feature data, the beautification parameter corresponding to the first expression feature data; and take a picture according to the beautification parameter.
Optionally, the processor 660 is further configured to: extract the first expression feature data directly from the facial image; or, periodically upload the facial image to a server, and receive the first expression feature data, obtained according to the facial image, sent back by the server.
Optionally, as another embodiment, the processor 660 is further configured to: obtain the beautification parameter corresponding to the first expression feature data according to the obtained first expression feature data, a preset expression feature database, and a beautification parameter database.
Optionally, as another embodiment, the processor 660 is further configured to: match the first expression feature data against the preset expression feature data in the expression feature database, and obtain the preset expression feature data that matches the first expression feature data; and obtain, according to the correspondence between the preset expression feature data in the expression feature database and the preset beautification parameters in the beautification parameter database, the preset beautification parameter corresponding to the matched preset expression feature data, where the preset beautification parameter is the beautification parameter corresponding to the first expression feature data.
Optionally, as another embodiment, the processor 660 is further configured to: after matching the first expression feature data against the preset expression feature data in the expression feature database, obtain the matching degree between the first expression feature data and the preset expression feature data; after obtaining the preset expression feature data that matches the first expression feature data, when the matching degree is greater than or equal to a preset threshold, proceed to the step of obtaining, according to the correspondence between the preset expression feature data in the expression feature database and the preset beautification parameters in the beautification parameter database, the preset beautification parameter corresponding to the preset expression feature data; and when the matching degree is less than the preset threshold, upload the facial image to the server, and receive the first expression feature data obtained according to the facial image and the beautification parameter corresponding to the first expression feature data, sent back by the server.
Optionally, as another embodiment, the beautification parameter includes the skin brightness of the facial image, and the processor 660 is further configured so that: when the first expression feature data corresponds to a first expression, the skin brightness is greater than or equal to a preset brightness; and when the first expression feature data corresponds to a second expression, the skin brightness is less than the preset brightness.
Optionally, as another embodiment, the beautification parameter includes the exposure used when taking the picture, and the processor 660 is further configured so that: when the first expression feature data corresponds to the first expression, the exposure is greater than or equal to a preset exposure; and when the first expression feature data corresponds to the second expression, the exposure is less than the preset exposure.
The terminal provided in the above embodiment of the present invention obtains the facial image previewed by the terminal camera, obtains the first expression feature data according to the obtained facial image, then obtains, according to the obtained first expression feature data, the beautification parameter corresponding to the first expression feature data, and finally takes a picture according to the obtained beautification parameter. The terminal can thus select the corresponding beautification parameter according to the user's facial expression, so that effect pictures of various expressions can be taken according to the beautification parameter, which solves the problem in the prior art that beautification effects when taking pictures are not rich enough, makes the user's beautification effects richer, and improves the user experience.
The terminal 600 can implement each process implemented by the terminal in the foregoing embodiments; to avoid repetition, the details are not described here again.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of each example described in connection with the embodiments disclosed herein can be implemented with electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present invention.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division of the units is only a division of logical functions, and there may be other ways of division in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; that is, they may be located in one place, or they may be distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments of the present invention.
In addition, each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a portable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
The above are only specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and these shall be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the scope of the claims.
Claims (14)
1. A photographing method, characterized in that the method comprises:
obtaining a facial image previewed by a terminal camera;
obtaining first expression feature data according to the facial image;
obtaining, according to the obtained first expression feature data, a beautification parameter corresponding to the first expression feature data; and
taking a picture according to the beautification parameter.
2. The method according to claim 1, characterized in that the step of obtaining first expression feature data according to the facial image comprises:
the terminal extracting the first expression feature data directly from the facial image; or,
the terminal periodically uploading the facial image to a server, and receiving the first expression feature data, obtained according to the facial image, sent by the server.
3. The method according to claim 1, characterized in that the step of obtaining, according to the obtained first expression feature data, the beautification parameter corresponding to the first expression feature data comprises:
obtaining the beautification parameter corresponding to the first expression feature data according to the obtained first expression feature data, a preset expression feature database, and a beautification parameter database.
4. The method according to claim 3, characterized in that the step of obtaining the beautification parameter corresponding to the first expression feature data according to the obtained first expression feature data, the preset expression feature database, and the beautification parameter database comprises:
matching the first expression feature data against preset expression feature data in the expression feature database, and obtaining the preset expression feature data matching the first expression feature data; and
obtaining, according to a correspondence between the preset expression feature data in the expression feature database and preset beautification parameters in the beautification parameter database, the preset beautification parameter corresponding to the matched preset expression feature data, wherein the preset beautification parameter is the beautification parameter corresponding to the first expression feature data.
5. The method according to claim 4, characterized in that, after the matching of the first expression feature data against the preset expression feature data in the expression feature database, the method comprises:
obtaining a matching degree between the first expression feature data and the preset expression feature data;
and, after the obtaining of the preset expression feature data matching the first expression feature data, further comprises:
when the matching degree is greater than or equal to a preset threshold, proceeding to the step of obtaining, according to the correspondence between the preset expression feature data in the expression feature database and the preset beautification parameters in the beautification parameter database, the preset beautification parameter corresponding to the preset expression feature data; and
when the matching degree is less than the preset threshold, uploading the facial image to the server, and receiving the first expression feature data obtained according to the facial image and the beautification parameter corresponding to the first expression feature data, sent by the server.
6. The method according to claim 1, characterized in that the beautification parameter includes a skin brightness of the facial image, and the step of obtaining, according to the obtained first expression feature data, the beautification parameter corresponding to the first expression feature data comprises:
when the first expression feature data corresponds to a first expression, the skin brightness being greater than or equal to a preset brightness; and
when the first expression feature data corresponds to a second expression, the skin brightness being less than the preset brightness.
7. The method according to claim 1 or 6, characterized in that the beautification parameter includes an exposure used when taking the picture, and the step of obtaining, according to the obtained first expression feature data, the beautification parameter corresponding to the first expression feature data comprises:
when the first expression feature data corresponds to the first expression, the exposure being greater than or equal to a preset exposure; and
when the first expression feature data corresponds to the second expression, the exposure being less than the preset exposure.
8. A terminal, characterized in that the terminal comprises:
a first acquisition module, configured to obtain a facial image previewed by a terminal camera;
a second acquisition module, configured to obtain first expression feature data according to the facial image obtained by the first acquisition module;
a third acquisition module, configured to obtain, according to the first expression feature data obtained by the second acquisition module, a beautification parameter corresponding to the first expression feature data; and
a photographing module, configured to take a picture according to the beautification parameter obtained by the third acquisition module.
9. The terminal according to claim 8, characterized in that the second acquisition module comprises:
a first acquiring unit, configured to extract the first expression feature data directly from the facial image; or,
a second acquiring unit, configured to periodically upload the facial image to a server, and receive the first expression feature data, obtained according to the facial image, sent by the server.
10. The terminal according to claim 8, characterized in that the third acquisition module is configured to obtain the beautification parameter corresponding to the first expression feature data according to the obtained first expression feature data, a preset expression feature database, and a beautification parameter database.
11. The terminal according to claim 10, characterized in that the third acquisition module comprises:
a matching unit, configured to match the first expression feature data against preset expression feature data in the expression feature database, and obtain the preset expression feature data matching the first expression feature data; and
a third acquiring unit, configured to obtain, according to a correspondence between the preset expression feature data in the expression feature database and preset beautification parameters in the beautification parameter database, the preset beautification parameter corresponding to the matched preset expression feature data, wherein the preset beautification parameter is the beautification parameter corresponding to the first expression feature data.
12. The terminal according to claim 11, characterized in that the matching unit is further configured to:
obtain a matching degree between the first expression feature data and the preset expression feature data, and, after obtaining the preset expression feature data matching the first expression feature data, trigger the third acquiring unit when the matching degree is greater than or equal to a preset threshold;
and the third acquisition module further comprises a transceiving unit, configured to, when the matching degree is less than the preset threshold, upload the facial image to the server, and receive the first expression feature data obtained according to the facial image and the beautification parameter corresponding to the first expression feature data, sent by the server.
13. The terminal according to claim 8, characterized in that:
the beautification parameter includes a skin brightness of the facial image; and
the third acquisition module is configured so that: when the first expression feature data corresponds to a first expression, the obtained skin brightness is greater than or equal to a preset brightness; and when the first expression feature data corresponds to a second expression, the obtained skin brightness is less than the preset brightness.
14. The terminal according to claim 8 or 13, characterized in that:
the beautification parameter includes an exposure used when taking the picture; and
the third acquisition module is configured so that: when the first expression feature data corresponds to the first expression, the exposure is greater than or equal to a preset exposure; and when the first expression feature data corresponds to the second expression, the exposure is less than the preset exposure.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610362789.4A CN106056533B (en) | 2016-05-26 | 2016-05-26 | A kind of method and terminal taken pictures |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610362789.4A CN106056533B (en) | 2016-05-26 | 2016-05-26 | A kind of method and terminal taken pictures |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106056533A true CN106056533A (en) | 2016-10-26 |
CN106056533B CN106056533B (en) | 2019-08-20 |
Family
ID=57176090
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610362789.4A Active CN106056533B (en) | 2016-05-26 | 2016-05-26 | A kind of method and terminal taken pictures |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106056533B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013029097A2 (en) * | 2011-08-30 | 2013-03-07 | Monash University | System and method for processing sensor data for the visually impaired |
CN104144289A (en) * | 2013-05-10 | 2014-11-12 | 华为技术有限公司 | Photographing method and device |
CN104751408A (en) * | 2015-03-26 | 2015-07-01 | 广东欧珀移动通信有限公司 | Face image adjusting method and device |
CN104902177A (en) * | 2015-05-26 | 2015-09-09 | 广东欧珀移动通信有限公司 | Intelligent photographing method and terminal |
2016
- 2016-05-26 CN CN201610362789.4A patent/CN106056533B/en active Active
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105809637A (en) * | 2016-02-29 | 2016-07-27 | 广东欧珀移动通信有限公司 | Control method, control apparatus and electronic apparatus |
CN107369142A (en) * | 2017-06-29 | 2017-11-21 | 北京小米移动软件有限公司 | Image processing method and device |
CN107424117A (en) * | 2017-07-17 | 2017-12-01 | 广东欧珀移动通信有限公司 | Image beautification method and apparatus, computer-readable storage medium and computer device |
CN107592457B (en) * | 2017-09-08 | 2020-05-15 | 维沃移动通信有限公司 | Beautifying method and mobile terminal |
CN107592457A (en) * | 2017-09-08 | 2018-01-16 | 维沃移动通信有限公司 | Facial beautification method and mobile terminal |
CN107657652A (en) * | 2017-09-11 | 2018-02-02 | 广东欧珀移动通信有限公司 | Image processing method and device |
CN107705356A (en) * | 2017-09-11 | 2018-02-16 | 广东欧珀移动通信有限公司 | Image processing method and device |
CN107832784A (en) * | 2017-10-27 | 2018-03-23 | 维沃移动通信有限公司 | Image beautification method and mobile terminal |
CN107995415A (en) * | 2017-11-09 | 2018-05-04 | 深圳市金立通信设备有限公司 | Image processing method, terminal and computer-readable medium |
CN108537749A (en) * | 2018-03-29 | 2018-09-14 | 广东欧珀移动通信有限公司 | Image processing method, device, mobile terminal and computer readable storage medium |
CN108683841A (en) * | 2018-04-13 | 2018-10-19 | 维沃移动通信有限公司 | Image processing method and mobile terminal |
CN108683841B (en) * | 2018-04-13 | 2021-02-19 | 维沃移动通信有限公司 | Image processing method and mobile terminal |
CN108765264A (en) * | 2018-05-21 | 2018-11-06 | 深圳市梦网科技发展有限公司 | Image beautification method, apparatus, device and storage medium |
CN108765264B (en) * | 2018-05-21 | 2022-05-20 | 深圳市梦网科技发展有限公司 | Image beautifying method, device, equipment and storage medium |
CN111771372A (en) * | 2018-12-21 | 2020-10-13 | 华为技术有限公司 | Method and device for determining camera shooting parameters |
CN110390704A (en) * | 2019-07-11 | 2019-10-29 | 深圳追一科技有限公司 | Image processing method, device, terminal device and storage medium |
CN110765969A (en) * | 2019-10-30 | 2020-02-07 | 珠海格力电器股份有限公司 | Photographing method and system |
CN111445417A (en) * | 2020-03-31 | 2020-07-24 | 维沃移动通信有限公司 | Image processing method, image processing apparatus, electronic device, and medium |
CN111445417B (en) * | 2020-03-31 | 2023-12-19 | 维沃移动通信有限公司 | Image processing method, device, electronic equipment and medium |
CN116347220A (en) * | 2023-05-29 | 2023-06-27 | 合肥工业大学 | Portrait shooting method and related equipment |
CN116347220B (en) * | 2023-05-29 | 2023-07-21 | 合肥工业大学 | Portrait shooting method and related equipment |
Also Published As
Publication number | Publication date |
---|---|
CN106056533B (en) | 2019-08-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106056533A (en) | Photographing method and terminal | |
CN107197169A (en) | High dynamic range image shooting method and mobile terminal | |
CN107257439A (en) | Photographing method and mobile terminal | |
CN103207728A (en) | Method of providing augmented reality and terminal supporting the same | |
CN106780685B (en) | Dynamic picture generation method and terminal | |
CN106341608A (en) | Emotion-based shooting method and mobile terminal | |
CN106210526A (en) | Photographing method and mobile terminal | |
CN105847674A (en) | Preview image processing method based on mobile terminal, and mobile terminal | |
CN106231187A (en) | Image shooting method and mobile terminal | |
CN106101767A (en) | Screen recording method and mobile terminal | |
CN108037863A (en) | Method and apparatus for displaying an image | |
CN107370887A (en) | Expression generation method and mobile terminal | |
CN106101545A (en) | Image processing method and mobile terminal | |
US10078436B2 (en) | 2018-09-18 | User interface adjusting method and apparatus using the same |
CN106658141A (en) | Video processing method and mobile terminal | |
CN106777329A (en) | Image information processing method and mobile terminal | |
CN106339436A (en) | Picture-based shopping method and mobile terminal | |
CN108307068A (en) | Dual-screen display interface switching method, mobile terminal and storage medium | |
CN106803888A (en) | Image synthesis method and electronic device | |
CN107026982B (en) | Photographing method for a mobile terminal, and mobile terminal | |
CN106341538A (en) | Lyrics poster pushing method and mobile terminal | |
CN107087137A (en) | Method, apparatus and terminal device for presenting video | |
CN106909366A (en) | Widget display method and device | |
CN105101354A (en) | Wireless network connection method and device | |
CN106855744A (en) | Screen display method and mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||