CN109361852A - Image processing method and device - Google Patents

Image processing method and device Download PDF

Info

Publication number
CN109361852A
CN109361852A (Application CN201811217342.3A)
Authority
CN
China
Prior art keywords
textures
target
information
type
style
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811217342.3A
Other languages
Chinese (zh)
Inventor
夏康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201811217342.3A priority Critical patent/CN109361852A/en
Publication of CN109361852A publication Critical patent/CN109361852A/en
Pending legal-status Critical Current

Links

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80: Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention provides an image processing method and device. The method comprises: determining a first sticker element chosen by a target user; adding the first sticker element to a preview image; collecting preset biometric feature information from the preview image; and adjusting the first sticker element according to the preset biometric feature information. With the present invention, after the user chooses a first sticker element for the preview image, the chosen element can be adjusted according to the user's preset biometric feature information in the preview image. When that information changes, the method of the embodiments of the present invention can adjust the sticker element added to the preview image in real time according to the change, without the user having to adjust it manually each time, so that the added first sticker element is automatically adapted to the actual content of the preview image.

Description

Image processing method and device
Technical field
The present invention relates to the technical field of image processing, and in particular to an image processing method and device.
Background technique
At present, when a mobile terminal shoots a photo, it can add face-related sticker elements (such as hair accessories, animated expressions, or makeup) to the face region of the preview image, and it can also add sticker elements (such as logos or light beams) to the entire preview image, so that the captured image is fused with the added sticker elements and photography becomes more enjoyable.
In the prior art, however, sticker elements are added to a captured image mainly by having the user, during the shooting preview stage, select one sticker element from the many sticker elements offered by the system, so that the selected element is added to the preview image. If, after selecting a sticker element, the user finds that it does not suit the preview image, the user must manually select another one, repeating the attempt until a sticker element suitable for the preview image is chosen and the photo is taken.
The prior art therefore clearly has the problem that the added sticker element cannot be adjusted automatically according to the actual content of the preview image.
Summary of the invention
Embodiments of the present invention provide an image processing method and device to solve the problem in related-art image sticker schemes that the added sticker element cannot be adjusted automatically according to the actual content of the preview image.
In order to solve the above-mentioned technical problem, the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides an image processing method, the method comprising:
determining a first sticker element chosen by a target user;
adding the first sticker element to a preview image;
collecting preset biometric feature information from the preview image; and
adjusting the first sticker element according to the preset biometric feature information.
In a second aspect, an embodiment of the present invention further provides an image processing device, the device comprising:
a first determining module, configured to determine the first sticker element chosen by the target user;
an adding module, configured to add the first sticker element to the preview image;
a collecting module, configured to collect the preset biometric feature information from the preview image; and
a first adjusting module, configured to adjust the first sticker element according to the preset biometric feature information.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image processing method described above.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the image processing method described above.
In this way, after the user chooses a first sticker element for the preview image, embodiments of the present invention can adjust the chosen first sticker element according to the user's preset biometric feature information in the preview image. When that information changes, the method can adjust the sticker element added to the preview image in real time according to the change, without the user having to adjust it manually each time, so that the added first sticker element is automatically adapted to the actual content of the preview image.
Detailed description of the invention
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and persons of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is the flow chart of the image processing method of one embodiment of the invention;
Fig. 2 is the flow chart of the image processing method of another embodiment of the present invention;
Fig. 3 is the flow chart of the image processing method of another embodiment of the invention;
Fig. 4 is the block diagram of the image processing apparatus of one embodiment of the invention;
Fig. 5 is the hardware structural diagram of the mobile terminal of one embodiment of the invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, a flow chart of an image processing method according to an embodiment of the present invention is shown. The method is applied to a mobile terminal and may specifically include the following steps:
Step 101: determine a first sticker element chosen by the target user.
After the camera function of the mobile terminal starts, it can enter a photographing mode. In the photographing mode, the method of the embodiments of the present invention can offer multiple types of sticker elements, displayed grouped by type. Sticker element types (i.e., sticker types) include, but are not limited to, beauty-makeup stickers, expression stickers, text stickers, filter stickers, music stickers, and decoration stickers (decorations include, but are not limited to, hair accessories, jewellery, beards, hats, and so on).
Any one type of sticker element may include sticker elements of multiple styles, where the style of a sticker may be either visible or invisible to the user.
A target user using the camera function can select one sticker type from the multiple types offered in camera mode, and then choose, from the multiple sticker elements of that type, the sticker element to be used for this shot, i.e., the first sticker element.
Step 102: add the first sticker element to the preview image.
After the user has chosen the sticker element, the method of the embodiments of the present invention can add it to the preview image. The specific position of the first sticker element in the preview image depends on the sticker type to which it belongs. For example, if the first sticker element chosen by the target user is an expression sticker, it can be added to the face region of the preview image; if it is a filter sticker, it can be added to the whole preview image rather than just the face region. The positions of other types of sticker elements can likewise be set flexibly according to the content of the preview image and the type of the sticker element, which is not repeated here.
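As a rough illustration of this type-dependent placement rule, the following sketch maps a sticker type to the region it should cover. The type names, region representation, and grouping are assumptions for illustration, not taken from the patent:

```python
# Hypothetical placement rule: face-related sticker types cover the detected
# face box, frame-level types cover the whole preview frame.
FACE_TYPES = {"expression", "beauty_makeup", "decoration"}
FULL_FRAME_TYPES = {"filter", "text", "logo"}

def placement_region(sticker_type, face_box, frame_box):
    """Return the rectangle a sticker of the given type should cover."""
    if sticker_type in FACE_TYPES:
        return face_box
    if sticker_type in FULL_FRAME_TYPES:
        return frame_box
    raise ValueError(f"unknown sticker type: {sticker_type}")
```

A real implementation would also handle multiple detected faces and stickers whose anchor point depends on landmarks rather than a single box.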
Step 103: collect preset biometric feature information from the preview image.
When the preview image contains a face image, preset biometric feature information can be collected from it. The preset biometric feature information may include any of the following: facial expression features, facial appearance features, hair features, and so on.
Step 104: adjust the first sticker element according to the preset biometric feature information.
The method of the embodiments of the present invention can flexibly adjust the first sticker element chosen by the user according to the preset biometric feature information of the person in the preview image.
In this way, after the user chooses a first sticker element for the preview image, embodiments of the present invention can adjust the chosen first sticker element according to the user's preset biometric feature information in the preview image. When that information changes, the method can adjust the sticker element added to the preview image in real time according to the change, without the user having to adjust it manually each time, so that the added first sticker element is automatically adapted to the actual content of the preview image.
Optionally, in one embodiment, when step 104 is performed, the first sticker element can be adjusted in any one, or a combination, of the following three ways.
Way one: according to the preset biometric feature information, switch the first sticker element to a second sticker element.
That is, the first sticker element chosen by the user is replaced with another sticker element according to the biometric feature information of the person in the preview image.
Way two: according to the preset biometric feature information, add a second sticker element to the preview image.
That is, a second sticker element is added to the preview image according to the biometric feature information of the person in it, while the originally chosen first sticker element remains in the preview image.
Way three: according to the preset biometric feature information, adjust an attribute of the first sticker element.
Attributes of the first sticker element may include, but are not limited to, size, colour, and brightness.
In this way, an attribute of the first sticker element added to the preview image can be adjusted according to the biometric feature information of the person in the preview image, so that the adjusted first sticker element matches the current preview image.
Thus, when the embodiments of the present invention flexibly adjust the first sticker element added to the preview image according to the target user's preset biometric feature information, the adjustment may include, but is not limited to, switching the sticker element, adding another sticker element, or adjusting an attribute of the added sticker element, making the adjustment of sticker elements more flexible and controllable.
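The three adjustment ways could be dispatched as in this minimal sketch, where the rule for picking a mode is left to the caller and the sticker/feature representations are assumptions:

```python
def adjust_stickers(stickers, features, mode, second_sticker=None):
    """Apply one of the three adjustment ways to the sticker list.

    stickers: list of sticker dicts currently on the preview (first element
    is the user-chosen first sticker); returns the new sticker list.
    """
    if mode == "switch":        # way one: replace the first sticker
        return [second_sticker] + stickers[1:]
    if mode == "add":           # way two: keep it and add a second sticker
        return stickers + [second_sticker]
    if mode == "modify":        # way three: adjust attributes in place
        updated = dict(stickers[0])
        updated.update(features.get("attribute_overrides", {}))
        return [updated] + stickers[1:]
    raise ValueError(f"unknown mode: {mode}")
```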
Optionally, in one embodiment, before step 101, the method of the embodiments of the present invention may further include:
determining, according to multiple users' usage record data for sticker elements, the sticker styles of the sticker elements, and the sticker types to which the sticker elements belong, the sticker type that each user likes among the multiple sticker types, and, among the multiple sticker styles of the type that each user likes, the sticker style that each user likes.
As described above, the sticker elements provided by the embodiments of the present invention are divided into many types, and the sticker elements of any one type can in turn be divided into groups by sticker style. The sticker elements of any one type may therefore include elements of one or more sticker styles.
The definition of sticker type has been elaborated above and is not repeated here. As for sticker style, it can be understood as a basis for grouping all the sticker elements under a given sticker type; the specific styles defined can be configured flexibly.
Some concrete examples:
For beauty-makeup stickers, styles include, but are not limited to, smoky makeup, fiery heavy makeup, fresh light makeup, nude makeup, American-style makeup, and so on.
For text stickers, styles can be divided by font and include, but are not limited to, cartoon fonts, modern fonts, English fonts, classical-Chinese fonts, and so on.
As for the usage record data of sticker elements: after a user takes photos with the camera function of the method of the present invention, the sticker element the user finally chose for each shot can be recorded. A user's sticker usage record data may therefore include which sticker elements the user has used and the number of times (or frequency) each was used. With the usage record data recorded by the system, the sticker elements a user commonly uses can be determined.
Since every sticker element in the embodiments of the present invention has a sticker type and a sticker style it belongs to, the sticker type of a user's commonly used sticker elements, and the style they belong to within that type, can also be determined. Thus, from a user's usage record data, the styles of the sticker elements, and the types the elements belong to, the sticker type the user likes can be determined among the preset multiple sticker types, and the style the user likes can be determined among the multiple styles of that type.
Furthermore, since a mobile terminal may be used by multiple users for photographing, the usage record data of the embodiments of the present invention is the usage record data of multiple users for sticker elements. When generating each user's usage record data, the user's identity can be determined before shooting (for example, by voiceprint recognition, fingerprint recognition, or face recognition), so that separate usage record data can be generated for each user.
In this way, according to each user's usage record data and the type and style of each sticker element, the sticker type each user commonly uses, and the style commonly used within that type, are determined: the type to which a user's commonly used sticker elements belong, and the style those elements belong to within that type, are taken as the sticker type the user likes and the sticker style the user likes among the multiple styles of that type.
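Deriving the liked type, and the liked style within that type, from usage records amounts to frequency counting; the record fields below are an assumed layout, not specified by the patent:

```python
from collections import Counter

def preferred_type_and_style(records):
    """records: one dict per photo taken, e.g. {"type": ..., "style": ...}.
    Returns (most-used sticker type, most-used style within that type)."""
    fav_type = Counter(r["type"] for r in records).most_common(1)[0][0]
    fav_style = Counter(
        r["style"] for r in records if r["type"] == fav_type
    ).most_common(1)[0][0]
    return fav_type, fav_style
```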
Optionally, in one embodiment, before step 101, the method of the embodiments of the present invention may further include:
determining, according to the sticker type each user likes, a first target sticker type that the target user likes;
Here, the identity of the target user using the camera function can be confirmed (for example, by voiceprint recognition, fingerprint recognition, or face recognition), and the sticker type the target user likes is then looked up among the types each user likes, as determined above.
determining, according to the sticker style each user likes, a first target sticker style that the target user likes;
Here, the style the target user likes can be looked up among the styles each user likes, as determined in the above embodiment.
pushing multiple first sticker elements of the first target sticker style of the first target sticker type to the image preview interface.
That is, the multiple first sticker elements belonging to the first target sticker style of the first target sticker type can be pushed to the image preview interface.
Then, when step 101 is performed, the first sticker element chosen by the target user can be determined among the multiple pushed first sticker elements.
In this way, the embodiments of the present invention generate, from each user's sticker usage record data, the sticker type each user likes and the style liked within that type. When the user next uses the camera, the multiple sticker elements of the liked style within the liked type can be pushed to the image preview interface for the user to choose from, and the first sticker element the user selects is added to the preview image. Since the system offers numerous sticker types, each containing many sticker elements, the user would otherwise need many attempts to find a sticker element meeting the shooting needs. By using the user's usage record data to recommend commonly used sticker elements, the method of the embodiments of the present invention avoids the tedious process of choosing among numerous elements: the user only needs to pick the sticker element to use for this shot from the recommended ones.
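The push step reduces to an identity-keyed lookup once the preference has been mined; the preference store and catalogue layout below are illustrative assumptions:

```python
def recommend_for_user(user_id, preferences, catalogue):
    """preferences: user_id -> (liked type, liked style);
    catalogue: (type, style) -> list of sticker elements.
    Returns the stickers to push to the preview interface."""
    if user_id not in preferences:
        return []  # no history yet: fall back to the full sticker picker
    return catalogue.get(preferences[user_id], [])
```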
Optionally, in one embodiment, when the preset biometric feature information includes a facial expression feature, step 104 can be implemented by first determining target expression type information matching the facial expression feature, then determining target colour information matching the target expression type information, and finally adjusting the colour information of the first sticker element to the target colour information.
Specifically, the target expression type information matching the facial expression feature can be determined by a pre-trained expression classification model. An expression classification model can be trained in advance to classify the expression of an input facial expression feature; it can be any neural network model, which is not repeated here.
The expression types the pre-trained expression classification model can recognise include, but are not limited to, smiling, worried, excited, sad, and so on.
Note that determining the target expression type information matching the facial expression feature is not limited to classification by the above expression classification model; other schemes can also be used, which are not repeated here.
In addition, the embodiments of the present invention can preset the colour information corresponding to each expression type.
For example, a smiling expression corresponds to warm orange, and a sad expression corresponds to cool black.
The colour set for each expression type can be configured flexibly as needed and is not limited to the above examples. The principle is that the colour should reflect the user's mood corresponding to that expression.
Then, by means of the preset correspondence between expression type information and colour information, the target colour information corresponding to the target expression type information of the person in the preview image can be determined, and the colour information of the first sticker element added to the preview image is adjusted to that target colour information.
In this way, by recognising the facial expression feature in the preview image, the embodiments of the present invention determine the expression type information of the person in the preview image, then determine the corresponding target colour information and adjust the colour information of the added first sticker element to it. The colour of the sticker element the user has chosen can thus follow the changes in the user's mood in the preview image; by changing the colour style of the preview image's sticker element in real time, the photographed user's mood changes are reflected in real time, making the captured photo rich in artistic beauty.
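A minimal sketch of the expression-to-colour adjustment follows. Only the smile/orange and sad/black pairs come from the text above; the other entries, and the dictionary representation of a sticker, are assumptions:

```python
EXPRESSION_COLORS = {
    "smile": "orange",  # warm colour, stated in the text
    "sad": "black",     # cool colour, stated in the text
    "excited": "red",   # assumed example
    "worried": "grey",  # assumed example
}

def recolor_sticker(sticker, expression):
    """Set the sticker's colour to the one preset for the expression type."""
    color = EXPRESSION_COLORS.get(expression)
    if color is None:
        return sticker  # unrecognised expression: leave the sticker unchanged
    return {**sticker, "color": color}
```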
Optionally, when the preset biometric feature information includes a height feature, step 104 can adjust the size of the first sticker element to a size matching the height feature.
The method of the embodiments of the present invention can collect the height feature of a person in the preview image (a whole-body image is required) and thereby determine the height information. In addition, the embodiments of the present invention can preset a correspondence between height information and sticker-element size, so that the size matching the height feature can be looked up in this correspondence, and the size of the first sticker element is finally adjusted to the size found.
In this way, the embodiments of the present invention can adjust the size of the added sticker element according to the height information of the person in the preview image so that the two match. When the height of the person in the preview image changes (that is, the person in the preview image has changed), the method of the embodiments of the present invention can flexibly adjust the size of the added sticker element in real time, keeping it always matched to the height information of the latest person in the preview image.
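The preset height-to-size correspondence could be a simple banded lookup table, as sketched below; the bands and pixel sizes are invented for illustration, since the patent only says such a mapping exists:

```python
# (upper height bound in cm, sticker size in px): assumed example bands.
HEIGHT_TO_SIZE = [(120, 32), (160, 48), (float("inf"), 64)]

def sticker_size_for_height(height_cm):
    """Look up the preset sticker size matching a measured height."""
    for max_height, size in HEIGHT_TO_SIZE:
        if height_cm <= max_height:
            return size
```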
Optionally, in one embodiment, referring to Fig. 2, when the preset biometric feature information includes facial appearance features, step 104 can be implemented as follows:
S21: determine an age attribute matching the facial appearance features.
The age attribute is an attribute related to age, including, but not limited to, age bracket and age type.
Preset age types may include, but are not limited to, children and adolescents, adults, and the middle-aged and elderly.
Preset age brackets may include, but are not limited to, 0 to 6 years, 7 to 17 years, 18 to 40 years, and 41 years and above.
Note that the preset age types and age brackets are not limited to the above examples.
In addition, when the age attribute includes both an age type and an age bracket, the correspondence between them may include, but is not limited to, one age type corresponding to one or more age brackets, and one age bracket corresponding to one or more age types.
Taking an age attribute including an age bracket as an example, when S21 is performed the bracket can be determined by a pre-trained age-bracket classification model. An age-bracket classification model can be trained in advance to classify the age bracket of input facial appearance features; it can be any neural network model, which is not repeated here. By inputting the collected facial appearance features into the age-bracket classification model, the matching age bracket can be output, identifying the age bracket of the user in the preview image.
S22: determine, among the multiple sticker types, a second target sticker type matching the age attribute.
As described above, the embodiments of the present invention can preset multiple sticker types and divide them in advance according to age attribute (illustrated here with age brackets; the implementation for age types is similar and not repeated), so that each age bracket has a matching sticker type. This step can therefore look up, in the preset correspondence between age brackets and sticker types, the second target sticker type matching the age bracket determined in S21.
S23: determine, among the multiple sticker styles of the second target sticker type, a second target sticker style matching the age attribute.
In addition, the embodiments of the present invention can also set a matching sticker style for each age bracket in advance. Note that this correspondence need not be set against the styles of all sticker types: for example, if age bracket 1 corresponds to sticker type 1, then when setting the style for age bracket 1, one style matching age bracket 1 is selected from the multiple styles of sticker type 1 and set as the style matched to that bracket.
This step can therefore determine, among the multiple styles of the second target sticker type, the second target sticker style matching the age bracket determined in S21.
S24: if the multiple second sticker elements of the second target sticker style do not include the first sticker element, push the multiple second sticker elements.
That is, if the multiple second sticker elements of the second target sticker style do not include the first sticker element the user initially chose, the first sticker element does not suit the age of the person in the preview image, and the multiple second sticker elements suited to that age can be pushed to the image preview interface.
In practice, user 1 may turn on the camera function and point it at their own face, then select a first sticker element from those recommended by the system, after which the camera comes to capture the face of another user 2 instead; in such a case the first sticker element may not belong to the multiple second sticker elements.
S25: determine the second sticker element selected among the multiple second sticker elements.
For example, user 2 can choose the sticker element to be used for this shot from the multiple second sticker elements.
S26: switch the first sticker element in the preview image to the selected second sticker element.
That is, the first sticker element added to the preview image can be switched to the selected second sticker element.
In this way, the embodiments of the present invention identify the age attribute of the person in the preview image and determine the sticker type and style matching that age attribute. When the sticker element the user chose in step 101 does not belong to that type and style, that is, when the user in front of the camera is replaced during preview capture and before the actual shot, the method of the embodiments of the present invention can identify the facial features in the preview image, determine the age attribute, and push sticker elements that match it.
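Steps S21 to S26 could be sketched as the following switch-if-unsuitable check; the age brackets, the type/style mappings, and the catalogue layout are illustrative assumptions:

```python
# Assumed correspondences from age bracket to sticker type and style (S22-S23).
AGE_BRACKET_TYPE = {"0-6": "cartoon", "18-40": "beauty_makeup", "41+": "decoration"}
AGE_BRACKET_STYLE = {"0-6": "cute", "18-40": "nude_makeup", "41+": "beard"}

def adjust_for_age(current_sticker, age_bracket, catalogue, choose):
    """If the current sticker is not in the age-matched group, push that group
    (via the user-selection callback `choose`) and switch to the pick (S24-S26)."""
    group = catalogue[(AGE_BRACKET_TYPE[age_bracket],
                       AGE_BRACKET_STYLE[age_bracket])]
    if current_sticker in group:   # S24: already suitable, keep it
        return current_sticker
    return choose(group)           # S25-S26: user picks, sticker is switched
```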
Such as the first textures element in step 101 is then some makeups element that the triggering of user 1 is chosen changes into The father of user 1 faces camera, so that the personage in preview image is its father, in this way, the embodiment of the present invention Method is pushed by the textures element (such as beard decorative element) that the age attribute with its father matches, to avoid father Father uses the awkward scene of makeups element.
Optionally, in one embodiment, if the preview image in the embodiment shown in Fig. 1 includes the facial images of multiple users, for example a family of three, the method of the Fig. 1 embodiment would add the above first textures element to each of the three faces in the preview image.
For example, among the three people the owner of the device is the daughter, and only the daughter has used the method of the embodiment of the present invention to take photos with textures; therefore, the system only stores the owner's usage record data for textures elements, and the selected first textures element is a textures element the owner usually likes, such as an exaggerated beauty-makeup element. In this way, the face regions of the family of three in the preview image are all added with the exaggerated beauty-makeup element, which is clearly not appropriate for the parents.
With the method of the Fig. 2 embodiment of the present invention, the multiple second textures elements consistent with the age attributes of the parents in the preview image can be recommended to them respectively: for example, multiple nude-makeup elements of the nude-makeup style are recommended for the mother's facial features, and various beard elements of the decoration textures are recommended for the father's facial features. Each user can then select the textures element to be used for the photo from the recommended groups. Of course, a corresponding group of second textures elements can also be recommended for the owner's facial features, which is not described again here. In the end, each face region in the preview image is added with a textures element that matches the age attribute of that face; no face region is added with a textures element unsuited to its age, and textures elements can be recommended in a personalized way according to the ages of the multiple faces.
Alternatively, the method of the embodiment of the present invention can also preset combined textures suits for different combinations of people of different ages, such as a brothers textures suit, a family-of-three textures suit, a sisters textures suit, and so on. When the preview image includes multiple facial images, the relationship between the multiple users (for example, a family of three) can be determined according to the age attribute corresponding to each facial image, so that the textures element combinations in the family-of-three textures suit can be pushed to the users. For any one textures element combination, the textures elements corresponding to facial features of different age attributes differ; for example, the textures elements corresponding to the faces of the mother, the father, and the daughter are all different.
Optionally, in one embodiment, when the default biological information includes a hair feature, step 104 can be implemented as follows:
Firstly, determining the hairstyle information and hair-color information of the target user according to the hair feature;
Wherein, the method of the embodiment of the present invention can collect the hair feature of the preview image, determine the contour information of the hair feature, and determine the user's hairstyle information according to the contour information; furthermore, the target user's hair-color information can be determined according to the RGB values of the hair feature.
Then, in the multiple textures types, determining the second target textures type that matches the hairstyle information;
Wherein, as described above, the embodiment of the present invention can preset multiple textures types, and the multiple textures types can be divided in advance according to different hairstyles, so that every hairstyle has a matching textures type. Therefore, this step can look up, in the preset correspondence between hairstyles and textures types, the second target textures type matching the hairstyle information in the preview image. For example, the second target textures type is hair decoration textures.
Then, in the multiple textures styles of the second target textures type, determining the second target textures style that matches the hairstyle information;
In addition, the embodiment of the present invention can also set a matching textures style for each hairstyle in advance. It should be noted that a hairstyle need not be mapped to the textures styles of all textures types. For example, if the textures type corresponding to hairstyle information 1 is textures type 1, then when setting the corresponding textures style for hairstyle information 1, one of the multiple textures styles of textures type 1 is selected as the textures style matching hairstyle information 1, and the selected textures style is set as the textures style matching hairstyle information 1.
Therefore, in this step, the second target textures style matching the hairstyle information in the preview image can be determined among the multiple textures styles of the second target textures type. For example, the second target textures style is the cute style of the hair decoration textures.
Subsequently, in the multiple second textures elements of the second target textures style, determining the second target textures element that matches the hair-color information;
Wherein, there are multiple hair decoration elements of the cute style, and each of them can be set in advance with corresponding hair-color information: for example, maroon corresponds to hair decoration element 1 of the cute style, and black corresponds to hair decoration element 2 of the cute style. Therefore, in the multiple hair decoration elements of the cute style, the second target textures element matching the hair-color information (for example, maroon) — that is, hair decoration element 1 of the cute style — can be determined.
Finally, adding the second target textures element to the reference object in the preview image.
Wherein, hair decoration element 1 of the cute style can be added to the reference object in the preview image. The reference object here is the person in the preview image, and the position where hair decoration element 1 is added is on the person's hair.
It should be noted that the description here only takes hair decoration textures as an example; the second target textures type in this example is not limited to hair decoration textures and can also be beauty-makeup textures and the like.
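Under stated assumptions, the hairstyle and hair-color lookup chain described above can be sketched as follows (the lookup tables and element names are invented for illustration; the patent only specifies that such correspondences are preset):

```python
# Preset correspondences (hypothetical contents).
HAIRSTYLE_TO_TYPE = {"long_straight": "hair_decoration"}
HAIRSTYLE_TO_STYLE = {"long_straight": "cute"}
# (textures type, textures style, hair color) -> second target textures element
ELEMENTS = {
    ("hair_decoration", "cute", "maroon"): "cute_decoration_1",
    ("hair_decoration", "cute", "black"):  "cute_decoration_2",
}

def pick_hair_element(hairstyle, hair_color):
    t = HAIRSTYLE_TO_TYPE[hairstyle]     # second target textures type
    s = HAIRSTYLE_TO_STYLE[hairstyle]    # second target textures style
    return ELEMENTS[(t, s, hair_color)]  # element matching the hair color

print(pick_hair_element("long_straight", "maroon"))  # → cute_decoration_1
```

Note that the style table is only consulted for styles under the type already matched to the hairstyle, mirroring the remark that a hairstyle need not be mapped to the styles of all textures types.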
In this way, the embodiment of the present invention identifies the hair feature of the person in the preview image, determines the person's hairstyle information and hair-color information according to the hair feature, looks up the textures type and textures style matching the hairstyle information among the preset textures types, determines, among the multiple second textures elements under that textures type and textures style, the second target textures element matching the hair-color information, and adds the second target textures element to the preview image. A matching textures element can thus be added flexibly according to the hairstyle information and hair-color information of the person in the preview image, so that the textures element added in the preview image is consistent with the actual content of the preview image.
Likewise, the method of the embodiment of the present invention can also be applied to a scene in which the preview image includes multiple facial images; the principle is similar to the multiple-facial-image scene of the previous embodiment and is not described again here.
Optionally, in one embodiment, referring to Fig. 3, after step 102, the method of the embodiment of the present invention may further include:
Step 105, the scene characteristic of the preview image is acquired;
Wherein, the scene characteristic is a feature used to identify the scene in which the picture in the preview image is located.
The scenes that the embodiment of the present invention can identify include but are not limited to a night scene, a park scene, a subway scene, an office scene, a concert scene, and the like.
Step 106, in the multiple textures types, determining the second target textures type that matches the scene characteristic;
Wherein, after the scene characteristic is collected, the scene can be identified using the scene characteristic; specifically, an AI chip can perform machine learning to realize the identification of the scene.
For example, if the preview image contains many green plants, the scene is a park scene.
Wherein, as described above, the embodiment of the present invention can preset multiple textures types, and the multiple textures types can be divided in advance according to different scenes, so that every scene has a matching textures type. Therefore, this step can look up, in the preset correspondence between scenes (or scene characteristics) and textures types, the second target textures type matching the scene characteristic of the preview image. For example, the second target textures type is filter textures.
Step 107, determining the target color information and target brightness information that match the scene characteristic;
Wherein, the embodiment of the present invention can set in advance, for each scene (scene characteristic), color information and brightness information that embody the scene: for example, the color corresponding to a night scene is a cool dark gray and the corresponding brightness is a darker brightness, while the color corresponding to a park scene is a warm yellow and the corresponding brightness is a brighter brightness.
In this way, by looking up the brightness information and color information preset for the different scenes (scene characteristics), the target color information and target brightness information matching the scene characteristic can be determined.
Step 108, in the multiple second textures elements of the second target textures type, searching for the second target textures element whose color information matches the target color information and whose brightness information matches the target brightness information;
Continuing with filter textures as the second target textures type: the filter textures include multiple second textures elements of different colors and different brightnesses, so the second target textures element whose brightness information matches the target brightness information and whose color information matches the target color information can be searched for among these multiple filter textures.
Here, a color match can mean that the color difference is less than a preset color-difference threshold, and a brightness match can mean that the brightness difference is less than a preset brightness threshold.
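The threshold-based search of step 108 can be sketched as follows (a sketch under stated assumptions: the per-channel RGB difference metric, the threshold values, and the element records are all illustrative choices, not specified by the patent):

```python
def color_diff(c1, c2):
    # One possible color-difference metric: per-channel absolute difference over RGB
    return sum(abs(a - b) for a, b in zip(c1, c2))

def find_second_target(elements, target_color, target_brightness,
                       color_thresh=60, brightness_thresh=0.2):
    """Step 108: return the first element whose color difference and
    brightness difference both fall below the preset thresholds."""
    for e in elements:
        if (color_diff(e["color"], target_color) < color_thresh and
                abs(e["brightness"] - target_brightness) < brightness_thresh):
            return e
    return None

filters = [
    {"name": "night", "color": (40, 40, 60), "brightness": 0.3},
    {"name": "park", "color": (230, 210, 120), "brightness": 0.8},
]
match = find_second_target(filters, target_color=(235, 215, 125),
                           target_brightness=0.75)
print(match["name"])  # → park
```

A perceptual color-difference formula could replace the simple RGB metric without changing the structure of the search.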
Step 109, if the textures type of the first textures element is the same as the second target textures type, adjusting the color of the first textures element to a color matching the target color information and adjusting the brightness of the first textures element to a brightness matching the target brightness information;
Wherein, if the textures type of the first textures element is the same as the second target textures type, it indicates that the first textures element initially chosen by the target user is also a filter textures element, i.e., that the textures element the user selected suits the scene of the preview image (rather than being a textures element unsuited to the scene, such as a beauty-makeup element). Therefore, the brightness and color of the added first textures element can be adjusted directly: its brightness is adjusted to the target brightness and its color is adjusted to the target color, so that the chromaticity and brightness of the adjusted first textures element better match the scene characteristic of the preview image.
Step 110, if the textures type of the first textures element differs from the second target textures type, switching the first textures element to the second target textures element.
Wherein, if the textures type of the first textures element differs from the second target textures type, it indicates that the first textures element initially chosen by the target user is not a filter textures element, i.e., that the textures element the user selected does not suit the scene of the preview image (for example, the first textures element is a beauty-makeup element). In that case, the first textures element added in the preview image can be deleted directly and the second target textures element added instead. Since the textures types of the first textures element and the second target textures element are not identical, the positions of the two elements in the preview image generally differ; the switching here therefore does not necessarily mean a complete substitution in position.
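The adjust-or-switch decision of steps 109 and 110 can be sketched as follows (a minimal sketch; the dictionary fields and the example elements are illustrative assumptions):

```python
def apply_scene_decision(first, second_target, target_color, target_brightness):
    """Step 109: same textures type -> retune the first element's color and
    brightness. Step 110: different type -> switch to the second target element."""
    if first["type"] == second_target["type"]:
        # Step 109: keep the element, adjust its attributes to the targets
        return dict(first, color=target_color, brightness=target_brightness)
    return second_target  # Step 110: replace the unsuitable element

makeup = {"type": "makeup", "color": (255, 0, 0), "brightness": 0.9}
park_filter = {"type": "filter", "color": (230, 210, 120), "brightness": 0.8}
result = apply_scene_decision(makeup, park_filter, (230, 210, 120), 0.8)
print(result["type"])  # → filter  (the makeup element is switched out)
```

As the description notes, a real switch also involves repositioning, since elements of different types generally occupy different positions in the preview image; the sketch only models the type decision.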
It should be noted that the description here only takes filter textures as an example of the second target textures type; in other embodiments, the second target textures type can also be another kind of textures, which the present invention does not limit.
In this way, the embodiment of the present invention collects the scene characteristic of the preview image, determines the textures type, target color information, and target brightness information matching the scene characteristic, and then searches, among the multiple textures elements of that textures type, for the second target textures element whose brightness information matches the target brightness information and whose color information matches the target color information. When the textures type of the first textures element chosen by the target user is the same as the textures type determined here according to the scene characteristic, the color information and brightness information of the added first textures element are adjusted directly so that they match the target color information and target brightness information. When the textures type of the first textures element chosen by the target user differs from the textures type determined here according to the scene characteristic, the textures type of the chosen first textures element does not suit the scene characteristic, and the first textures element can be switched directly to the second target textures element that does suit it. The attributes of an added textures element can thus be tuned flexibly, or the textures element switched, according to the scene characteristic of the preview image, so that the textures element added in the preview image is consistent with the actual content of the preview image. The embodiment of the present invention can adjust textures elements in real time according to environmental changes, endowing AI textures with scene attributes and providing special optimization for particular scenes.
By means of the above technical solution of the embodiment of the present invention, textures elements can change automatically according to the scene and the biological features of the face, and the change is not limited to switching between stickers of different types; it also includes changes in the brightness, color, size, and combination of the chosen stickers. By detecting in real time the default biological feature of the user in the preview image and the scene characteristic of the preview image, diversified real-time change of textures elements is realized, allowing the user to obtain a more personalized textures element design effect.
Referring to Fig. 4, a block diagram of the image processing apparatus of one embodiment of the invention is shown. The image processing apparatus of the embodiment of the present invention can realize the details of the image processing method in the above embodiment and achieve the same effect. The image processing apparatus shown in Fig. 4 includes:
First determining module 401, for determining the first textures element chosen by the target user;
Adding module 402, for adding the first textures element on preview image;
Acquisition module 403, for acquiring the default biological information of the preview image;
The first adjustment module 404, for adjusting the first textures element according to the default biological information.
In this way, after the user chooses a first textures element for the preview image, the embodiment of the present invention can adjust the chosen first textures element according to the default biological information of the user in the preview image. Then, after the default biological information of the user in the preview image changes, the apparatus of the embodiment of the present invention can adjust the textures element added in the preview image in real time according to that change, without the user manually adjusting the textures element each time, so that the added first textures element can be adjusted automatically according to the actual content of the preview image.
Optionally, the device further includes:
Second determining module, for determining, according to the usage record data of multiple users for textures elements, the textures styles of the textures elements, and the textures types to which the textures elements belong, the textures type that each user likes among the multiple textures types, and, among the multiple textures styles of the textures type that each user likes, determining the textures style that each user likes;
Third determining module, for determining, according to the textures type that each user likes, the first target textures type that the target user likes;
Fourth determining module, for determining, according to the textures style that each user likes, the first target textures style that the target user likes;
Pushing module, for pushing the multiple first textures elements of the first target textures style of the first target textures type to the image preview interface;
The first determining module 401 is also used to determine the first textures element chosen by the target user among the multiple pushed first textures elements.
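One plausible way for these modules to derive the preferred type and style from usage records is a simple frequency count (a sketch under the assumption that each record carries a type and style field; the field names and counting logic are invented for illustration, not specified by the patent):

```python
from collections import Counter

def preferred_type_and_style(records):
    """Pick the most frequently used textures type, then the most
    frequently used textures style within that type."""
    top_type = Counter(r["type"] for r in records).most_common(1)[0][0]
    top_style = Counter(r["style"] for r in records
                        if r["type"] == top_type).most_common(1)[0][0]
    return top_type, top_style

records = [
    {"type": "makeup", "style": "cute"},
    {"type": "makeup", "style": "cute"},
    {"type": "filter", "style": "night"},
]
print(preferred_type_and_style(records))  # → ('makeup', 'cute')
```

Restricting the style count to records of the top type matches the description: the liked style is determined among the styles of the liked type.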
Optionally, the first adjustment module 404 is also used to switch the first textures element to a second textures element according to the default biological information, and/or to add a second textures element to the preview image according to the default biological information, and/or to adjust an attribute of the first textures element according to the default biological information.
Optionally, the first adjustment module 404 includes:
First determining submodule, for determining, when the default biological information includes a human face expression feature, the target expression type information that matches the human face expression feature;
Second determining submodule, for determining the target color information that matches the target expression type information;
Adjusting submodule, for adjusting the color information of the first textures element to the target color information.
Optionally, the first adjustment module 404 includes:
Third determining submodule, for determining, when the default biological information includes face features, the age attribute that matches the face features;
Fourth determining submodule, for determining, in the multiple textures types, the second target textures type that matches the age attribute;
Fifth determining submodule, for determining, in the multiple textures styles of the second target textures type, the second target textures style that matches the age attribute;
Pushing submodule, for pushing the multiple second textures elements if the first textures element is not included in the multiple second textures elements of the second target textures style;
Sixth determining submodule, for determining the second textures element that is selected in the multiple second textures elements;
Switching submodule, for switching the first textures element in the preview image to the selected second textures element.
Optionally, the first adjustment module 404 includes:
Seventh determining submodule, for determining, when the default biological information includes a hair feature, the hairstyle information and hair-color information of the target user according to the hair feature;
Eighth determining submodule, for determining, in the multiple textures types, the second target textures type that matches the hairstyle information;
Ninth determining submodule, for determining, in the multiple textures styles of the second target textures type, the second target textures style that matches the hairstyle information;
Tenth determining submodule, for determining, in the multiple second textures elements of the second target textures style, the second target textures element that matches the hair-color information;
Adding submodule, for adding the second target textures element to the reference object in the preview image.
Optionally, the device further includes:
Acquisition module, for acquiring the scene characteristic of the preview image;
Fifth determining module, for determining, in the multiple textures types, the second target textures type that matches the scene characteristic;
Sixth determining module, for determining the target color information and target brightness information that match the scene characteristic;
Searching module, for searching, in the multiple second textures elements of the second target textures type, for the second target textures element whose color information matches the target color information and whose brightness information matches the target brightness information;
Second adjustment module, for adjusting, if the textures type of the first textures element is the same as the second target textures type, the color of the first textures element to a color matching the target color information and the brightness of the first textures element to a brightness matching the target brightness information;
Switching module, for switching the first textures element to the second target textures element if the textures type of the first textures element differs from the second target textures type.
The image processing apparatus provided in the embodiment of the present invention can realize each process realized in the above image processing method embodiments; to avoid repetition, details are not described again here.
Fig. 5 is a schematic diagram of a hardware structure of a mobile terminal for realizing each embodiment of the present invention.
The mobile terminal 500 includes but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, a power supply 511, and other components. It will be understood by those skilled in the art that the mobile terminal structure shown in Fig. 5 does not constitute a limitation on the mobile terminal; the mobile terminal may include more or fewer components than illustrated, combine certain components, or have a different component layout. In the embodiment of the present invention, the mobile terminal includes but is not limited to a mobile phone, a tablet computer, a laptop, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 510 is configured to determine the first textures element chosen by the target user; add the first textures element on the preview image; acquire the default biological information of the preview image; and adjust the first textures element according to the default biological information.
In this way, after the user chooses a first textures element for the preview image, the embodiment of the present invention can adjust the chosen first textures element according to the default biological information of the user in the preview image. Then, after the default biological information of the user in the preview image changes, the mobile terminal of the embodiment of the present invention can adjust the textures element added in the preview image in real time according to that change, without the user manually adjusting the textures element each time, so that the added first textures element can be adjusted automatically according to the actual content of the preview image.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 can be used for receiving and sending signals during information transmission and reception or during a call; specifically, after receiving downlink data from a base station, it delivers the data to the processor 510 for processing, and it also sends uplink data to the base station. In general, the radio frequency unit 501 includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 502, for example helping the user send and receive e-mail, browse web pages, access streaming media, and so on.
The audio output unit 503 can convert audio data received by the radio frequency unit 501 or the network module 502, or stored in the memory 509, into an audio signal and output it as sound. Moreover, the audio output unit 503 can also provide audio output related to a specific function executed by the mobile terminal 500 (for example, a call signal reception sound, a message reception sound, and so on). The audio output unit 503 includes a loudspeaker, a buzzer, a receiver, and the like.
The input unit 504 is used for receiving audio or video signals. The input unit 504 may include a graphics processing unit (GPU) 5041 and a microphone 5042. The graphics processor 5041 processes image data of static pictures or video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processor 5041 may be stored in the memory 509 (or another storage medium) or sent via the radio frequency unit 501 or the network module 502. The microphone 5042 can receive sound and can process such sound into audio data. In a telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 501 and output.
The mobile terminal 500 further includes at least one sensor 505, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 5061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 5061 and/or the backlight when the mobile terminal 500 is moved to the ear. As a kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes) and can detect the magnitude and direction of gravity when static; it can be used to identify the posture of the mobile terminal (such as horizontal/vertical screen switching, related games, magnetometer pose calibration) and for vibration-identification-related functions (such as a pedometer or tapping). The sensor 505 can also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like; details are not described here.
The display unit 506 is used for displaying information input by the user or information provided to the user. The display unit 506 may include a display panel 5061, and the display panel 5061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 507 can be used for receiving input numeric or character information and generating key signal input related to user settings and function control of the mobile terminal. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. The touch panel 5071, also referred to as a touch screen, collects touch operations by the user on or near it (for example, operations by the user on or near the touch panel 5071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 5071 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 510, and receives and executes the commands sent by the processor 510. Furthermore, the touch panel 5071 can be realized in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 5071, the user input unit 507 can also include other input devices 5072. Specifically, the other input devices 5072 can include but are not limited to a physical keyboard, function keys (such as a volume control button and a switch button), a trackball, a mouse, and a joystick; details are not described here.
Further, the touch panel 5071 can be overlaid on the display panel 5061. After the touch panel 5071 detects a touch operation on or near it, it transmits the operation to the processor 510 to determine the type of the touch event, and the processor 510 then provides corresponding visual output on the display panel 5061 according to the type of the touch event. Although in Fig. 5 the touch panel 5071 and the display panel 5061 realize the input and output functions of the mobile terminal as two independent components, in some embodiments the touch panel 5071 and the display panel 5061 can be integrated to realize the input and output functions of the mobile terminal, which is not specifically limited here.
Interface unit 508 is the interface that external device (ED) is connect with mobile terminal 500.For example, external device (ED) may include having Line or wireless head-band earphone port, external power supply (or battery charger) port, wired or wireless data port, storage card end Mouth, port, the port audio input/output (I/O), video i/o port, earphone end for connecting the device with identification module Mouthful etc..Interface unit 508 can be used for receiving the input (for example, data information, electric power etc.) from external device (ED) and By one or more elements that the input received is transferred in mobile terminal 500 or can be used in 500 He of mobile terminal Data are transmitted between external device (ED).
The memory 509 may be used to store software programs and various data. The memory 509 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function (such as a sound playback function or an image playback function), and the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book). In addition, the memory 509 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
The processor 510 is the control center of the mobile terminal. It connects all parts of the entire mobile terminal through various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 509 and calling data stored in the memory 509, thereby monitoring the mobile terminal as a whole. The processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 510.
The mobile terminal 500 may also include a power supply 511 (such as a battery) that supplies power to the various components. Preferably, the power supply 511 may be logically connected to the processor 510 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system.
In addition, the mobile terminal 500 includes some functional modules that are not shown, which are not described in detail here.
Preferably, an embodiment of the present invention also provides a mobile terminal, including a processor 510, a memory 509, and a computer program stored on the memory 509 and executable on the processor 510. When executed by the processor 510, the computer program implements each process of the above image processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again.
An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the above image processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again. The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, in this document, the terms "include" and "comprise", or any other variant thereof, are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes that element.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention in essence, or the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to cause a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the invention is not limited to the above specific embodiments. The above specific embodiments are merely illustrative rather than restrictive. Under the inspiration of the present invention, those skilled in the art can also make many further forms without departing from the scope protected by the purpose of the present invention and the claims, all of which fall within the protection of the present invention.

Claims (15)

1. An image processing method, characterized in that the method comprises:
determining a first textures element selected by a target user;
adding the first textures element to a preview image;
acquiring preset biometric feature information from the preview image;
adjusting the first textures element according to the preset biometric feature information.
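The four steps of claim 1 can be sketched as a simple pipeline. The class names, the feature-extraction stub, and the smile-to-color rule below are illustrative assumptions for the sketch, not part of the claimed method:

```python
from dataclasses import dataclass, field

@dataclass
class TexturesElement:
    name: str
    color: str = "default"

@dataclass
class PreviewImage:
    elements: list = field(default_factory=list)

def extract_biometric_features(image):
    # Stub: a real implementation would run face/hair detection on the frame.
    return {"expression": "smile"}

def process_image(image, chosen_element):
    # Steps 1-2: determine the user's chosen first textures element and add it.
    image.elements.append(chosen_element)
    # Step 3: acquire the preset biometric feature information from the preview.
    features = extract_biometric_features(image)
    # Step 4: adjust the first textures element according to those features.
    if features.get("expression") == "smile":
        chosen_element.color = "warm-yellow"
    return image

img = process_image(PreviewImage(), TexturesElement("cat-ears"))
print(img.elements[0].color)  # -> warm-yellow
```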
2. The method according to claim 1, characterized in that before the determining of the first textures element selected by the target user, the method further comprises:
according to usage record data of textures elements by multiple users, the textures styles of the textures elements, and the textures types to which the textures elements belong, determining, among multiple textures types, the textures type each user prefers, and determining, among the multiple textures styles of the textures type each user prefers, the textures style each user prefers;
determining, according to the textures type each user prefers, a first target textures type preferred by the target user;
determining, according to the textures style each user prefers, a first target textures style preferred by the target user;
pushing multiple first textures elements of the first target textures style of the first target textures type to an image preview interface;
wherein the determining of the textures element selected by the target user comprises:
determining the first textures element selected by the target user among the multiple pushed first textures elements.
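The preference mining in claim 2 can be sketched as frequency counting over usage records: first the most-used textures type for the target user, then the most-used style within that type. The record format and sample data are assumptions for illustration:

```python
from collections import Counter

# Hypothetical usage records: one (user, textures_type, textures_style) per use.
records = [
    ("alice", "animal", "cartoon"),
    ("alice", "animal", "cartoon"),
    ("alice", "animal", "realistic"),
    ("alice", "text", "retro"),
]

def favorite_type_and_style(user, records):
    # First target textures type: the type this user has used most often.
    type_counts = Counter(t for u, t, s in records if u == user)
    fav_type = type_counts.most_common(1)[0][0]
    # First target textures style: the most-used style within that type.
    style_counts = Counter(s for u, t, s in records if u == user and t == fav_type)
    fav_style = style_counts.most_common(1)[0][0]
    return fav_type, fav_style

print(favorite_type_and_style("alice", records))  # -> ('animal', 'cartoon')
```

Elements of the resulting (type, style) pair would then be pushed to the image preview interface for the user to choose from.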
3. The method according to claim 1, characterized in that the adjusting of the first textures element according to the preset biometric feature information comprises:
switching the first textures element to a second textures element according to the preset biometric feature information;
and/or
adding a second textures element to the preview image according to the preset biometric feature information;
and/or
adjusting an attribute of the first textures element according to the preset biometric feature information.
4. The method according to claim 1, characterized in that when the preset biometric feature information includes a facial expression feature, the adjusting of the first textures element according to the preset biometric feature information comprises:
determining target expression type information that matches the facial expression feature;
determining target color information that matches the target expression type information;
adjusting the color information of the first textures element to the target color information.
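Claim 4 is a two-stage lookup: expression feature to expression type, then expression type to color. The tables below are illustrative assumptions; the patent does not specify concrete mappings:

```python
# Assumed lookup tables for the sketch.
EXPRESSION_TYPES = {"smile": "happy", "frown": "sad"}
EXPRESSION_COLORS = {"happy": "#FFD54F", "sad": "#5C6BC0"}

def adjust_element_color(element_color, face_expression_feature):
    # Match the facial expression feature to a target expression type,
    # then map that type to target color information; fall back to the
    # element's current color if no match is found.
    expr_type = EXPRESSION_TYPES.get(face_expression_feature)
    return EXPRESSION_COLORS.get(expr_type, element_color)

print(adjust_element_color("#000000", "smile"))  # -> #FFD54F
```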
5. The method according to claim 1, characterized in that when the preset biometric feature information includes a facial feature, the adjusting of the first textures element according to the preset biometric feature information comprises:
determining an age attribute that matches the facial feature;
determining, among multiple textures types, a second target textures type that matches the age attribute;
determining, among multiple textures styles of the second target textures type, a second target textures style that matches the age attribute;
if multiple second textures elements of the second target textures style do not include the first textures element, pushing the multiple second textures elements;
determining the second textures element selected among the multiple second textures elements;
switching the first textures element in the preview image to the selected second textures element.
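The age-based branch of claim 5 can be sketched as chained lookups with a membership check: only when the current element is absent from the age-matched element set are candidates pushed for the user to pick from. All table contents are assumptions:

```python
# Assumed mappings from age attribute to type, style, and element set.
AGE_TO_TYPE = {"child": "cartoon-animal", "adult": "minimal"}
TYPE_TO_STYLE = {"cartoon-animal": "bright", "minimal": "muted"}
STYLE_ELEMENTS = {"bright": ["bunny", "puppy"], "muted": ["line-art"]}

def candidate_elements(age_attribute, current_element):
    # Derive the second target type and style from the age attribute,
    # then push the style's element set only if the current first
    # textures element is not already in it (claim 5's condition).
    target_type = AGE_TO_TYPE[age_attribute]
    target_style = TYPE_TO_STYLE[target_type]
    elements = STYLE_ELEMENTS[target_style]
    return elements if current_element not in elements else None

print(candidate_elements("child", "line-art"))  # -> ['bunny', 'puppy']
```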
6. The method according to claim 1, characterized in that when the preset biometric feature information includes a hair feature, the adjusting of the first textures element according to the preset biometric feature information comprises:
determining hairstyle information and hair color information of the target user according to the hair feature;
determining, among multiple textures types, a second target textures type that matches the hairstyle information;
determining, among multiple textures styles of the second target textures type, a second target textures style that matches the hairstyle information;
determining, among multiple second textures elements of the second target textures style, a second target textures element that matches the hair color information;
adding the second target textures element to the photographed subject in the preview image.
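In claim 6, the hairstyle drives the choice of type and style while the hair color selects the final element. A minimal sketch, with all mapping tables assumed:

```python
# Assumed mappings: hairstyle selects type and style, hair color selects element.
HAIRSTYLE_TO_TYPE = {"long": "headwear", "short": "cap"}
TYPE_TO_STYLE = {"headwear": "floral", "cap": "sporty"}
STYLE_ELEMENTS = {
    "floral": {"black": "dark-rose", "blonde": "sunflower"},
    "sporty": {"black": "navy-cap", "blonde": "white-cap"},
}

def pick_hair_element(hairstyle, hair_color):
    # Hairstyle information -> second target textures type -> style;
    # hair color information -> matching second target textures element.
    style = TYPE_TO_STYLE[HAIRSTYLE_TO_TYPE[hairstyle]]
    return STYLE_ELEMENTS[style][hair_color]

print(pick_hair_element("long", "blonde"))  # -> sunflower
```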
7. The method according to claim 1, characterized in that after the adding of the first textures element to the preview image, the method further comprises:
collecting a scene feature of the preview image;
determining, among multiple textures types, a second target textures type that matches the scene feature;
determining target color information and target brightness information that match the scene feature;
searching, among multiple second textures elements of the second target textures type, for a second target textures element whose color information matches the target color information and whose brightness information matches the target brightness information;
if the textures type of the first textures element is the same as the second target textures type, adjusting the color of the first textures element to a color matching the target color information, and adjusting the brightness of the first textures element to a brightness matching the target brightness information;
if the textures type of the first textures element is different from the second target textures type, switching the first textures element to the second target textures element.
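Claim 7's two branches reduce to a type comparison: same type means adjust the existing element's color and brightness in place; different type means switch to the scene-matched element. The scene lookup tables are illustrative assumptions:

```python
def reconcile_with_scene(first_type, first_element, scene):
    # Assumed scene lookups for the second target type, color, and brightness.
    second_type = {"beach": "summer", "snow": "winter"}[scene]
    target_color = {"beach": "aqua", "snow": "white"}[scene]
    target_brightness = {"beach": 0.9, "snow": 0.7}[scene]
    if first_type == second_type:
        # Same type: keep the first element, adjust its color and brightness.
        first_element.update(color=target_color, brightness=target_brightness)
        return first_element
    # Different type: switch to a second target element of the scene's type.
    return {"type": second_type, "color": target_color,
            "brightness": target_brightness}

elem = {"type": "summer", "color": "red", "brightness": 0.5}
print(reconcile_with_scene("summer", elem, "beach"))
# -> {'type': 'summer', 'color': 'aqua', 'brightness': 0.9}
```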
8. An image processing apparatus, characterized in that the apparatus comprises:
a first determining module, configured to determine a first textures element selected by a target user;
an adding module, configured to add the first textures element to a preview image;
an acquisition module, configured to acquire preset biometric feature information from the preview image;
a first adjustment module, configured to adjust the first textures element according to the preset biometric feature information.
9. The apparatus according to claim 8, characterized in that the apparatus further comprises:
a second determining module, configured to determine, according to usage record data of textures elements by multiple users, the textures styles of the textures elements, and the textures types to which the textures elements belong, the textures type each user prefers among multiple textures types, and to determine, among the multiple textures styles of the textures type each user prefers, the textures style each user prefers;
a third determining module, configured to determine, according to the textures type each user prefers, a first target textures type preferred by the target user;
a fourth determining module, configured to determine, according to the textures style each user prefers, a first target textures style preferred by the target user;
a pushing module, configured to push multiple first textures elements of the first target textures style of the first target textures type to an image preview interface;
wherein the first determining module is further configured to determine the first textures element selected by the target user among the multiple pushed first textures elements.
10. The apparatus according to claim 8, characterized in that
the first adjustment module is further configured to switch the first textures element to a second textures element according to the preset biometric feature information, and/or to add a second textures element to the preview image according to the preset biometric feature information, and/or to adjust an attribute of the first textures element according to the preset biometric feature information.
11. The apparatus according to claim 8, characterized in that the first adjustment module comprises:
a first determining submodule, configured to determine, when the preset biometric feature information includes a facial expression feature, target expression type information that matches the facial expression feature;
a second determining submodule, configured to determine target color information that matches the target expression type information;
an adjustment submodule, configured to adjust the color information of the first textures element to the target color information.
12. The apparatus according to claim 8, characterized in that the first adjustment module comprises:
a third determining submodule, configured to determine, when the preset biometric feature information includes a facial feature, an age attribute that matches the facial feature;
a fourth determining submodule, configured to determine, among multiple textures types, a second target textures type that matches the age attribute;
a fifth determining submodule, configured to determine, among multiple textures styles of the second target textures type, a second target textures style that matches the age attribute;
a pushing submodule, configured to push multiple second textures elements of the second target textures style if the multiple second textures elements do not include the first textures element;
a sixth determining submodule, configured to determine the second textures element selected among the multiple second textures elements;
a switching submodule, configured to switch the first textures element in the preview image to the selected second textures element.
13. The apparatus according to claim 8, characterized in that the first adjustment module comprises:
a seventh determining submodule, configured to determine, when the preset biometric feature information includes a hair feature, hairstyle information and hair color information of the target user according to the hair feature;
an eighth determining submodule, configured to determine, among multiple textures types, a second target textures type that matches the hairstyle information;
a ninth determining submodule, configured to determine, among multiple textures styles of the second target textures type, a second target textures style that matches the hairstyle information;
a tenth determining submodule, configured to determine, among multiple second textures elements of the second target textures style, a second target textures element that matches the hair color information;
an adding submodule, configured to add the second target textures element to the photographed subject in the preview image.
14. The apparatus according to claim 8, characterized in that the apparatus further comprises:
a collecting module, configured to collect a scene feature of the preview image;
a fifth determining module, configured to determine, among multiple textures types, a second target textures type that matches the scene feature;
a sixth determining module, configured to determine target color information and target brightness information that match the scene feature;
a searching module, configured to search, among multiple second textures elements of the second target textures type, for a second target textures element whose color information matches the target color information and whose brightness information matches the target brightness information;
a second adjustment module, configured to, if the textures type of the first textures element is the same as the second target textures type, adjust the color of the first textures element to a color matching the target color information and adjust the brightness of the first textures element to a brightness matching the target brightness information;
a switching module, configured to, if the textures type of the first textures element is different from the second target textures type, switch the first textures element to the second target textures element.
15. A mobile terminal, characterized by comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein when the computer program is executed by the processor, the steps of the image processing method according to any one of claims 1 to 7 are implemented.
CN201811217342.3A 2018-10-18 2018-10-18 A kind of image processing method and device Pending CN109361852A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811217342.3A CN109361852A (en) 2018-10-18 2018-10-18 A kind of image processing method and device


Publications (1)

Publication Number Publication Date
CN109361852A true CN109361852A (en) 2019-02-19

Family

ID=65345778

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811217342.3A Pending CN109361852A (en) 2018-10-18 2018-10-18 A kind of image processing method and device

Country Status (1)

Country Link
CN (1) CN109361852A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014149674A (en) * 2013-02-01 2014-08-21 Furyu Kk Image editing device, image editing method, and program
CN106210545A (en) * 2016-08-22 2016-12-07 北京金山安全软件有限公司 Video shooting method and device and electronic equipment
CN106803057A (en) * 2015-11-25 2017-06-06 腾讯科技(深圳)有限公司 Image information processing method and device
CN107909629A (en) * 2017-11-06 2018-04-13 广东欧珀移动通信有限公司 Recommendation method, apparatus, storage medium and the terminal device of paster
CN108322802A (en) * 2017-12-29 2018-07-24 广州市百果园信息技术有限公司 Stick picture disposing method, computer readable storage medium and the terminal of video image
CN108537749A (en) * 2018-03-29 2018-09-14 广东欧珀移动通信有限公司 Image processing method, device, mobile terminal and computer readable storage medium


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109977868A (en) * 2019-03-26 2019-07-05 深圳市商汤科技有限公司 Image rendering method and device, electronic equipment and storage medium
CN110413818A (en) * 2019-07-31 2019-11-05 腾讯科技(深圳)有限公司 Paster recommended method, device, computer readable storage medium and computer equipment
CN112804440A (en) * 2019-11-13 2021-05-14 北京小米移动软件有限公司 Method, device and medium for processing image
CN111182203A (en) * 2019-11-19 2020-05-19 广东小天才科技有限公司 Method for guiding user to take pictures and intelligent sound box
CN111182203B (en) * 2019-11-19 2022-07-29 广东小天才科技有限公司 Method for guiding user to take pictures and intelligent sound box
CN111275607A (en) * 2020-01-17 2020-06-12 腾讯科技(深圳)有限公司 Interface display method and device, computer equipment and storage medium
CN112148404A (en) * 2020-09-24 2020-12-29 游艺星际(北京)科技有限公司 Head portrait generation method, apparatus, device and storage medium
CN112148404B (en) * 2020-09-24 2024-03-19 游艺星际(北京)科技有限公司 Head portrait generation method, device, equipment and storage medium
WO2023040633A1 (en) * 2021-09-14 2023-03-23 北京字跳网络技术有限公司 Video generation method and apparatus, and terminal device and storage medium

Similar Documents

Publication Publication Date Title
CN109361852A (en) A kind of image processing method and device
KR20150079804A (en) Image processing method and apparatus, and terminal device
CN109660728B (en) Photographing method and device
CN107832784A (en) A kind of method of image beautification and a kind of mobile terminal
CN110072012A (en) A kind of based reminding method and mobile terminal for screen state switching
CN111797249A (en) Content pushing method, device and equipment
CN108600647A (en) Shooting preview method, mobile terminal and storage medium
CN108154121A (en) Cosmetic auxiliary method, smart mirror and storage medium based on smart mirror
CN109508399A (en) A kind of facial expression image processing method, mobile terminal
CN109871125A (en) A kind of display control method and terminal device
CN109190509A (en) A kind of personal identification method, device and computer readable storage medium
CN108198162A (en) Photo processing method, mobile terminal, server, system, storage medium
CN109062411A (en) A kind of screen luminance adjustment method and mobile terminal
CN111797304A (en) Content pushing method, device and equipment
CN109819167A (en) A kind of image processing method, device and mobile terminal
CN109167914A (en) A kind of image processing method and mobile terminal
CN110490897A (en) Imitate the method and electronic equipment that video generates
CN108462826A (en) A kind of method and mobile terminal of auxiliary photo-taking
CN108848309A (en) A kind of camera programm starting method and mobile terminal
CN110379428A (en) A kind of information processing method and terminal device
CN109448069A (en) A kind of template generation method and mobile terminal
CN109684544A (en) One kind, which is worn, takes recommended method and terminal device
CN109257649A (en) A kind of multimedia file producting method and terminal device
CN108763475A (en) A kind of method for recording, record device and terminal device
CN108959585A (en) A kind of expression picture acquisition methods and terminal device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190219