CN107578459A - Method and device for embedding an expression into input method candidates - Google Patents
Method and device for embedding an expression into input method candidates
- Publication number: CN107578459A (application CN201710774726.4A)
- Authority: CN (China)
- Prior art keywords: expression, face, target image, image, facial image
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Classification: Image Analysis (AREA)
Abstract
An embodiment of the present application provides a method and device for embedding an expression into input method candidates. The method includes: obtaining a target image; obtaining a facial image in the target image; determining, according to face key point information of the facial image, an expression template that matches the facial image, and splicing the facial image with the expression template, with the spliced picture serving as an expression candidate item; and establishing a correspondence between an input sequence of the input method application and the expression candidate item, so that the expression candidate item is displayed when input of the input sequence is detected. It can be seen that the technical solution can splice a facial image with an expression template to generate a new expression, associate the expression with an input sequence of the input method application, and display the expression when input of that sequence is detected. Compared with the prior art, the embodiment of the present application can generate expression candidate items for the input method application based on facial images, which enriches the sources of expression candidate items and meets users' personalized demands for expression candidate items.
Description
Technical field
The present application relates to the field of computer technology, and in particular to a method and device for embedding an expression into input method candidates.
Background technology
With the rapid development of network technology, the variety and functionality of network applications keep growing. Taking input method applications as an example, an input method application can support input modes such as pinyin, stroke, wubi (five-stroke), handwriting, intelligent English input, intelligent voice input, and multimedia input. In addition, current input method applications can also output a variety of expression candidate items; however, the sources of the expression candidate items they output are relatively limited and cannot meet users' personalized demands for expression candidate items.
Summary of the invention
The purpose of the embodiments of the present application is to provide a method and device for embedding an expression into input method candidates, so as to meet users' personalized demands for expression candidate items in input method applications.
In order to solve the above technical problem, the embodiments of the present application are implemented as follows:
According to a first aspect of the embodiments of the present application, there is provided a method for embedding an expression into input method candidates, applied to a terminal device, the method including:
obtaining a target image;
obtaining a facial image in the target image;
determining, according to face key point information of the facial image, an expression template that matches the facial image, and splicing the facial image with the expression template, with the spliced picture serving as an expression candidate item;
establishing a correspondence between an input sequence of the input method application and the expression candidate item, so as to display the expression candidate item when input of the input sequence is detected.
In an embodiment of the present application, the determining, according to the face key point information of the facial image, an expression template that matches the facial image includes:
determining, according to the face key point information of the facial image, a label corresponding to the facial image, where the label includes at least one of the following: gender, age, habit, and expression;
determining an expression template that matches the label.
In an embodiment of the present application, the method further includes:
establishing a correspondence between an expression access entrance of the input method application and the expression candidate item, so that the expression candidate item can be accessed through the expression access entrance.
In an embodiment of the present application, the obtaining a facial image in the target image includes:
detecting whether there is a face in the target image;
if there is a face in the target image, extracting face key point information from the target image;
determining, according to the face key point information, a face range in the target image;
performing image cropping with a mask based on the face range to obtain the facial image.
In an embodiment of the present application, there are multiple faces in the target image; and
the determining, according to the face key point information, a face range in the target image includes:
determining, according to the face key point information, the range of the face with the most prominent facial features in the target image.
According to a second aspect of the embodiments of the present application, there is provided a device for embedding an expression into input method candidates, applied to a terminal device, the device including:
a target image acquisition module, configured to obtain a target image;
a facial image acquisition module, configured to obtain a facial image in the target image;
an expression template determining module, configured to determine, according to face key point information of the facial image, an expression template that matches the facial image;
an expression candidate item generation module, configured to splice the facial image with the matching expression template, with the spliced picture serving as an expression candidate item;
a candidate item embedding module, configured to establish a correspondence between an input sequence of the input method application and the expression candidate item, so as to display the expression candidate item when input of the input sequence is detected.
In an embodiment of the present application, the expression template determining module includes:
a label determination submodule, configured to determine, according to the face key point information of the facial image, a label corresponding to the facial image, where the label includes at least one of the following: gender, age, habit, and expression;
an expression template determination submodule, configured to determine an expression template that matches the label.
In an embodiment of the present application, the device further includes:
a relation establishing module, configured to establish a correspondence between an expression access entrance of the input method application and the expression candidate item, so that the expression candidate item can be accessed through the expression access entrance.
In an embodiment of the present application, the facial image acquisition module includes:
a detection submodule, configured to detect whether there is a face in the target image;
a face key point information extraction submodule, configured to extract face key point information from the target image when the detection submodule detects that there is a face in the target image;
a face range determination submodule, configured to determine, according to the face key point information, a face range in the target image;
an image cropping submodule, configured to perform image cropping with a mask based on the face range to obtain the facial image.
In an embodiment of the present application, there are multiple faces in the target image; and the face range determination submodule includes:
a face range determining unit, configured to determine, according to the face key point information, the range of the face with the most prominent facial features in the target image.
According to a third aspect of the embodiments of the present application, there is provided an electronic device, including:
a processor; and
a memory arranged to store computer-executable instructions, where the executable instructions, when executed, cause the processor to perform the steps of any of the foregoing methods for embedding an expression into input method candidates.
According to a fourth aspect of the embodiments of the present application, there is provided a computer-readable storage medium storing one or more programs, where the one or more programs, when executed by an electronic device that includes multiple application programs, cause the electronic device to perform the steps of any of the foregoing methods for embedding an expression into input method candidates.
As can be seen from the technical solution provided by the above embodiments of the present application, the embodiments of the present application can splice a user's facial image with an expression template to generate a new expression, and associate the expression with an input sequence of the input method application, so that the expression is displayed when input of the input sequence is detected. Compared with the prior art, the embodiments of the present application can generate expression candidate items for the input method application based on facial images, which enriches the sources of expression candidate items and meets users' personalized demands for expression candidate items in input method applications.
Brief description of the drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Apparently, the drawings in the following description are only some of the embodiments described in this specification, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of a method for embedding an expression into input method candidates according to an embodiment of the present application;
Fig. 2 is a scene diagram of a method for embedding an expression into input method candidates according to an embodiment of the present application;
Fig. 3 is a flowchart of a method for embedding an expression into input method candidates according to another embodiment of the present application;
Fig. 4 is a scene diagram of a target image obtaining step according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a device for embedding an expression into input method candidates according to an embodiment of the present application.
Embodiment
In order to enable those skilled in the art to better understand the technical solutions in this specification, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings of the embodiments. Apparently, the described embodiments are only some, rather than all, of the embodiments of this specification. Based on the embodiments in this specification, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of this specification.
The embodiments of the present application provide a method and device for embedding an expression into input method candidates.
The method for embedding an expression into input method candidates provided by the embodiments of the present application is introduced first below.
It should be noted that the method provided by the embodiments of the present application is applied to a terminal device. In practical applications, the terminal device may include a smartphone, a tablet computer, a smart watch, a notebook or desktop computer, and the like; the embodiments of the present application are not limited in this regard.
Fig. 1 is a flowchart of a method for embedding an expression into input method candidates according to an embodiment of the present application. As shown in Fig. 1, the method may include the following steps:
In step 101, a target image is obtained.
In the embodiments of the present application, one or more images may be pre-stored on the terminal device. In an optional implementation, the above step 101 may include:
obtaining the target image from the images locally stored on the terminal device.
In this implementation, one image may be selected from the images locally stored on the terminal device as the target image. Specifically, an image may be selected according to an image selection instruction triggered by the user; alternatively, an image may be selected at random from the locally stored images as the target image. The embodiments of the present application are not limited in this regard.
It should be noted that the images pre-stored on the terminal device may come from the camera of the terminal device, or may come from other equipment such as a server or another terminal device; the embodiments of the present application are not limited in this regard.
In another optional implementation, the target image may also be obtained through the camera of the terminal device, in which case the above step 101 may include:
obtaining the target image through the camera of the terminal device.
In this implementation, the target image may be obtained through a camera built into the terminal device; specifically, the target image may be obtained through the built-in front camera or through the built-in rear camera. Alternatively, the target image may be obtained through an external camera of the terminal device; specifically, the camera may be connected to the terminal device by wire or wirelessly. In practical applications, the wireless connection may include Wireless Fidelity (WiFi), Bluetooth, or ZigBee; the embodiments of the present application are not limited in this regard.
In another optional implementation, the target image may also be obtained by the terminal device from other equipment, in which case the above step 101 may include:
obtaining the target image from a server; or
obtaining the target image from another terminal device.
It should be noted that the content of the target image in the embodiments of the present application may be a person, a landscape, and so on; the embodiments of the present application are not limited in this regard.
In step 102, a facial image in the target image is obtained.
In the embodiments of the present application, when the content of the target image includes a person, the facial image of that person is extracted.
In an optional implementation, the extraction of the facial image may be performed entirely on the terminal device side, in which case the above step 102 may include steps S10, S11, S12, and S13, where:
In S10, it is detected whether there is a face in the target image.
In this implementation, after the target image is obtained, it may first be detected whether there is a face in the target image. If a face is detected in the target image, S11 is performed; if no face is detected in the target image, the subsequent steps are not performed.
For a terminal device running the Android system (a free, open-source, Linux-based operating system led and developed by Google and the Open Handset Alliance, used mainly for mobile devices such as smartphones and tablet computers), the face in the target image can be detected through FaceDetector, the face recognition API built into the Android system. It should be noted that FaceDetector can perform face recognition with only a small amount of code, but this recognition is of the most basic kind: it can only identify whether there is a face in an image and cannot perform more precise recognition (for example, determining the identity of the face).
For a terminal device running the iOS system (the mobile operating system developed by Apple Inc.), the face in the target image can be detected through CoreImage, the face recognition API built into the iOS system.
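The S10 branching above (continue only when a face is detected, otherwise skip all subsequent steps) can be sketched as follows. The detector is a stand-in for a platform API such as Android's FaceDetector or iOS's CoreImage, and the dict-based image representation is an assumption for illustration, not the patent's data format.

```python
def detect_faces(image):
    """Stand-in for a platform face detector (e.g. Android FaceDetector,
    iOS CoreImage). The image is a plain dict here; any pre-attached
    'faces' entries play the role of detection results."""
    return image.get("faces", [])

def extract_face_keypoints(image):
    """S10/S11: if no face is detected, abort; otherwise hand the first
    face's key points to the subsequent steps."""
    faces = detect_faces(image)
    if not faces:
        return None  # no face in the target image: skip subsequent steps
    return faces[0]["keypoints"]
```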
In S11, if there is a face in the target image, face key point information is extracted from the target image.
In the embodiments of the present application, face key points may include the facial contour and the contours of the eyes, eyebrows, lips, and nose.
In practical applications, TensorFlow may be used to extract face key point information from the target image. In addition, other techniques may also be used to extract face key point information from the target image, for example: projection-based techniques (facial gray values are low compared with the gray values of the surrounding skin); techniques based on prior rules (prior knowledge refers to known gray-scale and shape information about facial parts such as the eyelids and the iris); geometry-based techniques (for example, deformable templates can detect the shapes of eye and lip features well, but because edges are imprecise it is difficult to position feature points accurately, the dependence on initial parameters is large, the method easily falls into local minima, and computation time is long); statistics-based methods (the idea of this method is to treat a feature part as a pattern class, train with a large number of feature-part samples and non-feature-part samples, and then construct a classifier); or techniques based on wavelets and wavelet packets. The embodiments of the present application are not limited in this regard.
In S12, the face range in the target image is determined according to the face key point information in the target image.
In the embodiments of the present application, the face range may be delineated according to the face key point information, where the face range may include the face and the hair. Specifically, when the face key points include the eyes, eyebrows, mouth, and nose, the face range may be determined according to the features of the eyes, eyebrows, mouth, and nose and their mutual geometric positional relationships.
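As a rough sketch of S12 under stated assumptions: given key points as (x, y) pairs, the face range can be taken as the key points' bounding box, enlarged by a margin so that the hair and chin also fall inside it. The margin value is illustrative, not from the patent.

```python
def face_range(keypoints, margin=0.25):
    """Bounding box of the key points (eyes, eyebrows, mouth, nose, ...),
    expanded by `margin` of its width/height so the face range can also
    cover the hair, as the embodiment describes."""
    xs = [x for x, _ in keypoints]
    ys = [y for _, y in keypoints]
    w = max(xs) - min(xs)
    h = max(ys) - min(ys)
    return (min(xs) - margin * w, min(ys) - margin * h,
            max(xs) + margin * w, max(ys) + margin * h)
```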
Considering that there may be multiple faces in the target image, the above S12 may include:
determining, according to the face key point information, the range of the face with the most prominent facial features in the target image.
In this implementation, when the target image contains multiple faces, the face with the most prominent facial features may be selected from the multiple faces as the output, and the subsequent operations continue with it.
Alternatively, one face may be selected at random from the multiple faces as the output; or one face may be selected from the multiple faces as the output according to the user's choice; or multiple faces may all be selected as the output, with the subsequent operations continuing accordingly. The embodiments of the present application are not limited in this regard.
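The patent does not define "most prominent facial features"; one plausible proxy, assumed here purely for illustration, is the face whose key points span the largest area (a larger, closer face usually reads as the most prominent one).

```python
def most_obvious_face(faces):
    """Pick one face from many. `faces` is a list of key point lists;
    the proxy used here for 'most prominent facial features' is simply
    the largest key point bounding-box area (an assumption)."""
    def area(keypoints):
        xs = [x for x, _ in keypoints]
        ys = [y for _, y in keypoints]
        return (max(xs) - min(xs)) * (max(ys) - min(ys))
    return max(faces, key=area)
```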
In S13, image cropping is performed with a mask based on the face range to obtain the facial image.
A mask is a means of preventing brush strokes, erasing, and certain image layer operations (such as removing an image layer or blurring an image layer) from being applied to a region of an image layer.
In the embodiments of the present application, a mask may be placed over the region corresponding to the face range to keep the image within the face range intact.
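S13 can be sketched on a toy pixel grid: a rectangular mask keeps the pixels inside the face range and discards the rest. A production mask would typically be a soft-edged or elliptical alpha mask; the rectangular version below is a simplification.

```python
def mask_crop(image, box):
    """image: 2-D list of pixel values (rows of columns).
    box: (x0, y0, x1, y1) face range, inclusive.
    Pixels inside the box are kept intact (the mask 'protects' them);
    everything outside is cropped away."""
    x0, y0, x1, y1 = box
    return [row[x0:x1 + 1] for row in image[y0:y1 + 1]]
```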
As can be seen from this implementation, the extraction of the facial image can be performed on the terminal device side without depending on a server, so the data traffic consumed by communication between the terminal device and a server can be avoided as much as possible.
Considering that the data processing capability of a server generally far exceeds that of a terminal device, and accordingly the accuracy of a server's results generally also exceeds that of a terminal device's results, in another optional implementation the extraction of the facial image may be performed entirely on the server side, with the extracted facial image sent by the server to the terminal device. In this case, the above step 102 may include S15 and S16, where:
In S15, the target image is sent to a server, so that the server processes the target image to obtain the facial image.
In S16, the facial image returned by the server is received.
In this implementation, after receiving the target image uploaded by the terminal device, the server may use face recognition technology based on deep convolutional neural networks to capture the face in the target image and obtain the facial boundary, and then extract, based on the facial boundary, the image portion containing only the face.
In addition, after the image portion containing only the face is extracted, the server may also calculate the rotation angle of the face and adjust the face according to this angle so that it is vertical. Further, an edge-preserving filter in OpenCV may be used to apply a cartoonizing filter to the face, so that the resulting facial image is more aesthetically pleasing.
After completing the above series of processing, the server may return the resulting facial image to the terminal device for subsequent synthesis.
As can be seen from this implementation, the extraction of the facial image can be performed on the server side; since the data processing capability of a server generally far exceeds that of a terminal device, the accuracy of the facial image obtained through server processing is higher.
Considering the data traffic costs in some countries or regions, the terminal device may first try to extract the facial image locally; if the local extraction fails, the target image is uploaded to a server over the network and the server performs the extraction. Thus, in another optional implementation, the terminal device side and the server side may cooperate to extract the facial image, in which case the above step 102 may include S'10, S'11, S'12, S'13, S'14, S'15, and S'16, where:
In S'10, it is detected whether there is a face in the target image.
In S'11, if there is a face in the target image, face key point information is extracted from the target image.
In S'12, the face range in the target image is determined according to the face key point information in the target image; if the face range cannot be determined, S'14 is performed.
In S'13, image cropping is performed with a mask based on the face range to obtain the facial image; if the cropping fails, S'14 is performed.
In S'14, the target image is sent to a server, so that the server processes the target image to obtain the facial image.
In S'15, the facial image returned by the server is received.
As can be seen from this implementation, the terminal device side and the server side can cooperate to extract the facial image, which makes it possible to extract the facial image from the target image while keeping data traffic consumption as low as possible.
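The S'10–S'15 cooperation can be sketched as a try-local-then-fall-back control flow. The two extractor callables stand in for the on-device pipeline (S'10–S'13) and the server round trip (S'14–S'15), and are assumptions for illustration.

```python
def extract_face(image, local_extract, server_extract):
    """Try the on-device pipeline first to save traffic; if it returns
    nothing or raises (face range undeterminable, cropping failed),
    upload the target image and let the server extract the face."""
    try:
        result = local_extract(image)
        if result is not None:
            return result
    except Exception:
        pass  # local extraction failed at some step
    return server_extract(image)
```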
In step 103, an expression template that matches the facial image is determined according to the face key point information of the facial image, the facial image and the expression template are spliced, and the spliced picture serves as an expression candidate item.
In the embodiments of the present application, expression resources (which may also be called sticker resources) are used for splicing with the facial image to synthesize a new expression. An expression resource includes one or more expression templates. When an expression resource is produced, the corresponding expression resource can be output according to the design specification defined at production time; based on the output expression resource, a parsing protocol for the expression is configured and packaged together with the expression resource for later synthesis, where the parsing protocol is used to define the splicing position between the expression template and the facial image. When splicing is performed, the expression template in the expression resource can be parsed from the configured protocol through code logic.
In the embodiments of the present application, a label characterizing the features of the face may be determined according to the face key point information in the facial image. The label may include gender, age, habit, expression, and so on, and the expression template is then determined according to the label. In an optional implementation, the above step 103 may include:
determining, according to the face key point information of the facial image, a label corresponding to the facial image; and determining an expression template that matches the label; where the label includes at least one of the following: gender, age, habit, and expression.
For example, when the label includes at least gender: if the gender is female, a relatively feminine expression template is determined; if the gender is male, a relatively masculine expression template is determined, so as to reduce the sense of incongruity.
As another example, when the label includes at least age: if the age is younger, a relatively youthful expression template is determined; if the age is older, a relatively mature expression template is determined, so as to reduce the sense of incongruity.
As another example, when the label includes at least habit, an expression template commonly used by the user corresponding to the facial image is selected, so as to match the user's behavioral habits.
As another example, when the label includes at least expression: if the expression is happy, a relatively lively expression template is determined; if the expression is sad, angry, or the like, a relatively serious expression template is determined, so as to reduce the sense of incongruity.
In addition, the expression template may also be determined according to a selection instruction triggered by the user; the embodiments of the present application are not limited in this regard.
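The label-matching rules above can be sketched as scoring each template by how many of its design-time tags overlap the face's labels. The template library and tag names below are invented for illustration, not part of the patent.

```python
def match_template(labels, templates):
    """Return the template sharing the most tags with the face's labels
    (gender, age, habit, expression, ...)."""
    return max(templates, key=lambda t: len(set(t["tags"]) & set(labels)))

# Hypothetical template library, tagged when the expression resource is made.
TEMPLATES = [
    {"name": "cute-cat",   "tags": {"female", "young", "happy"}},
    {"name": "stern-boss", "tags": {"male", "mature", "serious"}},
]
```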
In the embodiments of the present application, the facial image may be spliced with a pre-designed expression template to generate an amusing image. For example, the user's facial image may be spliced with various pre-designed cartoon body shapes to generate a personalized amusing image. It can be seen that the embodiments of the present application can integrate face recognition and tracking technology with image splicing technology to produce personalized amusing images.
In step 104, a correspondence between an input sequence of the input method application and the expression candidate item is established, so that the expression candidate item is displayed when input of the input sequence is detected.
In the embodiments of the present application, the input sequence may include pinyin, letters, numbers, and so on.
In the embodiments of the present application, after the expression candidate item is generated, the user may establish a correspondence between an input sequence of the input method application and the expression candidate item, and the expression candidate item is stored in the dictionary of the input method application. When input of the input sequence is detected, the expression candidate item corresponding to the input sequence is retrieved from the dictionary of the input method application and displayed.
Furthermore, the expression candidate item may be given a higher priority, so that when input of the input sequence is detected, the expression candidate item corresponding to the input sequence is retrieved from the dictionary of the input method application and displayed preferentially.
Optionally, when establishing the correspondence between an input sequence of the input method application and the expression candidate item, the corresponding input sequence may be determined based on the emotion expressed by the expression candidate item, and the correspondence between that input sequence and the expression candidate item may then be established. For example, if the face in the expression candidate item is a laughing expression, a correspondence between the input sequence "haha" and the expression candidate item can be established.
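Step 104's correspondence can be sketched as a small dictionary mapping input sequences to prioritized candidate lists; the class and method names are illustrative, not from any real input method SDK.

```python
class CandidateDict:
    """Maps input sequences (pinyin, letters, digits) to candidate
    items; higher-priority candidates are displayed first."""

    def __init__(self):
        self._entries = {}

    def bind(self, sequence, candidate, priority=0):
        self._entries.setdefault(sequence, []).append((priority, candidate))

    def lookup(self, sequence):
        found = sorted(self._entries.get(sequence, []), key=lambda pc: -pc[0])
        return [candidate for _, candidate in found]

# An expression candidate bound with higher priority than a plain word:
d = CandidateDict()
d.bind("haha", "哈哈")                      # ordinary dictionary entry
d.bind("haha", "face_expression.png", 10)  # spliced expression, shown first
```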
For ease of understanding, the embodiments of the present application are described with reference to the scene diagram shown in Fig. 2. As shown in Fig. 2, a target image 20 is obtained first, a facial image 21 is obtained from the target image 20, an expression template 22 that matches the facial image 21 is determined, and the facial image 21 and the expression template 22 are spliced to synthesize an expression candidate item 23. Considering that the face in the expression candidate item 23 is a laughing expression, a correspondence between the input sequence "haha" and the expression candidate item can be established. When the input sequence "haha" appears in the input box 241 of the input method application 24, the corresponding candidate items are retrieved from the dictionary of the input method application 24, and the expression candidate item 23 corresponding to the input sequence "haha" is displayed in the candidate box 242 of the input method application 24. The candidate box of an input method application, also called the word selection box, displays one or more candidate items corresponding to the text content in the input box; the candidate items may include words, character strings, expressions, and so on.
As can be seen from the above embodiment, this embodiment can splice the user's facial image with an expression template to generate a new expression, and associate the expression with an input sequence of the input method application, so that the expression is displayed when input of the input sequence is detected. Compared with the prior art, the embodiments of the present application can generate expression candidate items for the input method application based on facial images, which enriches the sources of expression candidate items and meets users' personalized demands for expression candidate items in input method applications.
In addition, it is contemplated that the convenience of operation, emoji (emoticon) entrance that can also be applied by input method accesses
Expression candidate item, now, in another embodiment that the application provides, the embodiment can be on the basis of embodiment illustrated in fig. 1
On, increase following steps:
The expression access interface and the corresponding relation of expression candidate item of input method application are established, will pass through the expression entrance
Access the expression candidate item.
In the embodiment of the present application, the expression candidate item generated in step 103 can be stored under a folder specified by the input method application, or can be stored at a random location; the embodiment of the present application is not limited in this regard.
In the embodiment of the present application, when accessing an expression candidate item through the expression access entrance, the user can bring up the expression candidate item by directly clicking it, or can bring it up by sharing it; the embodiment of the present application is not limited in this regard.
As seen from the above embodiment, the embodiment establishes a corresponding relation between the expression access entrance of the input method application and the expression candidate item, thereby improving the convenience of accessing the expression candidate item.
Fig. 3 is a flow chart of a method for embedding an expression into input method candidates according to another embodiment of the present application. As shown in Fig. 3, the method may comprise the following steps:
In step 301, the camera of the terminal device is started.
In the embodiment of the present application, the start of the camera can be triggered by the user; it can also be triggered by a control instruction sent from another device to the terminal device; it can also be triggered automatically by the terminal device itself. The embodiment of the present application is not limited in this regard.
In the embodiment of the present application, a camera built into the terminal device can be started; specifically, the built-in front camera or the built-in rear camera can be started. Alternatively, a camera external to the terminal device can be started; specifically, the external camera can be attached to the terminal device through a wired connection or a wireless connection. In actual applications, the wireless connection can include: a Wireless Fidelity (WiFi) connection, a Bluetooth connection, or a ZigBee connection. The embodiment of the present application is not limited in this regard.
In step 302, whether there is a face in the viewfinder of the terminal device camera is identified; if so, step 303 is performed.
In the embodiment of the present application, if there is no face in the viewfinder of the terminal device camera, the user is prompted to adjust the shooting angle.
For ease of understanding, step 302 is described with reference to the scene diagram shown in Fig. 4. Before image shooting, the terminal device displays content interface 40. If there is a face in the viewfinder, the terminal device shoots an image and displays content interface 41; if there is no face in the viewfinder, the terminal device prompts the user to adjust the shooting angle and displays content interface 42.
It should be noted that the embodiment of the present application can prompt the user to adjust the shooting angle by displaying text on the screen of the terminal device, as shown in Fig. 4; it can also prompt the user by voice, for example, outputting the voice "Please adjust the shooting angle"; it can also prompt the user through a linked smart wearable device, for example, vibrating a smart bracelet. The embodiment of the present application is not limited in this regard.
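The step-302 check can be sketched as follows. This is a minimal illustration, not the disclosed implementation; `detect_face`, `capture`, and `prompt_user` are hypothetical callables standing in for the terminal device's face detector, shutter, and prompt channel (on-screen text, voice, or wearable link).

```python
def capture_or_prompt(frame, detect_face, capture, prompt_user):
    """If a face is in the viewfinder frame, shoot and return the target
    image (step 303); otherwise prompt the user to adjust the angle."""
    if detect_face(frame):
        return capture(frame)
    prompt_user("Please adjust the shooting angle")
    return None
```

In practice `detect_face` would be a real detector (e.g. a cascade or landmark model) running on the preview stream; the control flow above is the only part step 302 fixes.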
In step 303, image shooting is performed to obtain a target image.
In step 304, the terminal device processes the target image.
In the embodiment of the present application, considering the cost of data traffic in some countries or regions, local extraction of the facial image on the terminal device can be given priority; if local extraction on the terminal device fails, the target image is uploaded to the server through the network, the server extracts the facial image, and the server then sends the extracted facial image to the terminal device.
The image processing idea of step 304 in the embodiment of the present application is similar to the idea of the terminal device extracting the facial image in step 102 of the embodiment shown in Fig. 1. The embodiment of the present application does not repeat it here; for details, refer to the content of step 102 shown in Fig. 1.
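The traffic-saving fallback described for step 304 can be sketched as below; `extract_locally` and `extract_on_server` are hypothetical stand-ins for the on-device extractor and the network round trip, and returning `None` stands for local failure.

```python
def extract_face(target_image, extract_locally, extract_on_server):
    """Prefer on-device facial image extraction to save data traffic;
    upload to the server only if local extraction fails."""
    face = extract_locally(target_image)
    if face is not None:
        return face
    # Local extraction failed: the server extracts the facial image
    # and sends it back to the terminal device.
    return extract_on_server(target_image)
```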
In step 305, the server processes the target image.
The image processing idea of step 305 in the embodiment of the present application is similar to the idea of the server extracting the facial image in step 102 of the embodiment shown in Fig. 1. The embodiment of the present application does not repeat it here; for details, refer to the content of step 102 shown in Fig. 1.
In step 306, the facial image is obtained.
In step 307, according to the face key point information of the facial image, an expression template matching the facial image is determined, the facial image and the expression template are spliced, and the spliced picture is used as an expression candidate item.
In the embodiment of the present application, an expression resource, which may also be called a sticker resource, is used for splicing with the facial image to synthesize a new expression. An expression resource includes one or more expression templates. When an expression resource is made, the corresponding expression resource can be produced and output according to a defined design specification. Based on the output expression resource, an analysis protocol of the expression is configured and packed together with the expression resource for later synthesis, wherein the analysis protocol is used to define the splicing position of the expression template and the facial image. When splicing is performed, the expression template in the expression resource can be parsed through code logic based on the configured protocol.
In the embodiment of the present application, a label characterizing the face can be determined according to the face key point information in the facial image. The label can include: gender, age, habit, expression, and the like. The expression template is then determined according to the label. In this case, in an optional embodiment, the above step 307 can include:
Determining, according to the face key point information of the facial image, the label corresponding to the facial image; and determining the expression template matching the label; wherein the label includes at least one of the following: gender, age, habit, expression.
For example, when the label includes at least gender: if the gender is female, a relatively feminine expression template is determined; if the gender is male, a relatively masculine expression template is determined, so as to reduce the sense of incongruity.
For another example, when the label includes at least age: if the age is younger, a relatively youthful expression template is determined; if the age is older, a relatively mature expression template is determined, so as to reduce the sense of incongruity.
For another example, when the label includes at least habit, an expression template commonly used by the user corresponding to the facial image is selected, so as to match the user's behavioral habits.
For another example, when the label includes at least expression: if the expression is happy, a relatively lively expression template is determined; if the expression is sad, angry, or the like, a relatively serious expression template is determined, so as to reduce the sense of incongruity.
In addition, the expression template can also be determined according to a selection instruction triggered by the user; the embodiment of the present application is not limited in this regard.
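The label-to-template matching of step 307 can be sketched as a lookup. The label values and template names below are hypothetical; a real matcher could of course weigh the other labels (habit, expression) as well as gender and age.

```python
# Hypothetical template library keyed by (gender, age) label values.
TEMPLATES = {
    ("female", "young"):  "cute_template",
    ("female", "mature"): "elegant_template",
    ("male", "young"):    "cool_template",
    ("male", "mature"):   "serious_template",
}

def match_template(labels):
    """Determine the expression template matching the face labels,
    falling back to a default to avoid a sense of incongruity."""
    key = (labels.get("gender"), labels.get("age"))
    return TEMPLATES.get(key, "default_template")
```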
In the embodiment of the present application, the facial image can be spliced with a pre-designed expression template to generate an amusing image. For example, the user's facial image can be spliced with various pre-designed cartoon bodies to generate a personalized amusing image. It can be seen that the embodiment of the present application can integrate face recognition and tracking technology with image splicing technology to produce personalized amusing images.
In step 308, a corresponding relation between an input sequence of the input method application and the expression candidate item is established, so that the expression candidate item is displayed when input of the input sequence is detected.
In the embodiment of the present application, the input sequence can include: pinyin, letters, numbers, and the like.
In the embodiment of the present application, after the expression candidate item is generated, the user can establish the corresponding relation between the input sequence of the input method application and the expression candidate item, and the expression candidate item is stored in the lexicon of the input method application. When input of the input sequence is detected, the expression candidate item corresponding to the input sequence is retrieved from the lexicon of the input method application and displayed.
Furthermore, the expression candidate item can be given a higher priority, so that when input of the input sequence is detected, the expression candidate item corresponding to the input sequence is retrieved from the lexicon of the input method application and displayed preferentially.
Optionally, when the corresponding relation between the input sequence of the input method application and the expression candidate item is established, the corresponding input sequence can be determined based on the emotion expressed by the expression candidate item, and the corresponding relation between that input sequence and the expression candidate item is then established. For example, if the face in the expression candidate item is a laughing expression, the corresponding relation between the input sequence "haha" and the expression candidate item can be established.
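The step-308 association can be sketched as a lexicon keyed by input sequence, with the new expression candidate inserted at the front so that it is displayed preferentially. The emotion-to-sequence table (e.g. a laughing face mapping to "haha") is illustrative, not part of the disclosure.

```python
# Illustrative emotion-to-input-sequence table.
EMOTION_SEQUENCES = {"laughing": "haha", "sad": "wuwu"}

def embed_candidate(lexicon, candidate, emotion):
    """Map the input sequence chosen from the candidate's emotion to the
    candidate; inserting at index 0 gives it higher display priority."""
    sequence = EMOTION_SEQUENCES[emotion]
    lexicon.setdefault(sequence, []).insert(0, candidate)
    return sequence

def lookup(lexicon, sequence):
    """Retrieve the candidates to display for a detected input sequence."""
    return lexicon.get(sequence, [])
```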
As seen from the above embodiment, the embodiment can splice the user's facial image with an expression template to generate a new expression, associate the expression with an input sequence of the input method application, and display the expression when input of the sequence is detected. Compared with the prior art, the embodiment of the present application can generate expression candidate items for the input method application based on facial images, which adds a new source of expression candidate items and meets the user's individualized demand for expression candidate items in the input method application.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to Fig. 5, at the hardware level the electronic device includes a processor, and optionally also includes an internal bus, a network interface, and memory. The memory may include volatile memory, such as high-speed random-access memory (RAM), and may also include non-volatile memory, for example, at least one magnetic disk storage. Of course, the electronic device may also include other hardware required by its services.
The processor, the network interface, and the memory can be interconnected through the internal bus. The internal bus can be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one double-headed arrow is used in Fig. 5, but this does not mean that there is only one bus or one type of bus.
The memory is used to store a program. Specifically, the program can include program code, and the program code includes computer operating instructions. The memory can include volatile memory and non-volatile memory, and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the volatile memory and runs it, forming, at the logical level, the device for embedding an expression into input method candidates. The processor executes the program stored in the memory, and is specifically used to perform the following operations:
Obtaining a target image;
Obtaining a facial image in the target image;
Determining, according to face key point information of the facial image, an expression template matching the facial image, splicing the facial image and the expression template, and using the spliced picture as an expression candidate item;
Establishing a corresponding relation between an input sequence of an input method application and the expression candidate item, so that the expression candidate item is displayed when input of the input sequence is detected.
The method performed by the device for embedding an expression into input method candidates disclosed in the embodiment shown in Fig. 5 can be applied to, or implemented by, a processor. The processor may be an integrated circuit chip with signal processing capability. In implementation, each step of the above method can be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The above processor can be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it can also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The processor can implement or perform each method, step, and logic diagram disclosed in the embodiments of the present application. The general-purpose processor can be a microprocessor, or the processor can be any conventional processor or the like. The steps of the method disclosed in the embodiments of the present application can be directly embodied as being completed by a hardware decoding processor, or completed by a combination of hardware and software modules in a decoding processor. The software module can be located in a storage medium mature in the art, such as random-access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The electronic device can also perform the method of Fig. 1 and realize the functions of the device for embedding an expression into input method candidates in the embodiment shown in Fig. 1; the embodiment of the present application will not repeat them here.
Of course, in addition to a software implementation, the electronic device of this specification does not exclude other implementations, such as a logic device or a combination of software and hardware. That is to say, the execution subject of the following processing flow is not limited to each logic unit; it can also be hardware or a logic device.
The embodiment of the present application also proposes a computer-readable storage medium storing one or more programs. The one or more programs include instructions which, when executed by a portable electronic device that includes a plurality of application programs, cause the portable electronic device to perform the method of the embodiment shown in Fig. 1, and specifically to perform the following method:
Obtaining a target image;
Obtaining a facial image in the target image;
Determining, according to face key point information of the facial image, an expression template matching the facial image, splicing the facial image and the expression template, and using the spliced picture as an expression candidate item;
Establishing a corresponding relation between an input sequence of an input method application and the expression candidate item, so that the expression candidate item is displayed when input of the input sequence is detected.
Fig. 6 is a schematic structural diagram of a device for embedding an expression into input method candidates according to an embodiment of the present application. Referring to Fig. 6, in a software implementation, the device 600 for embedding an expression into input method candidates can include: a target image acquisition module 601, a facial image acquisition module 602, an expression template determining module 603, an expression candidate item generation module 604, and a candidate item embedding module 605, wherein:
the target image acquisition module 601 is used to obtain a target image;
the facial image acquisition module 602 is used to obtain a facial image in the target image;
the expression template determining module 603 is used to determine, according to face key point information of the facial image, an expression template matching the facial image;
the expression candidate item generation module 604 is used to splice the facial image and the matching expression template, and use the spliced picture as an expression candidate item;
the candidate item embedding module 605 is used to establish a corresponding relation between an input sequence of an input method application and the expression candidate item, so that the expression candidate item is displayed when input of the input sequence is detected.
As seen from the above embodiment, the embodiment can splice the user's facial image with an expression template to generate a new expression, associate the expression with an input sequence of the input method application, and display the expression when input of the sequence is detected. Compared with the prior art, the embodiment of the present application can generate expression candidate items for the input method application based on facial images, which adds a new source of expression candidate items and meets the user's individualized demand for expression candidate items in the input method application.
In another embodiment provided by the present application, on the basis of the embodiment shown in Fig. 6, the expression template determining module 603 can include:
a label determining submodule, used to determine, according to the face key point information of the facial image, the label corresponding to the facial image, wherein the label includes at least one of the following: gender, age, habit, expression;
an expression template determining submodule, used to determine the expression template matching the label.
In another embodiment provided by the present application, on the basis of the embodiment shown in Fig. 6, the device 600 for embedding an expression into input method candidates can also include:
a relation establishing module, used to establish a corresponding relation between the expression access entrance of the input method application and the expression candidate item, so that the expression candidate item can be accessed through the expression entrance.
In another embodiment provided by the present application, on the basis of the embodiment shown in Fig. 6, the target image acquisition module 601 can include:
a camera starting submodule, used to start the camera of the terminal device;
an identifying submodule, used to identify whether there is a face in the viewfinder of the terminal device camera;
an image shooting submodule, used to shoot an image and obtain the target image when the identifying submodule identifies that there is a face in the viewfinder of the terminal device camera.
In another embodiment provided by the present application, on the basis of the previous embodiment, the target image acquisition module 601 can also include:
a prompting submodule, used to prompt the user to adjust the shooting angle when the identifying submodule identifies that there is no face in the viewfinder of the terminal device camera.
In another embodiment provided by the present application, on the basis of the embodiment shown in Fig. 6, the target image acquisition module 601 can include:
a target image acquisition submodule, used to obtain the target image from images stored locally on the terminal device.
In another embodiment provided by the present application, on the basis of the embodiment shown in Fig. 6, the facial image acquisition module 602 can include:
a detecting submodule, used to detect whether there is a face in the target image;
a face key point information extracting submodule, used to extract the face key point information in the target image when the detecting submodule detects that there is a face in the target image;
a face scope determining submodule, used to determine the face scope in the target image according to the face key point information;
an image cropping submodule, used to crop the image using a mask based on the face scope, so as to obtain the facial image.
In another embodiment provided by the present application, there are multiple faces in the target image; on the basis of the previous embodiment, the face scope determining submodule can include:
a face scope determining unit, used to determine, according to the face key point information, the scope of the face whose facial features are the most obvious in the target image.
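One way to read "the face whose facial features are the most obvious" is the face whose key points span the largest area. The sketch below uses that stand-in criterion, which is an assumption for illustration rather than the measure the disclosure fixes.

```python
def most_obvious_face(faces):
    """Among several detected faces, return the one whose key points
    cover the largest bounding-box area (a stand-in for 'most obvious')."""
    def keypoint_area(face):
        xs = [x for x, _ in face["keypoints"]]
        ys = [y for _, y in face["keypoints"]]
        return (max(xs) - min(xs)) * (max(ys) - min(ys))
    return max(faces, key=keypoint_area)
```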
In another embodiment provided by the present application, on the basis of the embodiment shown in Fig. 6, the facial image acquisition module 602 can include:
a target image sending submodule, used to send the target image to the server, so that the server processes the target image to obtain a facial image;
a facial image receiving submodule, used to receive the facial image returned by the server.
The device 600 for embedding an expression into input method candidates can also perform the method of the embodiment shown in Fig. 1 and realize the functions of the device for embedding an expression into input method candidates in the embodiment shown in Fig. 5; the embodiment of the present application will not repeat them here.
In short, the above is only a preferred embodiment of this specification and is not intended to limit the protection scope of this specification. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of this specification shall be included in the protection scope of this specification.
The system, device, module, or unit illustrated in the above embodiments can be specifically realized by a computer chip or an entity, or by a product with a certain function. A typical realization device is a computer. Specifically, the computer can be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an e-mail device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can realize information storage by any method or technology. The information can be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette tape, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprising", "including", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, commodity, or device including a series of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method, commodity, or device. In the absence of further limitation, an element defined by the sentence "including a ..." does not exclude the existence of other identical elements in the process, method, commodity, or device that includes the element.
Each embodiment in this specification is described in a progressive manner. For identical or similar parts among the embodiments, reference can be made to each other; each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiment is substantially similar to the method embodiment, its description is relatively simple; for relevant parts, refer to the explanation in the method embodiment.
Claims (12)
- 1. A method for embedding an expression into input method candidates, characterized in that it is applied to a terminal device, the method comprising: obtaining a target image; obtaining a facial image in the target image; determining, according to face key point information of the facial image, an expression template matching the facial image, splicing the facial image and the expression template, and using the spliced picture as an expression candidate item; and establishing a corresponding relation between an input sequence of an input method application and the expression candidate item, so that the expression candidate item is displayed when input of the input sequence is detected.
- 2. The method according to claim 1, characterized in that determining, according to the face key point information of the facial image, the expression template matching the facial image comprises: determining, according to the face key point information of the facial image, a label corresponding to the facial image, wherein the label includes at least one of the following: gender, age, habit, expression; and determining the expression template matching the label.
- 3. The method according to claim 1, characterized in that the method further comprises: establishing a corresponding relation between an expression access entrance of the input method application and the expression candidate item, so that the expression candidate item can be accessed through the expression entrance.
- 4. The method according to claim 1, characterized in that obtaining the facial image in the target image comprises: detecting whether there is a face in the target image; if there is a face in the target image, extracting the face key point information in the target image; determining the face scope in the target image according to the face key point information; and cropping the image using a mask based on the face scope, so as to obtain the facial image.
- 5. The method according to claim 4, characterized in that there are multiple faces in the target image; and determining the face scope in the target image according to the face key point information comprises: determining, according to the face key point information, the scope of the face whose facial features are the most obvious in the target image.
- 6. A device for embedding an expression into input method candidates, characterized in that it is applied to a terminal device, the device comprising: a target image acquisition module, used to obtain a target image; a facial image acquisition module, used to obtain a facial image in the target image; an expression template determining module, used to determine, according to face key point information of the facial image, an expression template matching the facial image; an expression candidate item generation module, used to splice the facial image and the matching expression template and use the spliced picture as an expression candidate item; and a candidate item embedding module, used to establish a corresponding relation between an input sequence of an input method application and the expression candidate item, so that the expression candidate item is displayed when input of the input sequence is detected.
- 7. The device according to claim 6, characterized in that the expression template determining module comprises: a label determining submodule, used to determine, according to the face key point information of the facial image, a label corresponding to the facial image, wherein the label includes at least one of the following: gender, age, habit, expression; and an expression template determining submodule, used to determine the expression template matching the label.
- 8. The device according to claim 6, characterized in that the device further comprises: a relation establishing module, used to establish a corresponding relation between an expression access entrance of the input method application and the expression candidate item, so that the expression candidate item can be accessed through the expression entrance.
- 9. The device according to claim 6, characterized in that the facial image acquisition module comprises: a detecting submodule, used to detect whether there is a face in the target image; a face key point information extracting submodule, used to extract the face key point information in the target image when the detecting submodule detects that there is a face in the target image; a face scope determining submodule, used to determine the face scope in the target image according to the face key point information; and an image cropping submodule, used to crop the image using a mask based on the face scope, so as to obtain the facial image.
- 10. The device according to claim 9, characterized in that there are multiple faces in the target image; and the face scope determining submodule comprises: a face scope determining unit, used to determine, according to the face key point information, the scope of the face whose facial features are the most obvious in the target image.
- 11. An electronic device, characterized in that it comprises: a processor; and a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the steps of the method according to any one of claims 1-5.
- 12. A computer-readable storage medium, characterized in that the computer-readable storage medium stores one or more programs which, when executed by an electronic device that includes a plurality of application programs, cause the electronic device to perform the steps of the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710774726.4A CN107578459A (en) | 2017-08-31 | 2017-08-31 | Expression is embedded in the method and device of candidates of input method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710774726.4A CN107578459A (en) | 2017-08-31 | 2017-08-31 | Expression is embedded in the method and device of candidates of input method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107578459A true CN107578459A (en) | 2018-01-12 |
Family
ID=61030819
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710774726.4A Pending CN107578459A (en) | 2017-08-31 | 2017-08-31 | Expression is embedded in the method and device of candidates of input method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107578459A (en) |
- 2017-08-31: CN201710774726.4A filed (CN); published as CN107578459A; status: Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104076944A (en) * | 2014-06-06 | 2014-10-01 | 北京搜狗科技发展有限公司 | Chat emoticon input method and device |
CN105069830A (en) * | 2015-08-14 | 2015-11-18 | 广州市百果园网络科技有限公司 | Method and device for generating expression animation |
CN106803057A (en) * | 2015-11-25 | 2017-06-06 | 腾讯科技(深圳)有限公司 | Image information processing method and device |
CN105787976A (en) * | 2016-02-24 | 2016-07-20 | 深圳市金立通信设备有限公司 | Method and apparatus for processing pictures |
CN106599817A (en) * | 2016-12-07 | 2017-04-26 | 腾讯科技(深圳)有限公司 | Face replacement method and device |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108346427A (en) * | 2018-02-05 | 2018-07-31 | 广东小天才科技有限公司 | Voice recognition method, device, equipment and storage medium |
CN108388557A (en) * | 2018-02-06 | 2018-08-10 | 腾讯科技(深圳)有限公司 | Message processing method, device, computer device and storage medium |
CN108564541A (en) * | 2018-03-28 | 2018-09-21 | 麒麟合盛网络技术股份有限公司 | Image processing method and device |
CN110389667A (en) * | 2018-04-17 | 2019-10-29 | 北京搜狗科技发展有限公司 | Input method and device |
CN108573527B (en) * | 2018-04-18 | 2020-02-18 | 腾讯科技(深圳)有限公司 | Expression picture generation method, device and storage medium |
CN108573527A (en) * | 2018-04-18 | 2018-09-25 | 腾讯科技(深圳)有限公司 | Expression picture generation method, device and storage medium |
CN109215007A (en) * | 2018-09-21 | 2019-01-15 | 维沃移动通信有限公司 | Image generation method and terminal device |
CN110619513A (en) * | 2019-09-11 | 2019-12-27 | 腾讯科技(深圳)有限公司 | Electronic resource acquisition method, electronic resource distribution method and related devices |
CN112783332A (en) * | 2019-11-04 | 2021-05-11 | 北京搜狗科技发展有限公司 | Information recommendation method and device and electronic equipment |
CN111145283A (en) * | 2019-12-13 | 2020-05-12 | 北京智慧章鱼科技有限公司 | Personalized expression generation method and device for an input method |
CN111367580A (en) * | 2020-02-28 | 2020-07-03 | Oppo(重庆)智能科技有限公司 | Application starting method and device and computer-readable storage medium |
CN111367580B (en) * | 2020-02-28 | 2024-02-13 | Oppo(重庆)智能科技有限公司 | Application starting method and device and computer-readable storage medium |
CN111541950A (en) * | 2020-05-07 | 2020-08-14 | 腾讯科技(深圳)有限公司 | Expression generation method and device, electronic equipment and storage medium |
CN111541950B (en) * | 2020-05-07 | 2023-11-03 | 腾讯科技(深圳)有限公司 | Expression generation method and device, electronic equipment and storage medium |
CN112330728A (en) * | 2020-11-30 | 2021-02-05 | 维沃移动通信有限公司 | Image processing method, image processing device, electronic equipment and readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107578459A (en) | Method and device for embedding expressions into input method candidates | |
CN108399409B (en) | Image classification method, device and terminal | |
CN107886032B (en) | Terminal device, smart phone, authentication method and system based on face recognition | |
EP3627392A1 (en) | Object identification method, system and device, and storage medium | |
CN110121118A (en) | Video clip localization method, device, computer equipment and storage medium | |
CN110083716A (en) | Multi-modal affection computation method and system based on Tibetan language | |
CN110222728B (en) | Training method and system of article identification model and article identification method and equipment | |
US10719695B2 (en) | Method for pushing picture, mobile terminal, and storage medium | |
CN109089133A (en) | Method for processing video frequency and device, electronic equipment and storage medium | |
CN108256549B (en) | Image classification method, device and terminal | |
CN106528879A (en) | Picture processing method and device | |
CN110110118A (en) | Dressing recommended method, device, storage medium and mobile terminal | |
CN108538311A (en) | Audio classification method, device and computer-readable storage medium | |
CN108985176A (en) | Image generation method and device | |
CN108600632A (en) | Photographing reminder method, smart glasses and computer-readable storage medium | |
CN105095860B (en) | Character segmentation method and device | |
CN106980840A (en) | Face shape matching method, device and storage medium | |
KR20180067654A (en) | Facial verification method and electronic device | |
CN106971164A (en) | Face shape matching method and device | |
CN104077597B (en) | Image classification method and device | |
CN110570383B (en) | Image processing method and device, electronic equipment and storage medium | |
CN110008664A (en) | Authentication information acquisition and account opening method, device and electronic equipment | |
CN109040594A (en) | Photographing method and device | |
CN107992811A (en) | Face recognition method and device | |
JP2023510375A (en) | Image processing method, device, electronic device and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180112 |