CN110196956A - User avatar generation method, apparatus, electronic device and storage medium

User avatar generation method, apparatus, electronic device and storage medium

Info

Publication number
CN110196956A
CN110196956A (application number CN201910365350.0A)
Authority
CN
China
Prior art keywords
character
user information
preset user
target
avatar
Prior art date
Legal status
Granted
Application number
CN201910365350.0A
Other languages
Chinese (zh)
Other versions
CN110196956B (en)
Inventor
付超群
Current Assignee
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd
Priority to CN201910365350.0A
Publication of CN110196956A
Application granted
Publication of CN110196956B
Current legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/958 Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/289 Phrasal analysis, e.g. finite state techniques or chunking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • User Interface Of Digital Computer (AREA)
  • Machine Translation (AREA)

Abstract

The present application discloses a user avatar generation method, belonging to the field of computer technology, which addresses the weak visual distinguishability of avatars generated by prior-art methods. The user avatar generation method disclosed in the embodiments of the present application includes: acquiring preset user information of a target user, the target user being a user who has not set an avatar; determining, through a key-character recognition model, a distinguishing-feature parameter of each target character in the preset user information; determining a key character in the preset user information according to the distribution of the distinguishing-feature parameters; and generating a default avatar of the target user according to the key character. The disclosed method helps improve the visual distinguishability of automatically generated user avatars.

Description

User avatar generation method, apparatus, electronic device and storage medium
Technical field
The present application relates to the field of computer technology, and in particular to a user avatar generation method and apparatus, an electronic device, and a computer-readable storage medium.
Background art
A user avatar is an intuitive mark for telling different users' identities apart, and is widely used in application scenarios such as instant-messaging clients and internet-platform clients. In general, a client reminds the user to set an avatar when the user registers online, or supports the user in changing the avatar afterwards. Many users nevertheless skip this step, so the client or system simply assigns a default avatar. Usually, the default avatar provided to all users is the same built-in image. In some more user-friendly systems, the system selects a built-in male or female avatar picture according to the registered user's name information.
Even so, the default avatars set for users in the prior art carry no valuable information and have almost no visual distinguishability.
Summary of the invention
The present application provides a user avatar generation method that helps improve the visual distinguishability of automatically generated user avatars.
To solve the above problems, in a first aspect, an embodiment of the present application provides a user avatar generation method, including:
acquiring preset user information of a target user, the target user being a user who has not set an avatar;
determining, through a key-character recognition model, a distinguishing-feature parameter of each target character in the preset user information;
determining a key character in the preset user information according to the distribution of the distinguishing-feature parameters;
generating a default avatar of the target user according to the key character.
In a second aspect, an embodiment of the present application provides a user avatar generation apparatus, including:
a user information acquisition module, configured to acquire preset user information of a target user, the target user being a user who has not set an avatar;
a distinguishing-feature parameter determination module, configured to determine, through a key-character recognition model, a distinguishing-feature parameter of each target character in the preset user information;
a key character determination module, configured to determine a key character in the preset user information according to the distribution of the distinguishing-feature parameters;
an avatar generation module, configured to generate a default avatar of the target user according to the key character.
In a third aspect, an embodiment of the present application further discloses an electronic device, including a memory, a processor, and a computer program stored on the memory and runnable on the processor, where the processor, when executing the computer program, implements the user avatar generation method described in the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of the user avatar generation method disclosed in the embodiments of the present application.
In the user avatar generation method disclosed in the embodiments of the present application, preset user information of a target user is acquired, the target user being a user who has not set an avatar; a distinguishing-feature parameter of each target character in the preset user information is determined through a key-character recognition model; a key character in the preset user information is determined according to the distribution of the distinguishing-feature parameters; and a default avatar of the target user is generated according to the key character. This helps improve the visual distinguishability of automatically generated user avatars.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present application more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of the user avatar generation method of Embodiment 1 of the present application;
Fig. 2 is a schematic diagram of a default user avatar generated in the prior art;
Fig. 3 is a schematic diagram of a user avatar generated by the user avatar generation method of Embodiment 1 of the present application;
Fig. 4 is a first structural schematic diagram of the user avatar generation apparatus of Embodiment 3 of the present application;
Fig. 5 is a second structural schematic diagram of the user avatar generation apparatus of Embodiment 3 of the present application;
Fig. 6 is a third structural schematic diagram of the user avatar generation apparatus of Embodiment 3 of the present application;
Fig. 7 is a fourth structural schematic diagram of the user avatar generation apparatus of Embodiment 3 of the present application;
Fig. 8 is a fifth structural schematic diagram of the user avatar generation apparatus of Embodiment 3 of the present application.
Specific embodiment
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
Embodiment 1
An embodiment of the present application discloses a user avatar generation method. As shown in Fig. 1, the method includes steps 110 to 140.
Step 110: acquire preset user information of a target user, the target user being a user who has not set an avatar.
In some embodiments of the present application, the user information of registered users can be obtained through a user management system. The user information includes the user's avatar attribute, real name, user name, nickname, remarks, and the like.
Further, for a registered user, when the acquired avatar attribute shows that the user has not set an avatar, the user is determined to be a target user. If the user has already set an avatar, that is, the user's avatar is not the system's preset default avatar picture (as shown in Fig. 2), the user's avatar is not processed.
In some embodiments of the present application, the preset user information includes a user name or a nickname.
When the nickname in the acquired user information contains meaningful characters, that is, when the nickname is not empty, the nickname is determined to be the target user's preset user information; when the nickname in the acquired user information is empty, as when the user has not set a nickname, the acquired user name is determined to be the target user's preset user information.
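As a minimal sketch of this selection logic (the function and field names here are illustrative assumptions, not the application's actual interface):

```python
def get_preset_user_info(user: dict) -> str:
    """Return the text the avatar is derived from: the nickname when it
    contains meaningful characters, otherwise the user name."""
    nickname = (user.get("nickname") or "").strip()
    return nickname if nickname else user["username"]
```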
Step 120: determine, through a key-character recognition model, the distinguishing-feature parameter of each target character in the preset user information.
In some embodiments of the present application, the key-character recognition model may be a mathematical model or a rule-based model. For example, the key-character recognition model includes, but is not limited to, any of the following: a mathematical model that determines character weights based on term frequency, a mathematical model that determines character similarity based on word vectors, and a rule-based model that determines the stressed character based on tonal patterns.
Correspondingly, each key-character recognition model outputs the distinguishing-feature parameter of each target character in the preset user information. In some embodiments of the present application, the distinguishing-feature parameter includes: a weight (used to indicate the visual distinguishability the corresponding target character contributes to the preset user information), the word-vector similarity between each target character and the preset user information, or the stressed character.
Step 130: determine the key character in the preset user information according to the distribution of the distinguishing-feature parameters.
After the distinguishing-feature parameter of each target character in the preset user information has been determined, the key character corresponding to the preset user information can be further determined according to the determined distinguishing-feature parameters.
Several preferred implementations for determining the key character in the preset user information are described in detail below, in combination with the different key-character recognition models or methods for determining the distinguishing-feature parameters of the target characters in the preset user information.
In some embodiments of the present application, the step of determining, through a key-character recognition model, the distinguishing-feature parameter of each target character in the preset user information includes: determining, based on term-frequency analysis, the weight corresponding to each target character in the preset user information, where the weight indicates the visual distinguishability the corresponding target character contributes to the preset user information. Correspondingly, the step of determining the key character in the preset user information according to the distribution of the distinguishing-feature parameters includes: determining the target character with the largest weight in the preset user information as the key character of the preset user information.
In some embodiments of the present application, the step of determining, based on term-frequency analysis, the weight corresponding to each target character in the preset user information includes: determining the target characters in the preset user information; determining the term frequency of each target character within the preset user information, where a target character's term frequency is positively correlated with an exponential of the number of times the target character appears in the preset user information; and, for each target character, determining the target character's weight according to the product of its term frequency within the preset user information and its predetermined inverse document frequency over a specified training dataset.
Before the key character in the preset user information is determined, the target characters in the preset user information are determined first.
In some embodiments of the present application, every character contained in the preset user information can be taken as a target character. In other embodiments of the present application, when the preset user information is a person's name, every character in the preset user information except the surname can be taken as a target character.
Next, the term frequency of each target character within the preset user information is determined.
Term frequency (TF) refers to the number of times a given word appears in a document. This number is usually normalized to prevent bias toward long documents; thus, for a word in a particular document, its importance can be expressed as the number of times the word appears in that document divided by the total number of occurrences of all words in the document. The embodiment of the present application improves on this prior-art calculation: for a single target character in the preset user information, the number of times the target character appears in the preset user information is raised to the M-th power and divided by the total number of characters contained in the preset user information, and the result is taken as the target character's term frequency.
For example, the term frequency tf_ij of a target character t_i contained in a name can be determined by the formula

tf_ij = (n_ij)^m / Σ_k n_kj

where n_ij denotes the number of times target character t_i appears in name d_j; Σ_k n_kj denotes the total number of occurrences of all characters in name d_j; and m is a user-defined coefficient with m >= 2, used to amplify the term-frequency weight.
Assume m = 2 and take the preset user information "Shangguan Yun" (上官云) as an example: the character "Yun" (云) appears once in "Shangguan Yun", and the preset user information contains 3 characters in total, so the term frequency of "Yun" in "Shangguan Yun" is 1/3. For the preset user information "Shangguan Feifei" (上官菲菲), the character "Fei" (菲) appears twice, and the preset user information contains 4 characters in total, so the term frequency of "Fei" in "Shangguan Feifei" is 2^2/4 = 1. Raising a target character's occurrence count to a power makes its term frequency positively correlated with an exponential of the number of times it appears in the preset user information, which effectively boosts the weight of repeated characters in a name as key characters and better matches users' perception of what makes a name distinguishable.
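The modified term frequency above can be sketched in a few lines of Python; this is a minimal illustration of the formula under the stated assumptions (m = 2 by default, input treated as a plain string of characters), not the patent's implementation:

```python
from collections import Counter

def char_term_frequency(info: str, m: int = 2) -> dict:
    """tf_ij = n_ij**m / (total character occurrences), per the formula above."""
    counts = Counter(info)          # n_ij for each target character
    total = len(info)               # total occurrences of all characters
    return {ch: n ** m / total for ch, n in counts.items()}

# '菲' appears twice among 4 characters -> 2**2 / 4 = 1.0
print(char_term_frequency("上官菲菲"))
```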
Next, the weight of each target character in the preset user information is determined according to the product of the target character's term frequency within the preset user information and the target character's predetermined inverse document frequency over a specified training dataset.
The target character's inverse document frequency over the specified training dataset is determined in advance.
Those skilled in the art should understand that, in specific implementations, whole-network data, domain data, name data, or the like may be obtained as the specified training dataset for determining the inverse document frequency of the characters that may appear in the preset user information; alternatively, only nickname data and name data may be obtained as the specified training dataset. The technical solution for determining a target character's inverse document frequency is described in detail below, taking whole-network name data as the specified training dataset.
First, whole-network name data is obtained as the specified training dataset.
The number of times each name contained in the specified training dataset appears in that dataset is determined. For example, the name "Zhang San" appears 128 times in the specified training dataset, and the name "Li Si" appears 256 times. The number of all names contained in the specified training dataset is denoted |D|.
Then, the inverse document frequency of each character appearing in the names contained in the specified training dataset is determined. Inverse document frequency (IDF) is a measure of a word's general importance. The IDF of a particular word can be obtained by dividing the total number of documents by the number of documents containing the word, and then taking the logarithm of the quotient. In the embodiment of the present application, this document-level method is adapted to characters in names: the number of all names contained in the specified training dataset is divided by the number of names containing a given character, and the logarithm of the quotient is taken as that character's inverse document frequency. In specific implementations, the inverse document frequency idf_i of character t_i can be calculated by the formula

idf_i = log( |D| / |{ j : t_i ∈ d_j }| )

where |D| denotes the number of all names contained in the specified training dataset, |{ j : t_i ∈ d_j }| denotes the number of names containing character t_i, and d_j is a name containing character t_i.
According to the foregoing calculation method, the inverse document frequency of every character contained in the specified training dataset can be determined.
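A sketch of this inverse-document-frequency computation over a name corpus may look as follows; the names list is an illustrative stand-in for the specified training dataset:

```python
import math
from collections import Counter

def char_idf(names: list[str]) -> dict:
    """idf_i = log(|D| / |{j : t_i in d_j}|) for every character in the corpus."""
    doc_freq = Counter()
    for name in names:
        doc_freq.update(set(name))  # set(): count each character once per name
    total = len(names)              # |D|
    return {ch: math.log(total / df) for ch, df in doc_freq.items()}
```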
In the embodiment of the present application, as long as the constructed specified training dataset contains enough data, the inverse document frequency can be determined for every character that may subsequently appear in the preset user information, such as a user's user name, real name, or nickname.
Finally, each target character's weight is determined according to the product of the target character's term frequency within the preset user information and the target character's predetermined inverse document frequency over the specified training dataset. For example, this product can be taken directly as the target character's weight.
Take the preset user information "Xueyou" (学友, "schoolmate") as an example: the product of the term frequency and the inverse document frequency of the target character "Xue" (学) is taken as its weight within the preset user information "Xueyou", and the product of the term frequency and the inverse document frequency of the target character "You" (友) is taken as its weight within "Xueyou"; the target character with the highest weight is then selected as the key character of the preset user information "Xueyou". Suppose the weight of the target character "You" within "Xueyou" is 5.25, while the weight of "Xue" is 4.74; the weight of "You" is higher, making it better suited as avatar text, so "You" is determined to be the key character of the preset user information "Xueyou".
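Putting the two together, a sketch of the term-frequency-based key-character selection, reusing char_term_frequency and a char_idf result from the sketches above (the 5.25 and 4.74 weights quoted in the example are the patent's illustrative values, not outputs of this code):

```python
def key_character_by_tfidf(info: str, idf: dict, m: int = 2) -> str:
    """Weight each target character by tf * idf and return the heaviest one."""
    tf = char_term_frequency(info, m)
    weights = {ch: tf[ch] * idf.get(ch, 0.0) for ch in tf}  # unseen chars get 0
    return max(weights, key=weights.get)
```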
In other embodiments of the present application, the step of determining, through a key-character recognition model, the distinguishing-feature parameter of each target character in the preset user information includes: determining, based on vector techniques, the word vector of the preset user information and the character vector of each character in the preset user information; and calculating the similarity between each character's character vector and the word vector. Correspondingly, the step of determining the key character in the preset user information according to the distribution of the distinguishing-feature parameters includes: determining the character whose character vector has the largest similarity as the key character in the preset user information.
Before the word vector of the preset user information and the character vector of each character in the preset user information can be determined, word vectors and character vectors must first be trained.
In some embodiments of the present application, news articles or dictionary content can be obtained to construct the training dataset for word vectors and character vectors. Each corpus entry in the training dataset is then segmented into words, after which word vectors are trained with software such as gensim (a commonly used software package), yielding a word vector for each word segmented from the training dataset.
On the other hand, each corpus entry in the training dataset is split by a 1-gram splitting method (unigram splitting) into a sequence of single-character fragments, and character vectors are trained with software such as gensim, yielding a character vector for each character obtained from the training dataset.
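A sketch of both training runs, assuming gensim 4.x and the jieba segmenter; the patent names only gensim, so the segmenter choice and the training parameters are assumptions:

```python
import jieba                       # a common Chinese word segmenter (assumed)
from gensim.models import Word2Vec

corpus = ["...one news or dictionary sentence per entry..."]

# Word vectors: segment each corpus entry into words first.
word_sents = [list(jieba.cut(line)) for line in corpus]
word_model = Word2Vec(sentences=word_sents, vector_size=100, min_count=1)

# Character vectors: 1-gram split, i.e. one token per character.
char_sents = [list(line) for line in corpus]
char_model = Word2Vec(sentences=char_sents, vector_size=100, min_count=1)
```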
In a specific application, after the preset user information is obtained, the word vector of the word in the preset user information and the character vectors of its single characters can be further determined, so as to determine the key character in the preset user information. In some preferred embodiments of the present application, for the obtained preset user information (which may be a user name or a nickname), the surname character contained in the preset user information is removed first; then, the remaining characters after removing the surname character are taken together as a word, the given name; finally, the word vector of this given-name word is determined through the pre-trained word vectors and is taken as the word vector of the preset user information.
In some embodiments of the present application, the surname character contained in the preset user information can be determined according to the Hundred Family Surnames and then removed. In other embodiments of the present application, the surname character contained in the preset user information can be determined in other ways; for example, the first character of the preset user information can be assumed to be the surname character by default. The present application does not limit the specific technical means of determining the surname character.
After the given-name word contained in the preset user information has been determined, the character vector of each character in the given name is further determined through the pre-trained character vectors.
Next, the similarity between each character's character vector and the word vector is calculated. For example, distance functions such as cosine similarity or the Jaccard distance can be used to calculate the similarity between each character's character vector and the word vector, as the character's distinguishing-feature parameter. The key character in the preset user information is then determined according to the distribution of the distinguishing-feature parameters; for example, the character whose character vector yields the largest computed similarity is determined to be the key character in the preset user information. Generating the user avatar from one representative character in place of the whole piece of user information keeps the information's features while making the generated avatar clearer, more concise, and more visually distinguishable.
Take the obtained preset user information "Zhang Sanfeng" (张三疯) as an example: after the surname character "Zhang" (张) is removed, the word "Sanfeng" (三疯) is obtained as the given name. The word vector of the given-name word "Sanfeng" is then determined through the pre-trained word vectors as the word vector of the preset user information, and the character vectors of the characters "San" (三) and "Feng" (疯) are determined respectively. Finally, the similarity between the character vector of "San" and the word vector of "Sanfeng", and the similarity between the character vector of "Feng" and the word vector of "Sanfeng", are calculated, and the character with the highest similarity, "Feng", is selected as the key character in the preset user information. Removing the surname character first reduces the subsequent computation of word vectors, character vectors, and similarities, improving avatar-generation efficiency. On the other hand, a surname usually contributes little to a name's distinguishability, so removing it does not reduce the visual distinguishability of the generated avatar.
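A sketch of the vector-based selection, assuming the two gensim models above and that the given-name word is in the word model's vocabulary (out-of-vocabulary handling is omitted):

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def key_character_by_vectors(given_name: str, word_model, char_model) -> str:
    """Compare each character's vector against the whole given name's vector."""
    name_vec = word_model.wv[given_name]   # word vector, e.g. of "三疯"
    sims = {ch: cosine(char_model.wv[ch], name_vec) for ch in given_name}
    return max(sims, key=sims.get)         # e.g. "疯" for "三疯"
```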
In other embodiments of the present application, the step of determining, through a key-character recognition model, the distinguishing-feature parameter of each target character in the preset user information includes: determining the stressed character among the target characters contained in the preset user information. Correspondingly, the step of determining the key character in the preset user information according to the distribution of the distinguishing-feature parameters includes: taking the stressed character as the key character in the preset user information.
For example, a correspondence between names and stressed characters can be built in advance based on expert domain knowledge, with each entry recording the stressed character of the corresponding name. When determining the stressed character in the preset user information, the preset user information is compared against the names in this name-to-stressed-character correspondence; if a name matches, the stressed character recorded for that name is taken as the key character in the preset user information.
For preset user information that fails to match any name in the name-to-stressed-character correspondence (that is, the preset user information is not present in the correspondence), the stressed character in the preset user information can further be determined according to a stress-selection rule preset on the basis of expert experience, and the selected stressed character is taken as the key character in the preset user information. In some embodiments of the present application, the preset stress-selection rule can be: preferentially select, as the stressed character, the character whose tone comes first in the order falling (qu) tone, level (ping) tone, entering (ru) tone, upper (shang) tone; within the level tone, prefer the high-level (yinping) type; and among characters with the same tone, take the last one as the stressed character. Take the preset user information, the name "Guozhong" (国忠), as an example: "Guo" (国) is a level-tone character of the rising-level (yangping) type, while "Zhong" (忠) is a level-tone character of the high-level (yinping) type, so "Zhong" is preferentially selected as the stressed character, i.e., as the key character in the name "Guozhong". Take the name "Li Si" (李四) as another example: "Li" (李) is an upper-tone character and "Si" (四) is an entering-tone character, so "Si" is preferentially selected as the stressed character, i.e., as the key character in the name "Li Si". Take the name "Shengli" (胜利) as a further example: both "Sheng" (胜) and "Li" (利) are entering-tone characters, so the last entering-tone character, "Li", is preferentially selected as the stressed character, i.e., as the key character in the name "Shengli".
Taking the stressed character in the preset user information as the key character, based on tonal rules, matches users' perception of tonal patterns, involves little computation, and is easy to implement.
In still other embodiments of the present application, the stressed character in the preset user information can also be determined directly according to the expert-experience-based preset stress-selection rule, and the selected stressed character taken as the key character in the preset user information, without building the name-to-stressed-character correspondence in advance.
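A sketch of this fallback stress-selection rule; the tone lookup table and its label encoding are assumptions (building such a table, for example from a pronunciation dictionary, is outside the sketch):

```python
# Priority per the rule above: falling > level (high-level before rising-level)
# > entering > upper; smaller rank wins.
TONE_PRIORITY = {"falling": 0, "level_high": 1, "level_rising": 2,
                 "entering": 3, "upper": 4}

def stressed_character(name: str, tone_of: dict) -> str:
    """Pick the best-ranked tone; on a tie, the later character wins."""
    best = None
    for ch in name:                          # left to right, so a later
        rank = TONE_PRIORITY[tone_of[ch]]    # character replaces an equal rank
        if best is None or rank <= best[0]:
            best = (rank, ch)
    return best[1]

# "Shengli" (胜利): both entering tone, so the last one, "利", is selected.
print(stressed_character("胜利", {"胜": "entering", "利": "entering"}))
```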
In some embodiments of the present application, the key character among the target characters in the preset user information can be determined using only one of the above key-character recognition models; in other embodiments of the present application, it can also be determined using at least two of the above key-character recognition models. For example, the step of determining, through a key-character recognition model, the distinguishing-feature parameter of each target character in the preset user information includes: determining the distinguishing-feature parameters of the target characters in the preset user information through at least two key-character recognition models respectively. The step of determining the key character in the preset user information according to the distribution of the distinguishing-feature parameters includes: determining candidate key characters in the preset user information respectively from the distinguishing-feature parameters determined by the at least two key-character recognition models; performing a weighted voting operation on the candidate key characters through a preset weighted-fusion model to determine each candidate key character's vote count; and determining the candidate key character with the largest vote count as the key character in the preset user information. By setting up multiple key-character recognition models, having each of them identify a key character in the preset user information as a candidate, and then weighting and voting on the recognition results through the preset weighted-fusion model, the determined key character can better embody the distinctiveness of the preset user information.
The at least two key-character recognition models may include one or more of the models introduced in the embodiments of the present application, namely the mathematical model that determines character weights based on term frequency, the mathematical model that determines character similarity based on word vectors, and the rule-based model that determines the stressed character based on tonal patterns; they may also include prior-art machine-learning models.
When the at least two key-character recognition models include a machine-learning model, each corpus entry in a training dataset such as the one obtained in the foregoing steps is first annotated with its key character, and a key-character recognition model is then trained based on a neural network such as a long short-term memory (LSTM) network.
The embodiments of the present application place no limitation on the types or training methods of the at least two key-character recognition models. On the basis of the present disclosure, those skilled in the art can choose any model capable of determining the key character in a string of text.
The weighted-fusion model in the embodiment of the present application can be the following linear fusion model:

G(x) = Σ_t a_t · g_t(x)

where x is a candidate key character; g_t(x) is the weight of candidate key character x as a key character, computed by key-character recognition model t; a_t is the weight of key-character recognition model t, preset according to expert experience; and G(x) is the fused weight of candidate key character x as a key character. Finally, the candidate key character x with the largest G(x) is selected as the key character in the preset user information.
In some embodiments of the present application, if several candidate key characters tie for the largest G(x), one of them is taken at random as the key character in the preset user information.
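A sketch of this linear weighted-voting fusion, where each model is a callable returning per-candidate weights g_t(x) and the expert weights a_t are given; the interfaces are illustrative, with the random tie-break described above:

```python
import random

def fuse_key_character(info: str, models: list, expert_weights: list) -> str:
    """G(x) = sum_t a_t * g_t(x); return the candidate with the largest G(x)."""
    scores: dict = {}
    for model, a_t in zip(models, expert_weights):
        for cand, g in model(info).items():       # g_t(x) for each candidate x
            scores[cand] = scores.get(cand, 0.0) + a_t * g
    top = max(scores.values())
    winners = [c for c, s in scores.items() if s == top]
    return random.choice(winners)                 # random pick on a tie
```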
Step 140: generate the default avatar of the target user according to the key character.
After the key character contained in the target user's preset user information has been determined, a picture is further generated according to the determined key character, and the picture is used as the target user's default avatar. Taking the obtained preset user information "Zhang Sanfeng" (张三疯) as an example, if the character "Feng" (疯) was determined to be the key character in the preset user information in the foregoing steps, the default avatar generated for the target user according to the key character can be as shown in Fig. 3.
In the embodiment of the present application, the determined key character in the target user's preset user information is in the form of a character code; therefore, a picture first needs to be generated for the character code corresponding to the key character.
In some embodiments of the present application, a drawing board can be created, the glyph data of the character code can be obtained through a font-library interface provided by the system, and the picture corresponding to the character code can then be drawn on the drawing board according to the glyph data.
In other embodiments of the present application, a prior-art character-to-picture application interface can also be called: the character code is input, and the character picture corresponding to the character code is obtained.
Of course, those skilled in the art can also generate the character picture corresponding to the character code by other methods, which are not enumerated here. The present application does not limit the specific technical means of generating a picture according to the key character.
Afterwards, the picture generated according to the key character is used as the target user's default avatar.
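As one possible rendering of the drawing-board approach, a sketch using Pillow; the font path and colors are illustrative assumptions, and the patent leaves the concrete rendering method open:

```python
from PIL import Image, ImageDraw, ImageFont

def render_avatar(key_char: str, size: int = 200,
                  font_path: str = "NotoSansSC-Regular.otf") -> Image.Image:
    """Draw a single key character centered on a square canvas."""
    img = Image.new("RGB", (size, size), color=(66, 133, 244))
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype(font_path, int(size * 0.6))
    # anchor="mm" centers the glyph on the given point (Pillow >= 8.0)
    draw.text((size / 2, size / 2), key_char, font=font,
              fill="white", anchor="mm")
    return img

render_avatar("疯").save("default_avatar.png")
```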
In the user avatar generation method disclosed in the embodiments of the present application, preset user information of a target user is acquired, the target user being a user who has not set an avatar; the distinguishing-feature parameter of each target character in the preset user information is determined through a key-character recognition model; the key character in the preset user information is determined according to the distribution of the distinguishing-feature parameters; and the target user's default avatar is generated according to the key character, which helps improve the visual distinguishability of automatically generated user avatars. Preset user information such as the user name or nickname differs from user to user and embodies the user's personalized identity; therefore, generating the user avatar from a key character further extracted from the preset user information, one that embodies different users' personalized identities, enables the generated avatars to maximally reflect the differences between users while also improving their aesthetics.
On the one hand, generating the user avatar from the key character in the user's preset user information, rather than arbitrarily taking the first character of the user name or the initial of some character in it, makes the generated avatar highlight the most important feature, so its visual distinguishability and directness are stronger.
On the other hand, generating the user avatar from only the key character in the preset user information, rather than from the whole preset user information, makes the generated avatar more concise and focused on the most important feature, so its visual distinguishability is stronger.
Embodiment 2
This embodiment discloses a user avatar generation apparatus. As shown in Fig. 4, the apparatus includes:
a user information acquisition module 410, configured to acquire preset user information of a target user, the target user being a user who has not set an avatar;
a distinguishing-feature parameter determination module 420, configured to determine, through a key-character recognition model, the distinguishing-feature parameter of each target character in the preset user information;
a key character determination module 430, configured to determine the key character in the preset user information according to the distribution of the distinguishing-feature parameters;
an avatar generation module 440, configured to generate the target user's default avatar according to the key character.
In some embodiments of the present application, as shown in Fig. 5, the distinguishing-feature parameter determination module 420 further includes a first distinguishing-feature parameter determination submodule 4201, configured to determine, based on term-frequency analysis, the weight corresponding to each target character in the preset user information, where the weight indicates the visual distinguishability the corresponding target character contributes to the preset user information. Correspondingly, as shown in Fig. 5, the key character determination module 430 further includes a first key character determination submodule 4301, configured to determine the target character with the largest weight in the preset user information as the key character of the preset user information.
Further optionally, the step of determining, based on term-frequency analysis, the weight corresponding to each target character in the preset user information includes:
determining the target characters in the preset user information;
determining the term frequency of each target character within the preset user information, where a target character's term frequency is positively correlated with an exponential of the number of times the target character appears in the preset user information;
for each target character, determining the target character's weight according to the product of its term frequency within the preset user information and its predetermined inverse document frequency over a specified training dataset.
In some embodiments of the present application, as shown in Fig. 6, the distinguishing-feature parameter determination module 420 further includes a second distinguishing-feature parameter determination submodule 4202, configured to determine, based on vector techniques, the word vector of the preset user information and the character vector of each character in the preset user information, and to calculate the similarity between each character's character vector and the word vector. Correspondingly, as shown in Fig. 6, the key character determination module 430 further includes a second key character determination submodule 4302, configured to determine the character whose character vector has the largest similarity as the key character in the preset user information.
In other embodiments of the present application, as shown in Fig. 7, the distinguishing-feature parameter determination module 420 further includes a third distinguishing-feature parameter determination submodule 4203, configured to determine the stressed character among the target characters contained in the preset user information. Correspondingly, as shown in Fig. 7, the key character determination module 430 further includes a third key character determination submodule 4303, configured to take the stressed character as the key character in the preset user information.
In still other embodiments of the present application, as shown in Fig. 8, the distinguishing-feature parameter determination module 420 includes at least two distinguishing-feature parameter determination submodules, such as a fourth distinguishing-feature parameter determination submodule 4204 and a fifth distinguishing-feature parameter determination submodule 4205, each corresponding to a different key-character recognition model. The distinguishing-feature parameter determination module 420 is further configured to determine the distinguishing-feature parameters of the target characters in the preset user information through the at least two key-character recognition models respectively.
Correspondingly, the key character determination module 430 includes at least two key character determination submodules, such as a fourth key character determination submodule 4304 and a fifth key character determination submodule 4305, each corresponding to a different key-character recognition model. The key character determination module 430 is further configured to: determine candidate key characters in the preset user information respectively from the distinguishing-feature parameters determined by the at least two key-character recognition models; perform a weighted voting operation on the candidate key characters through a preset weighted-fusion model to determine each candidate key character's vote count; and determine the candidate key character with the largest vote count as the key character in the preset user information.
In some embodiments of the present application, the preset user information includes any one or more of: a user name, a nickname, and remarks.
The user avatar generation apparatus disclosed in the embodiment of the present application is used to implement each step of the user avatar generation method described in Embodiments 1 and 2 of the present application. For the specific implementation of each module of the apparatus, refer to the corresponding steps, which are not repeated here.
With the user avatar generation apparatus disclosed in the embodiment of the present application, preset user information of a target user is acquired, the target user being a user who has not set an avatar; the distinguishing-feature parameter of each target character in the preset user information is determined through a key-character recognition model; the key character in the preset user information is determined according to the distribution of the distinguishing-feature parameters; and the target user's default avatar is generated according to the key character, which helps improve the visual distinguishability of automatically generated user avatars. As with the method of Embodiment 1, generating the avatar from a key character extracted from personalized preset user information such as the user name or nickname enables the generated avatars to maximally reflect the differences between users and improves their aesthetics; compared with arbitrarily taking the first character, the generated avatar highlights the most important feature, and compared with using the whole preset user information, the generated avatar is more concise, so its visual distinguishability is stronger.
Correspondingly, the present application also discloses an electronic device, including a memory, a processor, and a computer program stored on the memory and runnable on the processor, where the processor, when executing the computer program, implements the user avatar generation method described in Embodiment 1 of the present application. The electronic device may be a PC, a mobile terminal, a personal digital assistant, a tablet computer, or the like.
The present application also discloses a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the user avatar generation method described in Embodiment 1 of the present application.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments can be referred to one another. As the apparatus embodiment is basically similar to the method embodiment, its description is relatively brief; for relevant details, refer to the description of the method embodiment.
A user avatar generation method and apparatus provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the above descriptions of the embodiments are only intended to help understand the method of the present application and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in specific implementations and application scope according to the idea of the present application. In summary, the contents of this specification should not be construed as limiting the present application.
Through the above description of the embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by means of software plus a necessary general hardware platform, and can, of course, also be implemented by hardware. Based on this understanding, the part of the above technical solutions that in essence contributes to the prior art can be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments or in certain parts of the embodiments.

Claims (10)

1. A user avatar generation method, characterized by comprising:
acquiring preset user information of a target user, the target user being a user who has not set an avatar;
determining, through a key-character recognition model, a distinguishing-feature parameter of each target character in the preset user information;
determining a key character in the preset user information according to the distribution of the distinguishing-feature parameters;
generating a default avatar of the target user according to the key character.
2. The method according to claim 1, characterized in that the step of determining, through a key-character recognition model, the distinguishing-feature parameter of each target character in the preset user information comprises:
determining, based on term-frequency analysis, a weight corresponding to each target character in the preset user information, wherein the weight indicates the visual distinguishability the corresponding target character contributes to the preset user information;
and the step of determining the key character in the preset user information according to the distribution of the distinguishing-feature parameters comprises:
determining the target character with the largest weight in the preset user information as the key character of the preset user information.
3. The method according to claim 2, characterized in that the step of determining, based on term-frequency analysis, the weight corresponding to each target character in the preset user information comprises:
determining the target characters in the preset user information;
determining the term frequency of each target character within the preset user information, wherein a target character's term frequency is positively correlated with an exponential of the number of times the target character appears in the preset user information;
for each target character, determining the target character's weight according to the product of the target character's term frequency within the preset user information and the target character's predetermined inverse document frequency over a specified training dataset.
4. The method according to claim 1, characterized in that the step of determining, through a key-character recognition model, the distinguishing-feature parameter of each target character in the preset user information comprises:
determining, based on vector techniques, the word vector of the preset user information and the character vector of each character in the preset user information;
calculating the similarity between each character's character vector and the word vector;
and the step of determining the key character in the preset user information according to the distribution of the distinguishing-feature parameters comprises:
determining the character whose character vector has the largest similarity as the key character in the preset user information.
5. The method according to claim 1, wherein the step of determining, by the key character identification model, the distinguishing characteristic parameter of each target character in the pre-set user information comprises:
determining the accented characters among the target characters included in the pre-set user information;
and wherein the step of determining the key character in the pre-set user information according to the distribution of the distinguishing characteristic parameters comprises:
using the accented characters as the key characters in the pre-set user information.
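One way to detect accented characters, sketched with Python's standard unicodedata module. Treating "decomposes into a base character plus combining marks under NFD normalization" as the definition of accented is an assumption; the claim leaves the detection method open.

```python
import unicodedata

def accented_characters(info: str) -> list:
    # A character counts as accented if NFD normalization splits it into
    # a base character followed by one or more combining marks.
    out = []
    for ch in info:
        decomposed = unicodedata.normalize("NFD", ch)
        if len(decomposed) > 1 and any(unicodedata.combining(c) for c in decomposed):
            out.append(ch)
    return out

print(accented_characters("José Müller"))  # -> ['é', 'ü']
```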
6. The method according to claim 1, wherein the step of determining, by the key character identification model, the distinguishing characteristic parameter of each target character in the pre-set user information comprises:
determining the distinguishing characteristic parameter of each target character in the pre-set user information by at least two key character identification models, respectively;
and wherein the step of determining the key character in the pre-set user information according to the distribution of the distinguishing characteristic parameters comprises:
determining candidate key characters in the pre-set user information according to the distinguishing characteristic parameters determined by the at least two key character identification models, respectively;
performing a weighted voting operation on each candidate key character by a pre-set weighted fusion model, and determining the number of votes of each candidate key character;
determining the candidate key character with the maximum number of votes as the key character in the pre-set user information.
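A minimal sketch of the weighted voting fusion of claim 6. The model names and fusion weights are hypothetical, and a flat additive vote is only one way to realize the "pre-set weighted fusion model".

```python
from collections import defaultdict

def fuse_by_weighted_vote(candidates: dict, model_weights: dict) -> str:
    # `candidates` maps each model to the key character it proposes;
    # `model_weights` holds the pre-set fusion weight of each model.
    votes = defaultdict(float)
    for model, ch in candidates.items():
        votes[ch] += model_weights.get(model, 1.0)
    return max(votes, key=votes.get)

candidates = {"tfidf": "x", "embedding": "m", "accent": "x"}
model_weights = {"tfidf": 0.5, "embedding": 0.3, "accent": 0.4}
print(fuse_by_weighted_vote(candidates, model_weights))  # -> "x" (0.9 votes)
```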
7. The method according to any one of claims 1 to 6, wherein the pre-set user information comprises any one or more of: a user name, a nickname, and a remark.
8. A user head portrait generation apparatus, comprising:
a user information acquisition module, configured to acquire pre-set user information of a target user, the target user being a user who has not set a head portrait;
a distinguishing characteristic parameter determination module, configured to determine, by a key character identification model, the distinguishing characteristic parameter of each target character in the pre-set user information;
a key character determination module, configured to determine the key character in the pre-set user information according to the distribution of the distinguishing characteristic parameters;
a head portrait generation module, configured to generate the default head portrait of the target user according to the key character.
9. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the user head portrait generation method according to any one of claims 1 to 7.
10. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the steps of the user head portrait generation method according to any one of claims 1 to 7.
CN201910365350.0A 2019-04-30 2019-04-30 User head portrait generation method and device, electronic equipment and storage medium Active CN110196956B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910365350.0A CN110196956B (en) 2019-04-30 2019-04-30 User head portrait generation method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110196956A 2019-09-03
CN110196956B CN110196956B (en) 2021-06-11

Family

ID=67752322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910365350.0A Active CN110196956B (en) 2019-04-30 2019-04-30 User head portrait generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110196956B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060065387A (en) * 2004-12-10 2006-06-14 엘지전자 주식회사 Method and device for mobile communication terminal having editing function of the character
CN103401761A (en) * 2013-07-24 2013-11-20 北京小米科技有限责任公司 Method and device for generating head portrait picture as well as server
CN104348968A (en) * 2013-08-09 2015-02-11 联想(北京)有限公司 Information processing method and electronic equipment
CN106097113A (en) * 2016-06-21 2016-11-09 仲兆满 A kind of social network user sound interest digging method
CN107154067A (en) * 2017-03-31 2017-09-12 北京奇艺世纪科技有限公司 A kind of head portrait generation method and device
CN107273353A (en) * 2017-06-08 2017-10-20 广东灵机文化传播有限公司 Name resolution method and system
CN107733722A (en) * 2017-11-16 2018-02-23 百度在线网络技术(北京)有限公司 Method and apparatus for configuring voice service
CN107910005A (en) * 2017-11-16 2018-04-13 海信集团有限公司 The target service localization method and device of interaction text
CN109416591A (en) * 2017-05-16 2019-03-01 苹果公司 Image data for enhanced user interaction

Also Published As

Publication number Publication date
CN110196956B (en) 2021-06-11

Similar Documents

Publication Publication Date Title
CN111061946B (en) Method, device, electronic equipment and storage medium for recommending scenerized content
CN106372059B (en) Data inputting method and device
CN107797984B (en) Intelligent interaction method, equipment and storage medium
US20170351663A1 (en) Iterative alternating neural attention for machine reading
CN106462608A (en) Knowledge source personalization to improve language models
CN108121800A (en) Information generating method and device based on artificial intelligence
WO2021135455A1 (en) Semantic recall method, apparatus, computer device, and storage medium
CN108268450B (en) Method and apparatus for generating information
CN112015928B (en) Information extraction method and device for multimedia resources, electronic equipment and storage medium
CN113516961B (en) Note generation method, related device, storage medium and program product
CN110597965A (en) Sentiment polarity analysis method and device of article, electronic equipment and storage medium
CN110728983A (en) Information display method, device, equipment and readable storage medium
CN110059172A (en) The method and apparatus of recommendation answer based on natural language understanding
CN113837576A (en) Method, computing device, and computer-readable storage medium for content recommendation
CN113573128A (en) Audio processing method, device, terminal and storage medium
CN116913278A (en) Voice processing method, device, equipment and storage medium
CN112052388A (en) Method and system for recommending gourmet stores
CN116776003A (en) Knowledge graph recommendation method, system and equipment based on comparison learning and collaborative signals
CN111310453A (en) User theme vectorization representation method and system based on deep learning
CN113010664B (en) Data processing method and device and computer equipment
CN110196956A (en) User's head portrait generation method, device, electronic equipment and storage medium
CN113505293A (en) Information pushing method and device, electronic equipment and storage medium
CN113254788A (en) Big data based recommendation method and system and readable storage medium
CN114970494A (en) Comment generation method and device, electronic equipment and storage medium
CN111782762A (en) Method and device for determining similar questions in question answering application and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant