WO2021004114A1 - Method, apparatus, computer device and storage medium for automatically generating emoticon packages - Google Patents

Method, apparatus, computer device and storage medium for automatically generating emoticon packages

Info

Publication number
WO2021004114A1
WO2021004114A1 (PCT/CN2020/085573, CN2020085573W)
Authority
WO
WIPO (PCT)
Prior art keywords
emoticon
facial features
facial
face image
package
Prior art date
Application number
PCT/CN2020/085573
Other languages
English (en)
French (fr)
Inventor
向纯玉
Original Assignee
深圳壹账通智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳壹账通智能科技有限公司
Publication of WO2021004114A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition

Definitions

  • This application relates to the field of micro-expression recognition in artificial intelligence, and in particular to a method, apparatus, computer device and storage medium for automatically generating emoticon packages.
  • The embodiments of this application provide a method, apparatus, computer device and storage medium for automatically generating emoticon packages.
  • The application is simple to operate, the generated personalized emoticon packages blend better and are more consistent, the user experience is improved, and user activity and engagement are also increased.
  • A method for automatically generating emoticon packages includes:
  • an emoticon picture is matched from the preset emoticon library, and the location of the facial-feature region of the matched emoticon picture is determined, where each emoticon picture has at least one facial-feature region and is associated with at least one expression tag;
  • An apparatus for automatically generating emoticon packages includes:
  • an acquisition module, used to acquire a face image;
  • an extraction module, used to extract a facial micro-expression from the face image and obtain the expression tag of the face image according to the facial micro-expression;
  • a matching module, configured to match an emoticon picture from a preset emoticon library according to the expression tag of the face image and determine the location of the facial-feature region of the matched emoticon picture, where every emoticon picture in the preset emoticon library has at least one facial-feature region and each emoticon picture is associated with at least one expression tag;
  • an overlay module, used to extract the facial features from the face image and overlay them onto the location of the facial-feature region of the emoticon picture matched from the preset emoticon library, generating a personalized emoticon package.
  • A computer device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor.
  • The processor implements the above method for automatically generating emoticon packages when executing the computer-readable instructions.
  • A computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the above method for automatically generating emoticon packages.
  • FIG. 1 is a schematic diagram of the application environment of the method for automatically generating emoticon packages in an embodiment of this application;
  • FIG. 2 is a flowchart of the method for automatically generating emoticon packages in an embodiment of this application;
  • FIG. 3 is a flowchart of step S20 of the method for automatically generating emoticon packages in an embodiment of this application;
  • FIG. 4 is a flowchart of step S30 of the method for automatically generating emoticon packages in an embodiment of this application;
  • FIG. 5 is a flowchart of step S40 of the method for automatically generating emoticon packages in an embodiment of this application;
  • FIG. 6 is a flowchart of step S407 of the method for automatically generating emoticon packages in an embodiment of this application;
  • FIG. 7 is a schematic block diagram of the apparatus for automatically generating emoticon packages in an embodiment of this application;
  • FIG. 8 is a schematic block diagram of the extraction module of the apparatus for automatically generating emoticon packages in an embodiment of this application;
  • FIG. 9 is a schematic diagram of a computer device in an embodiment of this application.
  • The method for automatically generating emoticon packages provided by this application can be applied in the application environment shown in FIG. 1, where a client (computer device) communicates with a server over a network.
  • The client includes, but is not limited to, personal computers, notebook computers, smartphones, tablet computers, cameras and portable wearable devices.
  • The server can be implemented as a standalone server or as a server cluster composed of multiple servers.
  • A method for automatically generating emoticon packages is provided. Taking the method applied to the server in FIG. 1 as an example, the method includes the following steps S10-S40:
  • The face image is an image containing some or all of the facial features of a human face.
  • The face image can be captured by the user with a camera device and uploaded to the server, or stored in a database in advance, from which the server can retrieve it whenever needed.
  • Step S20, i.e., extracting a facial micro-expression from the face image and obtaining the expression tag of the face image according to the facial micro-expression, includes:
  • The action unit types may include, but are not limited to, the internationally used action units (AUs) in Table 1 below as well as eye movements.
  • The eye movements are the different motions and gaze directions of the eyeball, such as looking left, right, up, down or to the upper right, and the action units corresponding to the different eye motions and gaze directions may also include judging the magnitude of the eye movement.
  • The database stores in advance the action unit types corresponding to each micro-expression type (such as crying, laughing or anger), and each micro-expression type corresponds to several combinations of action unit types. For example,
  • if the micro-expression type is "laugh", this type corresponds to at least the following combinations of action unit types: mouth corners raised (AU12 in Table 1); mouth corners raised (AU12) + outer brow raised (AU2 in Table 1); mouth corners raised (AU12) + lips stretched (AU20 in Table 1) + lips parted (AU25 in Table 1); and so on. Therefore, it is only necessary to compare all the action unit types extracted in step S201
  • with the action unit types corresponding to each micro-expression type stored in the database to confirm the micro-expression type.
  • As long as all the action unit types extracted in step S201 include all the action unit types corresponding to one micro-expression type stored in the database (that is, the action unit types extracted in step S201 may also include other action units), the micro-expression can be regarded as being of that type.
  • Alternatively, the micro-expression may be regarded as being of a given type only when all the action unit types extracted in step S201 correspond one-to-one with the action unit types and sequence of that micro-expression type stored in the database (with not a single action unit more or less).
  • S203: Acquire all the expression tags associated with the micro-expression type, and at the same time acquire the feature action unit(s) associated with each expression tag;
  • The database stores in advance the expression tags associated with each micro-expression type, and each micro-expression type corresponds to multiple expression tags.
  • For the micro-expression type "laugh", for example, the associated expression tags may include: big laugh, smile, smirk, wry smile, silly grin, etc. Understandably, each expression tag associated with a micro-expression type has at least one corresponding feature action unit.
  • Only when all the action unit types extracted from the face image in step S201 include all the feature action units associated with one expression tag (that is,
  • the action unit types extracted in step S201 may also include action units other than the feature action units corresponding to that tag) can that tag be regarded as the expression tag of the face image.
  • The micro-expression type is confirmed first from the action unit types extracted from the face image (the number of micro-expression types is far smaller than the number of expression tags), and only then are the extracted action unit types matched against the feature action units of the expression tags associated with that micro-expression type.
  • The action unit types extracted from the face image therefore need not be compared with all the expression tags, but only with the feature action units of the tags corresponding to a few micro-expression types.
  • When the number of expression tags is huge, this greatly reduces the amount of computation and lightens the server load.
  • Alternatively, after all the action unit types of the facial micro-expression have been extracted from the face image in step S201, all the expression tags and their associated feature action units can be acquired directly, and
  • a tag whose feature action units are all included in the extracted action unit types is recorded as the expression tag of the face image.
  • According to the expression tag of the face image acquired in step S20, it is first necessary to obtain the emoticon pictures associated with that tag in the preset emoticon library (one emoticon picture may correspond to one or more expression tags). The number of emoticon pictures obtained for the tag may be greater than one; in that case one of them is selected as required, and the selected emoticon picture is recorded as the emoticon picture matched from the preset emoticon library.
  • Step S30, i.e., matching an emoticon picture from a preset emoticon library according to the expression tag of the face image and determining the
  • location of the facial-feature region of the matched emoticon picture, includes:
  • The face contour is the edge contour of the face in the face image.
  • S302: Select, from the preset emoticon library, all emoticon pictures whose expression tag is the same as that of the face image;
  • S303: Determine the location of the facial-feature region of each selected emoticon picture and the contour of that region; the location of the facial-feature region refers to the position of the facial features of every subject with facial features in an emoticon picture,
  • such as a person, an animal or a cartoon character,
  • and the region contour refers to the contour of the corresponding position of those features (such as the face contour of a human face).
  • This may mean selecting, from all the emoticon pictures in the preset emoticon library corresponding to the expression tag of the face image, the emoticon picture whose facial contour is most similar to the face contour of the face image.
  • The emoticon picture with the highest contour similarity is recorded as the emoticon picture matched from the preset emoticon library, which makes it convenient to later fit the facial features extracted from the face image onto the facial-feature region to the greatest possible extent, replacing the image at the corresponding position of the facial features in the emoticon picture and generating a new personalized emoticon package.
  • Matching an emoticon picture from a preset emoticon library according to the expression tag of the face image and determining the location of the facial-feature region of the matched emoticon picture includes:
  • All the emoticon pictures with the same expression tag as the face image are selected from the preset emoticon library; the emoticon pictures in the preset emoticon library may be emoticon packages legally obtained from a third party that specializes in producing
  • them, and every emoticon picture in the preset emoticon library must have a facial-feature region (that is, a face, which may be partial or complete).
  • The emoticon picture uniquely matching the face image is determined from all the selected emoticon pictures according to a screening rule; understandably, the screening rule may be random selection or selection based on usage frequency. For example, the emoticon picture the user personally uses most often may be selected, that is, the more frequently the user uses an emoticon picture, the greater its probability of being selected.
  • The screening rule may also first count the total number of times all users have used each of the emoticon pictures corresponding to the tag in the preset emoticon library, and select the emoticon picture with the highest total usage count.
  • The screening rule may further convert the total usage count into a popularity level through a preset conversion rule (a conversion rule associates total usage counts in different ranges with different popularity levels, and a given total usage count can correspond to only one popularity level); the higher the popularity, the greater the probability of being selected.
  • The emoticon picture determined according to the screening rule is recorded as the emoticon picture uniquely matching the face image, and at the same time the location of its facial-feature region is extracted from the uniquely matching emoticon picture.
  • The emoticon picture uniquely matching the face image can thus be confirmed according to the screening rule, and the screening rule can be based on, for example, the number of times the user has used each picture; in this way, the selection reflects the user's preferences.
  • With the template for the personalized emoticon package chosen in this way, the generated personalized emoticon package better matches the user's usage habits and brings a better user experience.
  • After the matched emoticon picture has been selected, the location of the facial-feature region in the emoticon picture needs to be determined, and the facial features extracted from the face image then replace the original image content within the region contour (the contour of the corresponding position of the facial features); the emoticon picture carrying the facial features of the face image
  • is then composited into a new personalized emoticon package (for example, if the subject of the emoticon picture is a cute kitten, the cat's face is replaced with the facial features of the face image, and the generated personalized
  • emoticon package is a cute kitten bearing the facial features of the face image).
  • Extracting the facial features from the face image and overlaying them onto the location of the facial-feature region of the emoticon picture matched from the preset emoticon library includes:
  • The region contour of the facial-feature region refers to the edge contour of the area the facial features occupy in the emoticon picture;
  • the overall placement angle of the facial-feature region refers to the magnitude of the inclination of the facial features and whether they are upright or inverted.
  • The overall placement angle can be determined with one or more of the features as reference; for example, the angle between the horizontal and the straight line connecting the two opposite corners of an eye determines the inclination, and whether the nose or mouth lies below the eyes (similarly, whether the mouth lies below the nose, etc.) determines upright or inverted (nose or mouth below the eyes means upright, otherwise inverted);
  • the contour area of the facial-feature region refers to the total area of the region contour.
  • S402: Extract all the facial features lying within the face contour in the face image, and determine the positional relationships between the center points of the facial features and the straight-line distances between those center points;
  • the facial features include, but are not limited to, ears, eyebrows, eyes, nose and mouth.
  • The positional relationships between the center points of the facial features refer to the distances, relative orientations, etc., between those center points.
  • S403: Create a new canvas whose canvas contour coincides with the region contour of the facial-feature region, and preprocess the facial features according to a preset image-processing method; that is, the canvas contour and
  • the region contour of the facial-feature region can overlap completely.
  • The preset image-processing methods include, but are not limited to, adjusting the transparency of and color-toning the facial features, so that the generated personalized emoticon package looks more natural and attractive.
  • The center of all the facial features and the center of the canvas contour can be used as alignment points for placing all the facial features into the canvas contour; and when the whole formed by all the facial features
  • is not consistent with the overall placement angle, it must be adjusted to that angle before the features are placed into the canvas contour.
  • The common ratio is selected subject to the condition that "after adjustment, the ratio between the area of the figure enclosed by the outermost facial features and the contour area of the facial-feature region falls within the preset ratio range" (candidate ratios can be ranked by priority in advance and stored in the database, and the server automatically selects from the database a common ratio meeting this condition); if several common ratios put the value within the preset range, one can be chosen at random or in priority order.
  • S406: Overlay the canvas containing the facial features onto the region contour of the facial-feature region of the emoticon picture matched from the preset emoticon library; that is, because the canvas contour and the region contour of the facial-feature region can completely overlap, in this step the canvas directly replaces the original image content within the region contour.
  • S407: Perform image-synthesis processing on the emoticon picture covered with the canvas to generate the personalized emoticon package.
  • The image-synthesis processing includes, but is not limited to, merging the emoticon picture covered with the facial features into a single picture and applying uniform exposure and color-toning to make it look more natural.
  • Generating the personalized emoticon package in step S407 includes:
  • S4071: Receive a text-adding instruction, and obtain the emoticon text entered by the user and the text box number selected by the user; the text-adding instruction refers to the case where, after the image synthesis in step S407, the user also wants to add text to the personalized
  • emoticon package. The emoticon text is the text the user wants to place on the personalized emoticon package;
  • the text box number is the unique identifier of a text box that can be added to the personalized emoticon package, and each text box number corresponds to one text box style.
  • Each text box number has a text box size into which emoticon text can be filled, and each text box corresponds to a default text format; if the user does not modify the default text format, the emoticon text is filled into the text box in that default format.
  • S4073: Obtain the number of characters of the emoticon text, and adjust the character size in the default text format according to the number of characters and the text box size; that is, the character size is adjusted automatically based on the number of characters of the emoticon text (i.e., the character length); understandably, text-format items other than the character size in the default text format can also be adjusted as required.
  • S4074: Generate a text box corresponding to the text box number at a preset position in the emoticon picture or at a position selected by the user, and fill the emoticon text into the text box according to the adjusted default text format; that is, after the user adjusts the default text format, the emoticon text is filled into the text box in the adjusted format.
  • S4075: After assembling the emoticon picture and the text box, generate the personalized emoticon package.
  • The assembly merges the text box and the image-synthesized emoticon picture into one personalized emoticon package.
  • The above embodiment also supports user-defined emoticon text: the character size and similar attributes are adjusted automatically by judging the number of characters of the emoticon text (i.e., the character length), and the text box filled with the emoticon characters
  • and the emoticon picture are automatically assembled into a personalized emoticon package. Understandably, prop effects can also be added to the personalized emoticon package, such as props with heart, hat or star effects.
  • An apparatus for automatically generating emoticon packages is provided, corresponding one-to-one with the method for automatically generating emoticon packages in the above embodiments.
  • The apparatus for automatically generating emoticon packages includes:
  • the acquisition module 11, used to acquire a face image;
  • the extraction module 12, configured to extract a facial micro-expression from the face image and obtain the expression tag of the face image according to the facial micro-expression;
  • the matching module 13, configured to match an emoticon picture from a preset emoticon library according to the expression tag of the face image and determine the location of the facial-feature region of the matched emoticon picture, where every emoticon picture in the preset emoticon library has at least one facial-feature region and each emoticon picture is associated with at least one expression tag;
  • the overlay module 14, used to extract the facial features from the face image and overlay them onto the location of the facial-feature region of the emoticon picture matched from the preset emoticon library, generating a personalized emoticon package.
  • The extraction module 12 includes:
  • the extraction unit 121, configured to extract all the action unit types of the facial micro-expression from the face image;
  • the confirmation unit 122, configured to confirm the micro-expression type of the face image based on all the action unit types extracted from the face image;
  • the acquisition unit 123, configured to acquire all the expression tags associated with the micro-expression type and, at the same time, the feature action unit(s) associated with each expression tag;
  • the matching unit 124, configured to match all the action unit types extracted from the face image against the feature action units associated with each expression tag, and, when the action unit types extracted from the face image
  • include all the feature action units associated with one expression tag,
  • to record that expression tag as the expression tag of the face image.
  • Each module in the above apparatus for automatically generating emoticon packages can be implemented in whole or in part by software, hardware, or a combination thereof.
  • The above modules can be embedded in, or independent of, the processor of the computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
  • A computer device is provided.
  • The computer device may be a server, and its internal structure may be as shown in FIG. 9.
  • The computer device includes a processor, a memory, a network interface and a database connected through a system bus.
  • The processor of the computer device provides computing and control capability.
  • The memory of the computer device includes a non-volatile storage medium and internal memory.
  • The non-volatile storage medium stores an operating system, computer-readable instructions, and a database.
  • The internal memory provides an environment for running the operating system and the computer-readable instructions in the non-volatile storage medium.
  • A computer device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, where the processor implements at least the following steps when executing the computer-readable instructions:
  • an emoticon picture is matched from the preset emoticon library, and the location of the facial-feature region of the matched emoticon picture is determined, where each emoticon picture has at least one facial-feature region and is associated with at least one expression tag;
  • A computer-readable storage medium is provided.
  • The computer-readable storage medium is a volatile or a non-volatile storage medium storing computer-readable instructions which, when executed by a processor, implement at least the following steps:
  • an emoticon picture is matched from the preset emoticon library, and the location of the facial-feature region of the matched emoticon picture is determined, where each emoticon picture has at least one facial-feature region and is associated with at least one expression tag;
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double-data-rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method, apparatus, computer device and storage medium for automatically generating emoticon packages. The method includes: extracting a facial micro-expression from a face image and obtaining an expression tag of the face image according to the facial micro-expression (S20); matching an emoticon picture from a preset emoticon library according to the expression tag of the face image, and determining the location of the facial-feature region of the matched emoticon picture (S30); extracting the facial features from the face image and overlaying them onto the location of the facial-feature region of the emoticon picture matched from the preset emoticon library, generating a personalized emoticon package (S40). The method is convenient to operate, and because the face image in the generated personalized emoticon package matches the expression of the original emoticon picture, the facial features of the face image blend into the original emoticon picture with better effect and consistency, improving the user experience and increasing user activity and engagement.

Description

Method, apparatus, computer device and storage medium for automatically generating emoticon packages
This application claims priority to Chinese patent application No. 201910602401.7, filed with the Chinese Patent Office on July 5, 2019 and entitled "Method, apparatus, computer device and storage medium for automatically generating emoticon packages", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of micro-expression recognition in artificial intelligence, and in particular to a method, apparatus, computer device and storage medium for automatically generating emoticon packages.
Background
With the development of communication technology, mobile phones are used ever more widely, greatly expanding people's social reach. With this expanded social reach, users increasingly communicate through mobile instant-messaging software. To support exchange and communication between users, many social applications provide a chat function through which users can converse or send each other a wide variety of emoticon packages to express emotions that are hard to put into words.
In practice, most of the emoticon packages users send are obtained from third parties that specialize in producing them: the third party generates emoticon packages from the material it collects and publishes them online, and users pick the packages that interest them from what the third party provides. The inventor realized, however, that in this situation users passively accept or passively choose emoticon packages, and it may often happen that they cannot achieve the effect they actually want.
Summary
The embodiments of this application provide a method, apparatus, computer device and storage medium for automatically generating emoticon packages. The application is simple to operate, the generated personalized emoticon packages blend better and are more consistent, the user experience is improved, and user activity and engagement are increased.
A method for automatically generating emoticon packages includes:
acquiring a face image;
extracting a facial micro-expression from the face image, and obtaining an expression tag of the face image according to the facial micro-expression;
matching an emoticon picture from a preset emoticon library according to the expression tag of the face image, and determining the location of the facial-feature region of the matched emoticon picture, where every emoticon picture in the preset emoticon library has at least one facial-feature region, and each emoticon picture is associated with at least one expression tag;
extracting facial features from the face image, and overlaying the facial features onto the location of the facial-feature region of the emoticon picture matched from the preset emoticon library, to generate a personalized emoticon package.
An apparatus for automatically generating emoticon packages includes:
an acquisition module, configured to acquire a face image;
an extraction module, configured to extract a facial micro-expression from the face image and obtain an expression tag of the face image according to the facial micro-expression;
a matching module, configured to match an emoticon picture from a preset emoticon library according to the expression tag of the face image and determine the location of the facial-feature region of the matched emoticon picture, where every emoticon picture in the preset emoticon library has at least one facial-feature region, and each emoticon picture is associated with at least one expression tag;
an overlay module, configured to extract facial features from the face image and overlay the facial features onto the location of the facial-feature region of the emoticon picture matched from the preset emoticon library, to generate a personalized emoticon package.
A computer device, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, where the processor implements the above method for automatically generating emoticon packages when executing the computer-readable instructions.
A computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the above method for automatically generating emoticon packages.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the application environment of the method for automatically generating emoticon packages in an embodiment of this application;
FIG. 2 is a flowchart of the method for automatically generating emoticon packages in an embodiment of this application;
FIG. 3 is a flowchart of step S20 of the method for automatically generating emoticon packages in an embodiment of this application;
FIG. 4 is a flowchart of step S30 of the method for automatically generating emoticon packages in an embodiment of this application;
FIG. 5 is a flowchart of step S40 of the method for automatically generating emoticon packages in an embodiment of this application;
FIG. 6 is a flowchart of step S407 of the method for automatically generating emoticon packages in an embodiment of this application;
FIG. 7 is a schematic block diagram of the apparatus for automatically generating emoticon packages in an embodiment of this application;
FIG. 8 is a schematic block diagram of the extraction module of the apparatus for automatically generating emoticon packages in an embodiment of this application;
FIG. 9 is a schematic diagram of a computer device in an embodiment of this application.
Detailed Description
The method for automatically generating emoticon packages provided by this application can be applied in the application environment shown in FIG. 1, where a client (computer device) communicates with a server over a network. The client (computer device) includes, but is not limited to, personal computers, notebook computers, smartphones, tablet computers, cameras and portable wearable devices. The server can be implemented as a standalone server or as a server cluster composed of multiple servers.
In one embodiment, as shown in FIG. 2, a method for automatically generating emoticon packages is provided. Taking the application of the method to the server in FIG. 1 as an example, the method includes the following steps S10-S40:
S10: Acquire a face image.
The face image is an image containing some or all of the facial features of a human face. It can be captured by the user with a camera device and uploaded to the server, or stored in a database in advance, from which the server can retrieve it whenever needed.
S20: Extract a facial micro-expression from the face image, and obtain an expression tag of the face image according to the facial micro-expression.
That is, in this embodiment, the facial micro-expression in the face image is extracted first, the expression tag corresponding to that micro-expression is determined, and this expression tag is taken as the expression tag of the face image.
In one embodiment, as shown in FIG. 3, step S20, i.e., extracting a facial micro-expression from the face image and obtaining an expression tag of the face image according to the facial micro-expression, includes:
S201: Extract all action unit types of the facial micro-expression from the face image.
The action unit types may include, but are not limited to, the internationally used action units (AUs) in Table 1 below as well as eye movements. The eye movements are the different motions and gaze directions of the eyeball, such as looking left, right, up, down or to the upper right, and the action units corresponding to the different eye motions and gaze directions may also include judging the magnitude of the eye movement.
S202: Confirm the micro-expression type of the face image according to all the action unit types extracted from the face image.
That is, the database stores in advance the action unit types corresponding to each micro-expression type (for example, crying, laughing or anger), and each micro-expression type corresponds to several combinations of action unit types. For example, for the micro-expression type "laugh", the combinations include at least: mouth corners raised (AU12 in Table 1); mouth corners raised (AU12) + outer brow raised (AU2 in Table 1); mouth corners raised (AU12) + lips stretched (AU20 in Table 1) + lips parted (AU25 in Table 1); and so on. Therefore, the micro-expression type can be confirmed simply by comparing all the action unit types extracted in step S201 with the action unit types stored in the database for each micro-expression type. Understandably, in one aspect of this embodiment, as long as all the action unit types extracted in step S201 include all the action unit types corresponding to one micro-expression type stored in the database (that is, the action unit types extracted in step S201 may also include other action units), the micro-expression can be regarded as being of that type. In another aspect of this embodiment, the micro-expression may instead be regarded as being of a given type only when all the action unit types extracted in step S201 correspond one-to-one with the action unit types and sequence of that micro-expression type stored in the database (with not a single action unit more or less).
Table 1: Selected AUs

AU label   AU description
AU1        Inner brow raised
AU2        Outer brow raised
AU4        Brows lowered
AU5        Upper eyelid raised
AU6        Cheek raised
AU7        Eyelids tightened
AU9        Nose wrinkled
AU10       Upper lip raised
AU12       Mouth corners raised
AU14       Mouth corners tightened
AU15       Mouth corners pulled down
AU16       Lower lip depressed
AU17       Chin tightened
AU18       Lips puckered
AU20       Lips stretched
AU23       Lips contracted
AU24       Lips pressed together
AU25       Lips parted
AU26       Jaw lowered
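To make the comparison in step S202 concrete, the following is a minimal Python sketch; the stored AU combinations are illustrative assumptions based on the "laugh" example above, not actual database contents.
```python
# Minimal sketch of the action-unit comparison in step S202. The stored
# combinations are illustrative assumptions, not an actual database.
AU_COMBOS = {
    "laugh": [{"AU12"}, {"AU12", "AU2"}, {"AU12", "AU20", "AU25"}],
    "cry": [{"AU1", "AU4", "AU15"}],
}

def match_micro_expression(extracted_aus, strict=False):
    """Return the micro-expression type whose stored AU combination matches
    the action unit types extracted in step S201.

    strict=False: a stored combination only needs to be contained in the
    extracted set (extra action units are allowed), as in the first variant.
    strict=True: the extracted set must coincide exactly with a stored
    combination, as in the second variant.
    """
    for expr_type, combos in AU_COMBOS.items():
        for combo in combos:
            matched = (extracted_aus == combo) if strict else combo <= extracted_aus
            if matched:
                return expr_type
    return None

print(match_micro_expression({"AU12", "AU2", "AU6"}))  # -> "laugh"
```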
S203: Acquire all the expression tags associated with the micro-expression type, and at the same time acquire the feature action unit(s) associated with each expression tag.
That is, the database stores in advance the expression tags associated with each micro-expression type, and each micro-expression type corresponds to multiple expression tags. For example, for the micro-expression type "laugh", the associated expression tags may include: big laugh, smile, smirk, wry smile, silly grin, and so on. Understandably, each expression tag associated with a micro-expression type has at least one corresponding feature action unit.
S204: Match all the action unit types extracted from the face image against the feature action units associated with each expression tag; when the action unit types extracted from the face image include all the feature action units associated with one expression tag, record that expression tag as the expression tag of the face image.
In this embodiment, only when all the action unit types extracted from the face image in step S201 include all the feature action units associated with an expression tag (the extracted action unit types may also include action units other than the feature action units corresponding to that tag) is that tag regarded as the expression tag of the face image. In the above embodiment, the micro-expression type is confirmed first from the action unit types extracted from the face image (the number of micro-expression types is far smaller than the number of expression tags), and only then are the extracted action unit types matched against the feature action units of the expression tags associated with that micro-expression type. In this way the extracted action unit types need not be compared with all the expression tags, but only with the feature action units of the tags corresponding to a few micro-expression types; when the number of expression tags is huge, this greatly reduces the amount of computation and lightens the server load.
Understandably, in one embodiment, after all the action unit types of the facial micro-expression have been extracted from the face image in step S201, all the expression tags and the feature action unit(s) associated with each of them can also be acquired directly, and the process then proceeds to step S204, where the extracted action unit types are matched against the feature action units of each expression tag, and a tag is recorded as the expression tag of the face image when the extracted action unit types include all of that tag's feature action units. In this embodiment there is no need to first confirm the micro-expression type; the action unit types extracted from the face image are matched directly against the feature action units of the expression tags, which simplifies the comparison. This embodiment can be preferred when the number of expression tags is relatively small.
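A compact sketch of the two-stage tag lookup in steps S202-S204 follows, continuing the example above; the tag tables and their feature action units are again illustrative assumptions.
```python
# Illustrative tables: expression tags per micro-expression type, and the
# feature action units assumed for each tag.
TAGS_BY_TYPE = {"laugh": ["big laugh", "smile", "smirk"]}
FEATURE_AUS = {
    "big laugh": {"AU12", "AU25"},
    "smile": {"AU12"},
    "smirk": {"AU12", "AU14"},
}

def match_expression_tag(extracted_aus, micro_type):
    # Only the tags of the confirmed micro-expression type are checked, so
    # the extracted AUs are never compared against the whole tag inventory.
    for tag in TAGS_BY_TYPE[micro_type]:
        if FEATURE_AUS[tag] <= extracted_aus:  # all feature AUs present
            return tag
    return None

print(match_expression_tag({"AU12", "AU25", "AU6"}, "laugh"))  # -> "big laugh"
```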
S30: Match an emoticon picture from a preset emoticon library according to the expression tag of the face image, and determine the location of the facial-feature region of the matched emoticon picture, where every emoticon picture in the preset emoticon library has at least one facial-feature region, and each emoticon picture is associated with at least one expression tag.
That is, in this embodiment, the emoticon pictures associated with the expression tag of the face image acquired in step S20 are first retrieved from the preset emoticon library (one emoticon picture may correspond to one or more expression tags). The number of emoticon pictures obtained for the tag may be greater than one; in that case one of them is selected as required, and the selected emoticon picture is recorded as the emoticon picture matched from the preset emoticon library. After the single emoticon picture matching the expression tag of the face image has been selected from the preset emoticon library, the location of the facial-feature region in that picture needs to be determined, so that the facial features extracted from the face image can be overlaid onto that region, replacing the image at the corresponding position in the emoticon picture and generating a new personalized emoticon package.
In one embodiment, as shown in FIG. 4, step S30, i.e., matching an emoticon picture from a preset emoticon library according to the expression tag of the face image and determining the location of the facial-feature region of the matched emoticon picture, includes:
S301: Acquire the face contour extracted from the face image. In this embodiment, the face contour is the edge contour of the face in the face image.
S302: Select, from the preset emoticon library, all emoticon pictures whose expression tag is the same as that of the face image.
S303: Determine the location of the facial-feature region of each selected emoticon picture and the contour of that region. The location of the facial-feature region refers to the position of the facial features of every subject with facial features in an emoticon picture (such as a person, an animal or a cartoon character), and the region contour refers to the contour of the corresponding position of those features (such as the face contour of a human face).
S304: Obtain the similarity between the face contour and the region contour. The similarity can be compared on similarity parameters such as the areas the two contours occupy and the curvature variation of their contour lines. Different weights can be assigned to the individual similarity parameters; after each similarity parameter is normalized and multiplied by its corresponding weight, the sum of the products of the normalized parameters and their weights serves as the measure of similarity: the larger the sum of products, the higher the similarity.
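The weighted, normalized score described for step S304 could be sketched as follows; the parameter names, value ranges and weights are assumptions chosen for illustration.
```python
# Sketch of the similarity score in step S304: each similarity parameter is
# min-max normalized, multiplied by its weight, and the products are summed.
# Parameter names, value ranges and weights are illustrative assumptions.
RANGES = {"area_overlap": (0.0, 1.0), "curvature_match": (0.0, 10.0)}
WEIGHTS = {"area_overlap": 0.6, "curvature_match": 0.4}

def contour_similarity(params):
    score = 0.0
    for name, raw in params.items():
        lo, hi = RANGES[name]
        normalized = (raw - lo) / (hi - lo)  # normalize to [0, 1]
        score += WEIGHTS[name] * normalized  # weight and accumulate
    return score                             # larger sum = more similar

print(contour_similarity({"area_overlap": 0.8, "curvature_match": 7.0}))  # ~0.76
```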
S305: Record the emoticon picture corresponding to the region contour with the highest similarity as the emoticon picture uniquely matching the face image, and at the same time acquire the location of the facial-feature region of that uniquely matching emoticon picture.
That is, in this embodiment, from all the emoticon pictures in the preset emoticon library corresponding to the expression tag of the face image, the emoticon picture whose facial contour is most similar to the face contour of the face image is selected and recorded as the emoticon picture matched from the preset emoticon library. This makes it convenient, later on, to fit the facial features extracted from the face image onto the facial-feature region to the greatest possible extent, replacing the image at the corresponding position in the emoticon picture and generating a new personalized emoticon package.
In one embodiment, matching an emoticon picture from a preset emoticon library according to the expression tag of the face image and determining the location of the facial-feature region of the matched emoticon picture includes:
selecting, from the preset emoticon library, all emoticon pictures whose expression tag is the same as that of the face image, where the emoticon pictures in the preset emoticon library may be emoticon packages legally obtained from a third party that specializes in producing them, and every emoticon picture in the library must have a facial-feature region (that is, a face, which may be partial or complete);
determining, according to a preset screening rule, the emoticon picture uniquely matching the face image from all the selected emoticon pictures. Understandably, the screening rule may be random selection or selection by usage frequency: for example, the emoticon picture the user personally uses most often may be selected, i.e., the more frequently the user uses an emoticon picture, the greater its probability of being selected. Likewise, the screening rule may first count the total number of times all users have used each of the emoticon pictures corresponding to the tag in the preset emoticon library, and select the picture with the highest total usage count. Further, the screening rule may convert the total usage count into a popularity level through a preset conversion rule (a conversion rule associates total usage counts in different ranges with different popularity levels, and a given total usage count can correspond to only one popularity level); similarly, the higher the popularity, the greater the probability of being selected;
recording the emoticon picture determined according to the screening rule as the emoticon picture uniquely matching the face image, and at the same time extracting the location of its facial-feature region from the uniquely matching emoticon picture.
In this embodiment, the emoticon picture uniquely matching the face image can be confirmed according to the screening rule, and the screening rule can be based on, for example, the user's usage counts. The template of the personalized emoticon package to be generated can thus be chosen according to the user's preferences, so that the generated package better matches the user's usage habits and brings a better user experience.
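As one possible reading of the popularity-based screening rule, the conversion could be sketched as below; the count bands and levels are assumptions for illustration — the text only requires that each total usage count map to exactly one popularity level.
```python
# Sketch of the conversion rule: bands of total usage counts map to
# popularity levels; every count falls into exactly one band.
POPULARITY_BANDS = [
    (0, 1_000, 1),                   # (low inclusive, high exclusive, level)
    (1_000, 100_000, 2),
    (100_000, float("inf"), 3),
]

def popularity(total_uses):
    for low, high, level in POPULARITY_BANDS:
        if low <= total_uses < high:
            return level

def pick_by_popularity(pictures):
    """pictures: list of (picture_id, total_uses); return the id with the
    highest popularity level, breaking ties by raw usage count."""
    return max(pictures, key=lambda p: (popularity(p[1]), p[1]))[0]

print(pick_by_popularity([("cat", 120_000), ("dog", 95_000)]))  # -> "cat"
```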
S40: Extract the facial features from the face image, and overlay the facial features onto the location of the facial-feature region of the emoticon picture matched from the preset emoticon library, to generate a personalized emoticon package.
In this embodiment, after the single emoticon picture matching the expression tag of the face image has been selected from the preset emoticon library, the location of the facial-feature region in that picture is determined, the facial features extracted from the face image replace the original image content within the region contour (the contour of the corresponding position of the facial features), and the emoticon picture carrying the facial features of the face image is then composited into a new personalized emoticon package (for example, if the subject of the emoticon picture is a cute kitten, the cat's face is replaced with the facial features of the face image, and the generated personalized emoticon package is a cute kitten bearing the facial features of the face image).
In one embodiment, as shown in FIG. 5, in step S40, extracting the facial features from the face image and overlaying them onto the location of the facial-feature region of the emoticon picture matched from the preset emoticon library includes:
S401: Acquire the region contour of the facial-feature region of the emoticon picture matched from the preset emoticon library, the overall placement angle of the facial-feature region, and the contour area of the facial-feature region. The region contour of the facial-feature region is the edge contour of the area the facial features occupy in the emoticon picture. The overall placement angle of the facial-feature region refers to the magnitude of the inclination of the facial features and whether they are upright or inverted; it can be determined with one or more of the features as reference, for example using the angle between the horizontal and the straight line connecting the two opposite corners of an eye to determine the inclination, and then whether the nose or mouth lies below the eyes (similarly, whether the mouth lies below the nose, etc.) to determine upright or inverted (nose or mouth below the eyes means upright, otherwise inverted). The contour area of the facial-feature region is the total area of the region contour.
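The placement-angle determination described in step S401 can be sketched as follows, assuming feature coordinates in image space (the y axis points downward); the helper and its inputs are illustrative.
```python
import math

# Sketch of the step-S401 placement-angle estimate: the inclination is the
# angle between the horizontal and the line through the two corners of one
# eye; the face counts as upright when the nose lies below the eyes.
def placement_angle(eye_corner_a, eye_corner_b, eye_center, nose_center):
    dx = eye_corner_b[0] - eye_corner_a[0]
    dy = eye_corner_b[1] - eye_corner_a[1]
    tilt_deg = math.degrees(math.atan2(dy, dx))  # inclination vs. horizontal
    upright = nose_center[1] > eye_center[1]     # image y axis points down
    return tilt_deg, upright

print(placement_angle((10, 20), (30, 18), (20, 19), (20, 35)))  # ~(-5.7, True)
```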
S402: Extract all the facial features lying within the face contour in the face image, and determine the positional relationships between the center points of the facial features and the straight-line distances between those center points. The facial features include, but are not limited to, ears, eyebrows, eyes, nose and mouth. The positional relationships between the center points of the facial features refer to the distances, relative orientations, etc., between those center points.
S403: Create a new canvas whose canvas contour coincides with the region contour of the facial-feature region, and preprocess the facial features according to a preset image-processing method. That is, the canvas contour of the canvas and the region contour of the facial-feature region can overlap completely. The preset image-processing method includes, but is not limited to, adjusting the transparency of and color-toning the facial features, so that the generated personalized emoticon package looks more natural and attractive.
S404: While keeping the positional relationships between the center points of the facial features unchanged, place all the preprocessed facial features into the canvas contour of the canvas at the overall placement angle. That is, when the facial features are placed into the canvas contour, their relative positional relationships must be preserved so that the expression of the face image is preserved (if the relative positions of the facial features changed, the expression formed by the rearranged features might differ from the expression of the original face image). Understandably, the center of all the facial features and the center of the canvas contour can be used as alignment points for placing the features into the canvas contour; and when the whole formed by all the facial features is not consistent with the overall placement angle, it must be adjusted to that angle before the features are placed into the canvas contour.
S405: Adjust the straight-line distances between the center points of the facial features by one common ratio, such that after adjustment by this common ratio the ratio between the area of the figure enclosed by the outermost facial features and the contour area of the facial-feature region falls within a preset ratio range. That is, the straight-line distances between the feature center points are adjusted uniformly by one ratio so as to adjust the overall size of the facial features. When, after adjustment, the ratio between the area enclosed by the outermost features and the contour area of the facial-feature region lies within the preset ratio range (which can be set as required), the features will be reasonably proportioned on the facial-feature region; otherwise they may be disproportionately large or small within the canvas contour. The common ratio is selected subject to the condition that "after adjustment, the ratio between the area of the figure enclosed by the outermost facial features and the contour area of the facial-feature region falls within the preset ratio range" (candidate ratios can be ranked by priority in advance and stored in the database, and the server automatically screens the database for a common ratio meeting this condition); if several common ratios put the value within the preset range, one can be chosen at random or in priority order.
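A sketch of the common-ratio selection in step S405 follows; the shoelace helper assumes the outermost feature centers are given in outline order, and the candidate ratios and preset ratio range are illustrative assumptions.
```python
# Sketch of step S405: scale all feature center points about their centroid
# by one candidate ratio at a time (candidates pre-ranked by priority, per
# the text) until the enclosed area over the contour area falls in range.
def enclosed_area(points):
    # Shoelace formula; points are assumed to be in outline order.
    n = len(points)
    s = sum(points[i][0] * points[(i + 1) % n][1]
            - points[(i + 1) % n][0] * points[i][1] for i in range(n))
    return abs(s) / 2.0

def choose_scale(centers, contour_area, candidates=(1.0, 0.9, 1.1, 0.8),
                 ratio_range=(0.5, 0.8)):
    cx = sum(x for x, _ in centers) / len(centers)
    cy = sum(y for _, y in centers) / len(centers)
    for k in candidates:
        scaled = [(cx + k * (x - cx), cy + k * (y - cy)) for x, y in centers]
        ratio = enclosed_area(scaled) / contour_area
        if ratio_range[0] <= ratio <= ratio_range[1]:
            return k, scaled  # first (highest-priority) ratio that fits
    return None

centers = [(0, 0), (40, 0), (40, 30), (0, 30)]  # outline-ordered centers
print(choose_scale(centers, contour_area=2000))  # k=1.0: 1200/2000 = 0.6
```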
S406: Overlay the canvas containing the facial features onto the region contour of the facial-feature region of the emoticon picture matched from the preset emoticon library. That is, because the canvas contour of the canvas and the region contour of the facial-feature region can completely overlap, in this step the canvas directly replaces the original image content within that region contour.
S407: Perform image-synthesis processing on the emoticon picture covered with the canvas to generate the personalized emoticon package. The image-synthesis processing includes, but is not limited to, merging the emoticon picture covered with the facial features into a single picture and applying uniform exposure and color-toning so that it looks more natural.
In one embodiment, as shown in FIG. 6, generating the personalized emoticon package in step S407 includes:
S4071: Receive a text-adding instruction, and obtain the emoticon text entered by the user and the text box number selected by the user. The text-adding instruction refers to the case where, after the image synthesis in step S407, the user also wants to enter emoticon text into the personalized emoticon package; the user can then trigger a preset button by tapping, sliding, etc., whereupon the text-adding instruction is sent to the server. The emoticon text is the text the user wants to place on the personalized emoticon package. The text box number is the unique identifier of a text box that can be added to the personalized emoticon package, and each text box number corresponds to one text box style.
S4072: Obtain the text box size and the default text format associated with the text box number. That is, each text box number has a text box size into which emoticon text can be filled, and each text box corresponds to one default text format; if the user does not modify the default text format, the emoticon text is filled into the text box in that default format.
S4073: Obtain the number of characters of the emoticon text, and adjust the character size in the default text format according to the number of characters and the text box size. That is, the character size can be adjusted automatically based on a judgment of the number of characters (i.e., the character length) of the emoticon text; understandably, text-format items other than the character size in the default text format can also be adjusted as required.
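The character-size adjustment in step S4073 might look like the following sketch; the square-glyph sizing model and the minimum size are assumptions for illustration.
```python
# Sketch of step S4073: shrink the default character size until the emoticon
# text fits the selected text box (glyphs modeled as size-by-size squares).
def fit_char_size(text, box_w, box_h, default_size, min_size=8):
    size = default_size
    while size > min_size:
        per_line = max(1, box_w // size)   # characters that fit on one line
        lines = -(-len(text) // per_line)  # ceiling division: lines needed
        if lines * size <= box_h:          # total height fits the box
            return size
        size -= 1
    return min_size

print(fit_char_size("laughing out loud", box_w=120, box_h=40, default_size=24))  # -> 13
```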
S4074: Generate a text box corresponding to the text box number at a preset position in the emoticon picture or at a position selected by the user, and fill the emoticon text into the text box according to the adjusted default text format. That is, after the user adjusts the default text format, the emoticon text is filled into the text box in the adjusted format.
S4075: After assembling the emoticon picture and the text box, generate the personalized emoticon package. The assembly merges the text box and the image-synthesized emoticon picture into one personalized emoticon package.
That is, the above embodiment also supports user-defined emoticon text: the character size and similar attributes are adjusted automatically based on a judgment of the number of characters (i.e., the character length) of the emoticon text, and the text box filled with the emoticon characters is automatically assembled with the emoticon picture into a personalized emoticon package. Understandably, prop effects can likewise be added to the personalized emoticon package, for example props with heart, hat or star effects.
In one embodiment, as shown in FIG. 7, an apparatus for automatically generating emoticon packages is provided, corresponding one-to-one with the method for automatically generating emoticon packages in the above embodiments. The apparatus for automatically generating emoticon packages includes:
an acquisition module 11, configured to acquire a face image;
an extraction module 12, configured to extract a facial micro-expression from the face image and obtain an expression tag of the face image according to the facial micro-expression;
a matching module 13, configured to match an emoticon picture from a preset emoticon library according to the expression tag of the face image and determine the location of the facial-feature region of the matched emoticon picture, where every emoticon picture in the preset emoticon library has at least one facial-feature region, and each emoticon picture is associated with at least one expression tag;
an overlay module 14, configured to extract the facial features from the face image and overlay them onto the location of the facial-feature region of the emoticon picture matched from the preset emoticon library, to generate a personalized emoticon package.
In one embodiment, as shown in FIG. 8, the extraction module 12 includes:
an extraction unit 121, configured to extract all action unit types of the facial micro-expression from the face image;
a confirmation unit 122, configured to confirm the micro-expression type of the face image according to all the action unit types extracted from the face image;
an acquisition unit 123, configured to acquire all the expression tags associated with the micro-expression type and, at the same time, the feature action unit(s) associated with each expression tag;
a matching unit 124, configured to match all the action unit types extracted from the face image against the feature action units associated with each expression tag and, when the action unit types extracted from the face image include all the feature action units associated with one expression tag, to record that expression tag as the expression tag of the face image.
For the specific limitations of the apparatus for automatically generating emoticon packages, refer to the limitations of the method above, which are not repeated here. Each module in the above apparatus can be implemented in whole or in part by software, hardware, or a combination thereof. The modules can be embedded in, or independent of, the processor of a computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in FIG. 9. The computer device includes a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer device provides computing and control capability. The memory of the computer device includes a non-volatile storage medium and internal memory. The non-volatile storage medium stores an operating system, computer-readable instructions and a database. The internal memory provides an environment for running the operating system and the computer-readable instructions in the non-volatile storage medium. When executed by the processor, the computer-readable instructions implement a method for automatically generating emoticon packages.
In one embodiment, a computer device is provided, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, where the processor implements at least the following steps when executing the computer-readable instructions:
acquiring a face image;
extracting a facial micro-expression from the face image, and obtaining an expression tag of the face image according to the facial micro-expression;
matching an emoticon picture from a preset emoticon library according to the expression tag of the face image, and determining the location of the facial-feature region of the matched emoticon picture, where every emoticon picture in the preset emoticon library has at least one facial-feature region, and each emoticon picture is associated with at least one expression tag;
extracting facial features from the face image, and overlaying the facial features onto the location of the facial-feature region of the emoticon picture matched from the preset emoticon library, to generate a personalized emoticon package.
In one embodiment, a computer-readable storage medium is provided; the computer-readable storage medium is a volatile or a non-volatile storage medium storing computer-readable instructions which, when executed by a processor, implement at least the following steps:
acquiring a face image;
extracting a facial micro-expression from the face image, and obtaining an expression tag of the face image according to the facial micro-expression;
matching an emoticon picture from a preset emoticon library according to the expression tag of the face image, and determining the location of the facial-feature region of the matched emoticon picture, where every emoticon picture in the preset emoticon library has at least one facial-feature region, and each emoticon picture is associated with at least one expression tag;
extracting facial features from the face image, and overlaying the facial features onto the location of the facial-feature region of the emoticon picture matched from the preset emoticon library, to generate a personalized emoticon package.
A person of ordinary skill in the art can understand that all or part of the processes of the methods in the above embodiments can be implemented by computer-readable instructions instructing the relevant hardware; the computer-readable instructions can be stored in a non-volatile computer-readable storage medium, and when executed may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double-data-rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).

Claims (20)

  1. A method for automatically generating emoticon packages, comprising:
    acquiring a face image;
    extracting a facial micro-expression from the face image, and obtaining an expression tag of the face image according to the facial micro-expression;
    matching an emoticon picture from a preset emoticon library according to the expression tag of the face image, and determining the location of the facial-feature region of the matched emoticon picture, wherein every emoticon picture in the preset emoticon library has at least one facial-feature region, and each emoticon picture is associated with at least one expression tag;
    extracting facial features from the face image, and overlaying the facial features onto the location of the facial-feature region of the emoticon picture matched from the preset emoticon library, to generate a personalized emoticon package.
  2. The method for automatically generating emoticon packages according to claim 1, wherein extracting a facial micro-expression from the face image and obtaining an expression tag of the face image according to the facial micro-expression comprises:
    extracting all action unit types of the facial micro-expression from the face image;
    confirming the micro-expression type of the face image according to all the action unit types extracted from the face image;
    acquiring all the expression tags associated with the micro-expression type, and at the same time acquiring the feature action unit(s) associated with each expression tag;
    matching all the action unit types extracted from the face image against the feature action units associated with each expression tag, and, when the action unit types extracted from the face image include all the feature action units associated with one expression tag, recording that expression tag as the expression tag of the face image.
  3. The method for automatically generating emoticon packages according to claim 1, wherein matching an emoticon picture from a preset emoticon library according to the expression tag of the face image and determining the location of the facial-feature region of the matched emoticon picture comprises:
    acquiring the face contour extracted from the face image;
    selecting, from the preset emoticon library, all emoticon pictures whose expression tag is the same as that of the face image;
    determining the location of the facial-feature region of each selected emoticon picture and the contour of that region;
    obtaining the similarity between the face contour and the region contour;
    recording the emoticon picture corresponding to the region contour with the highest similarity as the emoticon picture uniquely matching the face image, and at the same time acquiring the location of the facial-feature region of the uniquely matching emoticon picture.
  4. The method for automatically generating emoticon packages according to claim 1, wherein matching an emoticon picture from a preset emoticon library according to the expression tag of the face image and determining the location of the facial-feature region of the matched emoticon picture comprises:
    selecting, from the preset emoticon library, all emoticon pictures whose expression tag is the same as that of the face image;
    determining, according to a preset screening rule, the emoticon picture uniquely matching the face image from all the selected emoticon pictures;
    recording the emoticon picture determined according to the screening rule as the emoticon picture uniquely matching the face image, and at the same time extracting the location of its facial-feature region from the uniquely matching emoticon picture.
  5. The method for automatically generating emoticon packages according to claim 1, wherein extracting facial features from the face image and overlaying the facial features onto the location of the facial-feature region of the emoticon picture matched from the preset emoticon library comprises:
    acquiring the region contour of the facial-feature region of the emoticon picture matched from the preset emoticon library, the overall placement angle of the facial-feature region, and the contour area of the facial-feature region;
    extracting all the facial features lying within the face contour in the face image, and determining the positional relationships between the center points of the facial features and the straight-line distances between those center points;
    creating a new canvas whose canvas contour coincides with the region contour of the facial-feature region, and preprocessing the facial features according to a preset image-processing method;
    while keeping the positional relationships between the center points of the facial features unchanged, placing all the preprocessed facial features into the canvas contour of the canvas at the overall placement angle;
    adjusting the straight-line distances between the center points of the facial features by one common ratio, such that after adjustment by the common ratio the ratio between the area of the figure enclosed by the outermost facial features and the contour area of the facial-feature region falls within a preset ratio range;
    overlaying the canvas containing the facial features onto the region contour of the facial-feature region of the emoticon picture matched from the preset emoticon library;
    performing image-synthesis processing on the emoticon picture covered with the canvas to generate the personalized emoticon package.
  6. The method for automatically generating emoticon packages according to claim 1, wherein generating the personalized emoticon package comprises:
    receiving a text-adding instruction, and obtaining the emoticon text entered by the user and the text box number selected by the user;
    obtaining the text box size and the default text format associated with the text box number;
    obtaining the number of characters of the emoticon text, and adjusting the character size in the default text format according to the number of characters and the text box size;
    generating a text box corresponding to the text box number at a preset position in the emoticon picture or at a position selected by the user, and filling the emoticon text into the text box according to the adjusted default text format;
    after assembling the emoticon picture and the text box, generating the personalized emoticon package.
  7. An apparatus for automatically generating emoticon packages, comprising:
    an acquisition module, configured to acquire a face image;
    an extraction module, configured to extract a facial micro-expression from the face image and obtain an expression tag of the face image according to the facial micro-expression;
    a matching module, configured to match an emoticon picture from a preset emoticon library according to the expression tag of the face image and determine the location of the facial-feature region of the matched emoticon picture, wherein every emoticon picture in the preset emoticon library has at least one facial-feature region, and each emoticon picture is associated with at least one expression tag;
    an overlay module, configured to extract facial features from the face image and overlay the facial features onto the location of the facial-feature region of the emoticon picture matched from the preset emoticon library, to generate a personalized emoticon package.
  8. The apparatus for automatically generating emoticon packages according to claim 7, wherein the extraction module comprises:
    an extraction unit, configured to extract all action unit types of the facial micro-expression from the face image;
    a confirmation unit, configured to confirm the micro-expression type of the face image according to all the action unit types extracted from the face image;
    an acquisition unit, configured to acquire all the expression tags associated with the micro-expression type and, at the same time, the feature action unit(s) associated with each expression tag;
    a matching unit, configured to match all the action unit types extracted from the face image against the feature action units associated with each expression tag and, when the action unit types extracted from the face image include all the feature action units associated with one expression tag, to record that expression tag as the expression tag of the face image.
  9. A computer device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor implements a method for automatically generating emoticon packages when executing the computer-readable instructions, the method comprising:
    acquiring a face image;
    extracting a facial micro-expression from the face image, and obtaining an expression tag of the face image according to the facial micro-expression;
    matching an emoticon picture from a preset emoticon library according to the expression tag of the face image, and determining the location of the facial-feature region of the matched emoticon picture, wherein every emoticon picture in the preset emoticon library has at least one facial-feature region, and each emoticon picture is associated with at least one expression tag;
    extracting facial features from the face image, and overlaying the facial features onto the location of the facial-feature region of the emoticon picture matched from the preset emoticon library, to generate a personalized emoticon package.
  10. The computer device according to claim 9, wherein extracting a facial micro-expression from the face image and obtaining an expression tag of the face image according to the facial micro-expression comprises:
    extracting all action unit types of the facial micro-expression from the face image;
    confirming the micro-expression type of the face image according to all the action unit types extracted from the face image;
    acquiring all the expression tags associated with the micro-expression type, and at the same time acquiring the feature action unit(s) associated with each expression tag;
    matching all the action unit types extracted from the face image against the feature action units associated with each expression tag, and, when the action unit types extracted from the face image include all the feature action units associated with one expression tag, recording that expression tag as the expression tag of the face image.
  11. The computer device according to claim 9, wherein matching an emoticon picture from a preset emoticon library according to the expression tag of the face image and determining the location of the facial-feature region of the matched emoticon picture comprises:
    acquiring the face contour extracted from the face image;
    selecting, from the preset emoticon library, all emoticon pictures whose expression tag is the same as that of the face image;
    determining the location of the facial-feature region of each selected emoticon picture and the contour of that region;
    obtaining the similarity between the face contour and the region contour;
    recording the emoticon picture corresponding to the region contour with the highest similarity as the emoticon picture uniquely matching the face image, and at the same time acquiring the location of the facial-feature region of the uniquely matching emoticon picture.
  12. The computer device according to claim 9, wherein matching an emoticon picture from a preset emoticon library according to the expression tag of the face image and determining the location of the facial-feature region of the matched emoticon picture comprises:
    selecting, from the preset emoticon library, all emoticon pictures whose expression tag is the same as that of the face image;
    determining, according to a preset screening rule, the emoticon picture uniquely matching the face image from all the selected emoticon pictures;
    recording the emoticon picture determined according to the screening rule as the emoticon picture uniquely matching the face image, and at the same time extracting the location of its facial-feature region from the uniquely matching emoticon picture.
  13. The computer device according to claim 9, wherein extracting facial features from the face image and overlaying the facial features onto the location of the facial-feature region of the emoticon picture matched from the preset emoticon library comprises:
    acquiring the region contour of the facial-feature region of the emoticon picture matched from the preset emoticon library, the overall placement angle of the facial-feature region, and the contour area of the facial-feature region;
    extracting all the facial features lying within the face contour in the face image, and determining the positional relationships between the center points of the facial features and the straight-line distances between those center points;
    creating a new canvas whose canvas contour coincides with the region contour of the facial-feature region, and preprocessing the facial features according to a preset image-processing method;
    while keeping the positional relationships between the center points of the facial features unchanged, placing all the preprocessed facial features into the canvas contour of the canvas at the overall placement angle;
    adjusting the straight-line distances between the center points of the facial features by one common ratio, such that after adjustment by the common ratio the ratio between the area of the figure enclosed by the outermost facial features and the contour area of the facial-feature region falls within a preset ratio range;
    overlaying the canvas containing the facial features onto the region contour of the facial-feature region of the emoticon picture matched from the preset emoticon library;
    performing image-synthesis processing on the emoticon picture covered with the canvas to generate the personalized emoticon package.
  14. The computer device according to claim 9, wherein generating the personalized emoticon package comprises:
    receiving a text-adding instruction, and obtaining the emoticon text entered by the user and the text box number selected by the user;
    obtaining the text box size and the default text format associated with the text box number;
    obtaining the number of characters of the emoticon text, and adjusting the character size in the default text format according to the number of characters and the text box size;
    generating a text box corresponding to the text box number at a preset position in the emoticon picture or at a position selected by the user, and filling the emoticon text into the text box according to the adjusted default text format;
    after assembling the emoticon picture and the text box, generating the personalized emoticon package.
  15. A computer-readable storage medium storing computer-readable instructions, wherein the computer-readable instructions, when executed by a processor, implement a method for automatically generating emoticon packages, the method comprising:
    acquiring a face image;
    extracting a facial micro-expression from the face image, and obtaining an expression tag of the face image according to the facial micro-expression;
    matching an emoticon picture from a preset emoticon library according to the expression tag of the face image, and determining the location of the facial-feature region of the matched emoticon picture, wherein every emoticon picture in the preset emoticon library has at least one facial-feature region, and each emoticon picture is associated with at least one expression tag;
    extracting facial features from the face image, and overlaying the facial features onto the location of the facial-feature region of the emoticon picture matched from the preset emoticon library, to generate a personalized emoticon package.
  16. The computer-readable storage medium according to claim 15, wherein extracting a facial micro-expression from the face image and obtaining an expression tag of the face image according to the facial micro-expression comprises:
    extracting all action unit types of the facial micro-expression from the face image;
    confirming the micro-expression type of the face image according to all the action unit types extracted from the face image;
    acquiring all the expression tags associated with the micro-expression type, and at the same time acquiring the feature action unit(s) associated with each expression tag;
    matching all the action unit types extracted from the face image against the feature action units associated with each expression tag, and, when the action unit types extracted from the face image include all the feature action units associated with one expression tag, recording that expression tag as the expression tag of the face image.
  17. The computer-readable storage medium according to claim 15, wherein matching an emoticon picture from a preset emoticon library according to the expression tag of the face image and determining the location of the facial-feature region of the matched emoticon picture comprises:
    acquiring the face contour extracted from the face image;
    selecting, from the preset emoticon library, all emoticon pictures whose expression tag is the same as that of the face image;
    determining the location of the facial-feature region of each selected emoticon picture and the contour of that region;
    obtaining the similarity between the face contour and the region contour;
    recording the emoticon picture corresponding to the region contour with the highest similarity as the emoticon picture uniquely matching the face image, and at the same time acquiring the location of the facial-feature region of the uniquely matching emoticon picture.
  18. The computer-readable storage medium according to claim 15, wherein matching an emoticon picture from a preset emoticon library according to the expression tag of the face image and determining the location of the facial-feature region of the matched emoticon picture comprises:
    selecting, from the preset emoticon library, all emoticon pictures whose expression tag is the same as that of the face image;
    determining, according to a preset screening rule, the emoticon picture uniquely matching the face image from all the selected emoticon pictures;
    recording the emoticon picture determined according to the screening rule as the emoticon picture uniquely matching the face image, and at the same time extracting the location of its facial-feature region from the uniquely matching emoticon picture.
  19. The computer-readable storage medium according to claim 15, wherein extracting facial features from the face image and overlaying the facial features onto the location of the facial-feature region of the emoticon picture matched from the preset emoticon library comprises:
    acquiring the region contour of the facial-feature region of the emoticon picture matched from the preset emoticon library, the overall placement angle of the facial-feature region, and the contour area of the facial-feature region;
    extracting all the facial features lying within the face contour in the face image, and determining the positional relationships between the center points of the facial features and the straight-line distances between those center points;
    creating a new canvas whose canvas contour coincides with the region contour of the facial-feature region, and preprocessing the facial features according to a preset image-processing method;
    while keeping the positional relationships between the center points of the facial features unchanged, placing all the preprocessed facial features into the canvas contour of the canvas at the overall placement angle;
    adjusting the straight-line distances between the center points of the facial features by one common ratio, such that after adjustment by the common ratio the ratio between the area of the figure enclosed by the outermost facial features and the contour area of the facial-feature region falls within a preset ratio range;
    overlaying the canvas containing the facial features onto the region contour of the facial-feature region of the emoticon picture matched from the preset emoticon library;
    performing image-synthesis processing on the emoticon picture covered with the canvas to generate the personalized emoticon package.
  20. The computer-readable storage medium according to claim 15, wherein generating the personalized emoticon package comprises:
    receiving a text-adding instruction, and obtaining the emoticon text entered by the user and the text box number selected by the user;
    obtaining the text box size and the default text format associated with the text box number;
    obtaining the number of characters of the emoticon text, and adjusting the character size in the default text format according to the number of characters and the text box size;
    generating a text box corresponding to the text box number at a preset position in the emoticon picture or at a position selected by the user, and filling the emoticon text into the text box according to the adjusted default text format;
    after assembling the emoticon picture and the text box, generating the personalized emoticon package.
PCT/CN2020/085573 2019-07-05 2020-04-20 Method, apparatus, computer device and storage medium for automatically generating emoticon packages WO2021004114A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910602401.7 2019-07-05
CN201910602401.7A CN110458916A (zh) 2019-07-05 2019-07-05 Method, apparatus, computer device and storage medium for automatically generating emoticon packages

Publications (1)

Publication Number Publication Date
WO2021004114A1 (zh)

Family

ID=68482133

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/085573 WO2021004114A1 (zh) 2019-07-05 2020-04-20 Method, apparatus, computer device and storage medium for automatically generating emoticon packages

Country Status (2)

Country Link
CN (1) CN110458916A (zh)
WO (1) WO2021004114A1 (zh)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458916A (zh) * 2019-07-05 2019-11-15 深圳壹账通智能科技有限公司 表情包自动生成方法、装置、计算机设备及存储介质
CN110889379B (zh) * 2019-11-29 2024-02-20 深圳先进技术研究院 表情包生成方法、装置及终端设备
CN111145283A (zh) * 2019-12-13 2020-05-12 北京智慧章鱼科技有限公司 一种用于输入法的表情个性化生成方法及装置
CN111046814A (zh) * 2019-12-18 2020-04-21 维沃移动通信有限公司 图像处理方法及电子设备
CN111368127B (zh) * 2020-03-06 2023-03-24 腾讯科技(深圳)有限公司 图像处理方法、装置、计算机设备及存储介质
CN112102157B (zh) * 2020-09-09 2024-07-09 咪咕文化科技有限公司 视频换脸方法、电子设备和计算机可读存储介质
CN112270733A (zh) * 2020-09-29 2021-01-26 北京五八信息技术有限公司 Ar表情包的生成方法、装置、电子设备及存储介质
CN112214632B (zh) * 2020-11-03 2023-11-17 虎博网络技术(上海)有限公司 文案检索方法、装置及电子设备
CN114816599B (zh) * 2021-01-22 2024-02-27 北京字跳网络技术有限公司 图像显示方法、装置、设备及介质
CN113727024B (zh) * 2021-08-30 2023-07-25 北京达佳互联信息技术有限公司 多媒体信息生成方法、装置、电子设备和存储介质
CN117974853B (zh) * 2024-03-29 2024-06-11 成都工业学院 同源微表情图像自适应切换生成方法、***、终端及介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104063683A (zh) * 2014-06-06 2014-09-24 北京搜狗科技发展有限公司 Expression input method and apparatus based on face recognition
CN107219917A (zh) * 2017-04-28 2017-09-29 北京百度网讯科技有限公司 Emoji generation method and apparatus, computer device and readable medium
US20180024726A1 (en) * 2016-07-21 2018-01-25 Cives Consulting AS Personified Emoji
CN108197206A (zh) * 2017-12-28 2018-06-22 努比亚技术有限公司 Emoticon package generation method, mobile terminal and computer-readable storage medium
CN110458916A (zh) * 2019-07-05 2019-11-15 深圳壹账通智能科技有限公司 Method, apparatus, computer device and storage medium for automatically generating emoticon packages

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108573527B (zh) * 2018-04-18 2020-02-18 腾讯科技(深圳)有限公司 Expression picture generation method, device and storage medium

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112905791A (zh) * 2021-02-20 2021-06-04 北京小米松果电子有限公司 Emoticon package generation method and device, and storage medium
US11922725B2 (en) 2021-02-20 2024-03-05 Beijing Xiaomi Pinecone Electronics Co., Ltd. Method and device for generating emoticon, and storage medium
CN113177994A (zh) * 2021-03-25 2021-07-27 云南大学 Social-network emoticon package synthesis method based on image-text semantics, electronic device and computer-readable storage medium
CN113177994B (zh) * 2021-03-25 2022-09-06 云南大学 Social-network emoticon package synthesis method based on image-text semantics, electronic device and computer-readable storage medium
CN113485596A (zh) * 2021-07-07 2021-10-08 游艺星际(北京)科技有限公司 Virtual model processing method and apparatus, electronic device and storage medium
CN113485596B (zh) * 2021-07-07 2023-12-22 游艺星际(北京)科技有限公司 Virtual model processing method and apparatus, electronic device and storage medium
CN117150063A (zh) * 2023-10-26 2023-12-01 深圳慢云智能科技有限公司 Image generation method and system based on scene recognition
CN117150063B (zh) * 2023-10-26 2024-02-06 深圳慢云智能科技有限公司 Image generation method and system based on scene recognition

Also Published As

Publication number Publication date
CN110458916A (zh) 2019-11-15

Similar Documents

Publication Publication Date Title
WO2021004114A1 (zh) Method, apparatus, computer device and storage medium for automatically generating emoticon packages
US11455729B2 (en) Image processing method and apparatus, and storage medium
US11861936B2 (en) Face reenactment
WO2016177290A1 (zh) Method and system for generating and using an expression for an avatar created by free combination
US10911695B2 (en) Information processing apparatus, information processing method, and computer program product
US20210374839A1 (en) Generating augmented reality content based on third-party content
WO2020211347A1 (zh) Method, apparatus and computer device for modifying a picture based on face recognition
US11776187B2 (en) Digital makeup artist
WO2023138345A1 (zh) Virtual image generation method and system
US11961169B2 (en) Digital makeup artist
CN111091448A (zh) Clothing pre-customization method and apparatus, computer device and storage medium
WO2022257766A1 (zh) Image processing method, apparatus, device and medium
WO2016082470A1 (zh) Picture processing method and apparatus, and computer storage medium
US11430158B2 (en) Intelligent real-time multiple-user augmented reality content management and data analytics system
KR101757184B1 (ko) System and method for automatically generating and classifying emotion-expression content
CN115222899B (zh) Virtual digital human generation method, system, computer device and storage medium
CN114841851A (zh) Image generation method and apparatus, electronic device and storage medium
CN114443182A (zh) Interface switching method, storage medium and terminal device
CN110147511B (zh) Page processing method and apparatus, electronic device and medium
CN117391805A (zh) Try-on image generation method and system, electronic device and storage medium
CN115554701A (zh) Virtual character control method and apparatus, computer device and storage medium
KR20230118191A (ko) Digital makeup artist
CN117351121A (zh) Digital human editing control method and apparatus, electronic device and storage medium
CN117351105A (zh) Image editing method, terminal and electronic device
CN117036150A (zh) Image acquisition method and apparatus, electronic device and readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20836230

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.05.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20836230

Country of ref document: EP

Kind code of ref document: A1