CN113760161A - Data generation method, data generation device, image processing method, image processing device, equipment and storage medium - Google Patents

Info

Publication number
CN113760161A
CN113760161A (application number CN202111016566.XA)
Authority
CN
China
Prior art keywords
special effect
image
effect material
region
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111016566.XA
Other languages
Chinese (zh)
Inventor
李园园
许亲亲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202111016566.XA priority Critical patent/CN113760161A/en
Publication of CN113760161A publication Critical patent/CN113760161A/en
Priority to PCT/CN2022/127583 priority patent/WO2023030550A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2213/00 Indexing scheme for animation
    • G06T2213/08 Animation software package

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a data generation method, an image processing method, a data generation device, an image processing device, a computer device, a storage medium and a program product. The data generation method comprises the following steps: acquiring a first part image, and displaying, on a first image and based on the first part image, at least one first part special effect material to be edited; acquiring a first special effect display parameter in response to a first special effect editing operation performed on the at least one first part special effect material; and generating a special effect data packet based on the first special effect display parameter. When run, the special effect data packet is used for presenting, on a second image and based on the first special effect display parameter and a second part image, at least one second part special effect material corresponding to the first part special effect material, wherein the second part image and the first part image contain the same type of part.

Description

Data generation method, data generation device, image processing method, image processing device, equipment and storage medium
Technical Field
The present application relates to, but is not limited to, the field of image processing technologies, and in particular to a data generation method, an image processing method, an apparatus, a device, a storage medium, and a program product.
Background
With the development of image processing technology, its applications in daily life are becoming increasingly widespread. In particular, performing special effect processing on images or videos to present different special effects is attracting growing attention. For example, in application scenarios such as live streaming or short video, an image or a video may be subjected to special effect processing to present a corresponding special effect, thereby enhancing the presentation of the image or video and improving the user's visual experience. However, in the related art, the special effects supported by platforms such as live streaming or short video applications are generally simple and lack diversity, and thus cannot satisfactorily meet users' experience requirements.
Disclosure of Invention
In view of this, embodiments of the present application provide a data generation method, an image processing method, an apparatus, a device, a storage medium, and a program product.
The technical scheme of the embodiment of the application is realized as follows:
in one aspect, an embodiment of the present application provides a data generation method, where the method includes:
acquiring a first part image, and displaying, on a first image and based on the first part image, at least one first part special effect material to be edited;
acquiring a first special effect display parameter in response to a first special effect editing operation performed on the at least one first part special effect material;
generating a special effect data packet based on the first special effect display parameter; wherein the special effect data packet is used for, when run, presenting, on a second image and based on the first special effect display parameter and a second part image, at least one second part special effect material corresponding to the first part special effect material; and wherein the second part image and the first part image contain the same type of part.
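As an illustration only (the patent discloses no code), the special effect data packet described above can be sketched as a serialized bundle of the edited display parameters together with the part type to be detected at runtime. All names below, `EffectDisplayParams`, `generate_effect_packet`, `load_effect_packet`, are hypothetical:

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical shape of a "special effect data packet": the display
# parameters captured during editing, plus the part type a runtime must
# detect before re-applying the effect to a different image.
@dataclass
class EffectDisplayParams:
    material_id: str        # which part special effect material to render
    part_type: str          # e.g. "head" -- part to recognize in the second image
    x: float = 0.0          # normalized display position
    y: float = 0.0
    scale: float = 1.0      # display size
    rotation_deg: float = 0.0

def generate_effect_packet(params: list) -> bytes:
    """Serialize the edited display parameters into a portable packet."""
    payload = {"version": 1, "materials": [asdict(p) for p in params]}
    return json.dumps(payload).encode("utf-8")

def load_effect_packet(packet: bytes) -> list:
    """Restore the display parameters when the packet is run."""
    payload = json.loads(packet.decode("utf-8"))
    return [EffectDisplayParams(**m) for m in payload["materials"]]
```

A real packet would also carry the material assets themselves; only the parameters are modeled here.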
In some embodiments, the first image is a reference template image, and the displaying, on the first image and based on the first part image, at least one first part special effect material to be edited comprises: determining, based on the first part image, at least one first part special effect material to be edited; and displaying the reference template image and the at least one first part special effect material in an editing operation area.
Therefore, the user can carry out special effect editing operation based on the reference template image, and the special effect editing requirements of the user can be better met.
In some embodiments, the method further comprises: presenting, on a preview image and based on the first special effect display parameter, at least one third part special effect material corresponding to the first part special effect material.
In this way, the edited special effect can be presented in the preview image while the user performs the special effect editing operation, which effectively improves the user's operating experience during special effect editing; the user can also adjust and optimize the first special effect display parameter in real time based on the effect of the third part special effect material presented in the preview image, so that the user's special effect editing requirements can be better met.
In some embodiments, the presenting, on a preview image and based on the first special effect display parameter, at least one third part special effect material corresponding to the first part special effect material comprises: acquiring a third part image from a fourth image in response to a special effect preview operation performed in a special effect preview area, wherein the fourth image is the same image as, or an image related to, the preview image, and the third part image and the first part image contain the same type of part; and displaying, in the preview image and in real time, at least one third part special effect material corresponding to the first part special effect material, based on the third part image and the first special effect display parameter.
Therefore, the third part image can be obtained from the fourth image which is the same as or related to the preview image in the process of special effect preview, and the third part special effect material can be presented on the preview image based on the third part image and the first special effect display parameter, so that the effect preview of special effect editing operation can be quickly and accurately realized, and the use experience of a user can be further improved.
In some embodiments, the acquiring a first part image comprises: acquiring the first part image from a third image in response to a part special effect adding operation performed on the first image in an editing operation area, wherein the third image is the same image as, or an image related to, the first image.
Therefore, the position special effect adding operation can be carried out in the visual special effect editing interface, the operation of a user is facilitated, and the operation and use experience of the user in the special effect editing process can be improved. In addition, the first part image is acquired from the third image, and the second part special effect material corresponding to the part in the third image can be presented on the second image, so that the diversity of the special effect of the user editing can be further improved, and the special effect editing requirement of the user can be better met.
In some embodiments, the third image comprises at least one image frame, and the first part special effect material comprises at least one special effect animation frame. The acquiring the first part image from the third image comprises: acquiring the first part image from the current image frame of the third image. The displaying, on the first image and based on the first part image, at least one first part special effect material to be edited comprises: displaying, on the first image and based on the first part image, one special effect animation frame of the at least one first part special effect material to be edited.
Therefore, the corresponding special effect animation frame can be displayed on the first image in real time based on the first position image acquired from the current image frame of the third image, so that the diversity and the interestingness of the special effect edited by the user can be further improved, and the special effect editing requirement of the user can be better met.
In some embodiments, each first part special effect material comprises at least one key frame, the first special effect editing operation comprises a special effect material quantity setting operation and/or a frame parameter setting operation, and the first special effect display parameter comprises the quantity of the first part special effect materials and/or the key frame display parameters of each first part special effect material. The acquiring a first special effect display parameter in response to a first special effect editing operation performed on the at least one first part special effect material comprises: acquiring the quantity of the first part special effect materials in response to the quantity setting operation performed on the first part special effect materials; and/or, for each first part special effect material, acquiring the key frame display parameters of the first part special effect material in response to a frame parameter setting operation performed on at least one key frame of the first part special effect material. The key frame display parameters of the first part special effect material comprise display parameters of at least one key frame of the first part special effect material, and the display parameters of each key frame comprise at least one of: a visible state, a display position, a display size, and a rotation angle.
Therefore, the user can flexibly set the number of the first part special effect materials and the display effect of each key frame of the first part special effect materials based on different display parameters according to the editing requirements, so that the diversity and the interestingness of the special effect edited by the user can be further improved, and the special effect editing requirements of the user can be better met.
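As a hedged sketch of how the key frame display parameters listed above (visible state, display position, display size, rotation angle) could drive playback, the following hypothetical helper linearly interpolates the numeric parameters between two key frames; it is illustrative only, not the patent's implementation:

```python
def interpolate_keyframes(key_a: dict, key_b: dict, t: float) -> dict:
    """Blend two key frames at t in [0, 1]: numeric parameters are
    linearly interpolated, while the visible state snaps (it is boolean)."""
    lerp = lambda a, b: a + (b - a) * t
    return {
        "visible": key_a["visible"] if t < 1.0 else key_b["visible"],
        "x": lerp(key_a["x"], key_b["x"]),            # display position
        "y": lerp(key_a["y"], key_b["y"]),
        "size": lerp(key_a["size"], key_b["size"]),   # display size
        "rotation": lerp(key_a["rotation"], key_b["rotation"]),
    }
```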
In some embodiments, where the first special effect editing operation comprises a looping display setting operation, the first special effect display parameter comprises the looping display parameters of each first part special effect material. The method further comprises: for each first part special effect material in the at least one first part special effect material, circularly displaying, on a preview image and based on the looping display parameters of the first part special effect material, a third part special effect material corresponding to the first part special effect material.
Therefore, a user can set the circular display parameters for any one first part special effect material in at least one first part special effect material according to the editing requirements, and the first part special effect material is circularly displayed on the first image based on the circular display parameters of the first part special effect material, so that the diversity and the interestingness of the special effect edited by the user can be further improved, and the special effect editing requirements of the user can be better met.
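The looping display parameter can be illustrated with a minimal, hypothetical frame-mapping function: when looping is enabled the material's animation wraps around, otherwise it holds its last frame:

```python
def loop_frame_index(elapsed_frames: int, frame_count: int, loop: bool) -> int:
    """Map elapsed playback frames onto a material's animation frames.
    With looping enabled the animation wraps; otherwise it holds the
    last frame once the animation has played through."""
    if loop:
        return elapsed_frames % frame_count
    return min(elapsed_frames, frame_count - 1)
```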
In some embodiments, where the first special effect editing operation comprises a trigger event editing operation, the first special effect display parameter comprises a trigger event of each first part special effect material. The method further comprises: for each first part special effect material in the at least one first part special effect material, presenting, on a preview image and based on the trigger event of the first part special effect material, a third part special effect material corresponding to the first part special effect material.
Therefore, the trigger event can be set for the first part special effect material according to the actual editing requirement, so that the diversity and the interestingness of the special effect edited by the user can be further improved, and the special effect editing requirement of the user can be better met.
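A minimal sketch of trigger-event-driven display, with hypothetical names: each material may declare the event that activates it, and the runtime presents only the materials whose trigger has fired (materials without a trigger are always shown):

```python
def materials_to_show(materials: list, fired_events: set) -> list:
    """Return the ids of materials whose trigger event has fired.
    A material with no "trigger" key is displayed unconditionally."""
    return [m["id"] for m in materials
            if m.get("trigger") is None or m["trigger"] in fired_events]
```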
In some embodiments, the acquiring a first special effect display parameter in response to the first special effect editing operation performed on the at least one first part special effect material comprises: acquiring at least one first special effect material combination in response to a combining operation performed on at least two first part special effect materials; and acquiring a third special effect display parameter of each first special effect material combination in response to a combined special effect editing operation performed on the first special effect material combination. The method further comprises: for each first special effect material combination in the at least one first special effect material combination, presenting, on a preview image and based on the third special effect display parameter of the first special effect material combination, at least one third special effect material combination corresponding to the first special effect material combination, where each third special effect material combination comprises at least two third part special effect materials respectively corresponding to the first part special effect materials in the first special effect material combination.
Therefore, the user can carry out combined special effect editing operation on at least one first special effect material combination by taking the at least two first part special effect materials as one first special effect material combination, so that the diversity and the interestingness of the special effect edited by the user can be further improved, the special effect editing requirement of the user can be further met, and the operation and use experience of the user during the special effect editing can be further improved.
In some embodiments, the acquiring at least one first special effect material combination in response to the combining operation performed on at least two first part special effect materials comprises: acquiring, in response to a special effect material selection operation performed in the editing operation area, at least two selected first part special effect materials to be combined; and acquiring, in response to a combining operation performed on the selected at least two first part special effect materials to be combined, at least one combined first special effect material combination.
Therefore, at least one first part special effect material to be combined can be selected in the visual editing operation area, and at least one first special effect material combination after combination is obtained by combining at least two selected first part special effect materials to be combined, so that the special effect editing requirements of users can be better met, and the operation and use experience of the users during special effect editing can be further improved.
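How a first special effect material combination might behave can be sketched as follows (hypothetical names and data layout): combining materials lets one combination-level parameter, here a shared opacity, apply to every member while per-material parameters are preserved:

```python
def combine_materials(selected: list, combo_params: dict) -> dict:
    """Group selected material ids into one combination carrying
    combination-level display parameters."""
    return {"members": list(selected), **combo_params}

def apply_combo(combo: dict, per_material: dict) -> dict:
    """Merge the combination-level parameters into every member's own
    display parameters (combination values win on conflict)."""
    shared = {k: v for k, v in combo.items() if k != "members"}
    return {m: {**per_material.get(m, {}), **shared}
            for m in combo["members"]}
```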
In some embodiments, before the displaying, on the first image and based on the first part image, at least one first part special effect material to be edited, the method further comprises: acquiring a second special effect display parameter in response to a second special effect editing operation performed on the first image; and presenting a first special effect on the first image based on the second special effect display parameter. The acquiring a first part image comprises: acquiring the first part image from the first image presenting the first special effect, in response to a part special effect adding operation performed on the first image presenting the first special effect.
Therefore, the part special effect material can be added to the first image on the basis of the first special effect, the first part image is obtained from the first image showing the first special effect, and therefore the first part special effect material added to the first image can be combined with the first special effect, the diversity and the interestingness of the special effect edited by a user are further improved, and the special effect editing requirement of the user is better met.
In some embodiments, the part contained in the first part image is a head, the first part special effect material comprises head special effect material, the part special effect adding operation comprises a head special effect adding operation, and the first special effect comprises a face beautification special effect. The displaying, on the first image and based on the first part image, at least one first part special effect material to be edited comprises: displaying, on the first image presenting the face beautification special effect and based on the first part image, at least one head special effect material to be edited, where the face in each head special effect material to be edited presents the face beautification special effect.
Therefore, a user can add at least one head special effect material on the first image, and the face in each head special effect material can present the face beautifying special effect edited in the first image, so that the head special effect material added in the first image can be combined with the face beautifying special effect edited in the first image, the diversity and the interestingness of the special effect edited by the user are further improved, and the special effect editing requirements of the user are further met.
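The layering described above, head special effect material rendered on top of an already-beautified face, amounts to applying effect passes in order. A minimal, hypothetical sketch (pass ordering is the point; the integer "image" stands in for real pixel data):

```python
def render(image, passes: list):
    """Apply effect passes in order; each pass is a function that maps
    an image to a processed image. Rendering the head material after the
    beautification pass means the head material sees the beautified face."""
    for p in passes:
        image = p(image)
    return image
```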
In some embodiments, before the generating a special effect data packet based on the first special effect display parameter, the method further comprises: updating the first special effect display parameter in response to a background setting operation performed on the at least one first part special effect material; and displaying the set background on the preview image based on the first special effect display parameter, and presenting, on the preview image provided with the background, the third part special effect materials respectively corresponding to each first part special effect material.
Therefore, the background setting operation can be carried out on at least one first part special effect material so as to display the set background on the preview image, and at least one third part special effect material is presented on the preview image provided with the background, so that the diversity and the interestingness of the special effect edited by the user can be further improved, and the special effect editing requirement of the user can be better met.
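A background setting operation can be sketched as a non-destructive update of the first special effect display parameters (hypothetical names; the patent does not specify the data layout):

```python
def set_background(display_params: dict, background_id: str) -> dict:
    """Return a copy of the display parameters with the chosen
    background recorded, leaving the original parameters untouched."""
    updated = dict(display_params)
    updated["background"] = background_id
    return updated
```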
In another aspect, an embodiment of the present application provides an image processing method, the method comprising: acquiring a second image to be processed; determining, in response to a special effect selection operation performed on the second image and based on a running special effect data packet, a first special effect display parameter and a part to be recognized; acquiring a second part image from a fifth image based on the part to be recognized, wherein the fifth image is the same image as, or an image related to, the second image; and presenting at least one second part special effect material on the second image based on the first special effect display parameter and the second part image.
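Putting the runtime side together, a hedged sketch of this image processing method: detect the part named in the packet, then position each material relative to the detected part's bounding box. `detect_part` and all field names are assumptions, not the patent's API:

```python
def process_image(image, packet: dict, detect_part) -> list:
    """Return render commands for one image. `detect_part(image, part_type)`
    is assumed to yield a bounding box (x, y, w, h), or None when the
    packet's target part is absent from the image."""
    commands = []
    box = detect_part(image, packet["part_type"])
    if box is None:
        return commands          # no matching part: render nothing
    x, y, w, h = box
    for m in packet["materials"]:
        commands.append({
            "material": m["id"],
            "x": x + m["dx"] * w,   # offsets stored relative to the part box
            "y": y + m["dy"] * h,
            "scale": m["scale"],
        })
    return commands
```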
In another aspect, an embodiment of the present application provides a data generating apparatus, including:
the first display module is used for acquiring a first part image and displaying at least one first part special effect material to be edited on the first image based on the first part image;
the first editing module is used for acquiring a first special effect display parameter in response to a first special effect editing operation performed on at least one first part special effect material;
the generating module is used for generating a special effect data packet based on the first special effect display parameter; the special effect data packet is used for presenting at least one second part special effect material corresponding to the first part special effect material on a second image based on the first special effect display parameter and the second part image under the running condition; wherein the second part image and the first part image contain the same type of part.
In another aspect, an embodiment of the present application provides an image processing apparatus, including:
the fourth acquisition module is used for acquiring a second image to be processed;
a fifth obtaining module, configured to determine, in response to a special effect selection operation performed on the second image and based on a running special effect data packet, a first special effect display parameter and a part to be recognized;
a sixth obtaining module, configured to acquire a second part image from a fifth image based on the part to be recognized; wherein the fifth image is the same image as, or an image related to, the second image;
and the eighth display module is used for presenting at least one second part special effect material on the second image based on the first special effect display parameter and the second part image.
In yet another aspect, the present application provides a computer device, including a memory and a processor, where the memory stores a computer program executable on the processor, and the processor implements the steps of the method when executing the program.
In yet another aspect, the present application provides a computer storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the method.
In yet another aspect, the present application provides a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, and when the computer program is read and executed by a computer, the computer program implements the steps of the method.
In the embodiments of the application, a first part image is first acquired, and at least one first part special effect material to be edited is displayed on a first image based on the first part image, so that a user can edit the at least one first part special effect material on the first image according to actual special effect editing requirements. A special effect data packet is then generated based on the acquired first special effect display parameter, so that, when the special effect data packet is run, at least one second part special effect material corresponding to the first part special effect material is presented on a second image based on the first special effect display parameter and a second part image. In this way, the interest and diversity of special effects can be improved, personalized customization of part-based special effects can be realized, and users' special effect editing requirements can be better met.
Drawings
Fig. 1 is a schematic flow chart illustrating an implementation of a data generation method according to an embodiment of the present application;
fig. 2 is a schematic flow chart illustrating an implementation of a data generation method according to an embodiment of the present application;
fig. 3 is a schematic flow chart illustrating an implementation of a data generation method according to an embodiment of the present application;
fig. 4 is a schematic flow chart illustrating an implementation of a data generation method according to an embodiment of the present application;
fig. 5 is a schematic flow chart illustrating an implementation of a data generation method according to an embodiment of the present application;
fig. 6 is a schematic flow chart illustrating an implementation of an image processing method according to an embodiment of the present application;
fig. 7A is a schematic diagram illustrating a presentation effect of a head special effect material according to an embodiment of the present application;
fig. 7B is a schematic diagram illustrating a presentation effect of a head special effect material according to an embodiment of the present application;
fig. 7C is a schematic diagram illustrating a presentation effect of a head special effect material according to an embodiment of the present application;
fig. 7D is a schematic diagram of a special effect editing interface of a special effect editing tool according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a data generating apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 10 is a hardware entity diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the present application clearer, the technical solutions of the present application are described in further detail below with reference to the drawings and embodiments. The described embodiments should not be considered as limiting the present application; all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Where the terms "first", "second" and "third" appear in the specification, they are used merely to distinguish between similar items and do not imply a particular ordering of those items. It is to be understood that, where permitted, the specific order or sequence may be interchanged, so that the embodiments of the application described herein can be performed in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application.
Embodiments of the present application provide a data generation method, which may be executed by a computer device. The computer device may be any suitable device with data processing capability, such as a server, a smart access control device, a capture camera, a webcam, a laptop, a tablet computer, a desktop computer, a smart television, a set-top box, or a mobile device (e.g., a mobile phone, a portable video player, a personal digital assistant, a dedicated messaging device, or a portable game device). As shown in fig. 1, the method includes the following steps S101 to S103:
step S101, acquiring a first part image, and displaying at least one first part special effect material to be edited on the first image based on the first part image.
Here, the first image is any suitable image for assisting in the special effect editing, and may include a single image frame or a plurality of image frames that are consecutive in time sequence, which is not limited herein. In practice, the skilled person can select a suitable image as the first image according to actual requirements. For example, the first image may be a system default, may be imported by the user, or may be acquired by the user in real time.
The first part image is any suitable image containing a specific part, and may include a single image frame or a plurality of image frames that are consecutive in time sequence, which is not limited herein. The part included in the first part image may be a human body part, such as a head, a hand, a foot, an arm, a waist, and the like of a human, or a body part of another object, such as a head, a cat claw, a cat tail, a head of a dog, a dog claw, a dog tail, and the like of a cat. In implementation, the first part image may be a system-default image including a specific part (e.g., a system-default human head image, cat paw image, or dog tail image), an image including a specific part previously acquired from the internet or a database and imported by a user, or a sub-image including a specific part and segmented by a user from the first image or another image different from the first image in real time.
The first part special effect material to be edited may be a special effect material generated based on the first part image and representing a part included in the first part image. For example, in the case where the first region image is an image including a head of a person, the first region special effect material to be edited may be a head special effect material to be edited that presents the head; under the condition that the first part image is an image containing the cat claw, the first part special effect material to be edited can be a cat claw special effect material to be edited and showing the cat claw; in the case that the first part image is an image including a dog tail, the first part special effect material to be edited may be a dog tail special effect material to be edited that presents the dog tail.
Based on the first region image, one or more first region effect materials to be edited may be displayed at any suitable location of the first image. The manner of displaying the first region special effect material to be edited on the first image may be a default of the system or may be set by the user, which is not limited herein.
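By way of illustration only, displaying a part special effect material at a given position on the first image can be sketched as a per-pixel alpha blend. The function name, the grayscale pixel representation, and the mask format below are assumptions made for the sketch, not part of this application:

```python
def overlay_material(base, material, alpha_mask, top, left):
    """Alpha-blend a part special effect material onto a base image.

    base: 2D list of grayscale pixels (the first image)
    material / alpha_mask: 2D lists of the same shape (the part material
        and its per-pixel opacity in [0, 1])
    top, left: display position of the material on the base image
    """
    out = [row[:] for row in base]  # do not modify the original first image
    for i, (mrow, arow) in enumerate(zip(material, alpha_mask)):
        for j, (m, a) in enumerate(zip(mrow, arow)):
            y, x = top + i, left + j
            if 0 <= y < len(out) and 0 <= x < len(out[0]):
                # blend material pixel over the base pixel by its opacity
                out[y][x] = round(a * m + (1 - a) * out[y][x])
    return out

base = [[0] * 4 for _ in range(4)]          # 4x4 black base image
material = [[255, 255], [255, 255]]         # 2x2 white material
alpha = [[1.0, 0.5], [0.0, 1.0]]            # per-pixel opacity
blended = overlay_material(base, material, alpha, top=1, left=1)
```

Pixels outside the base image are simply clipped, which also covers the case where the material is dragged partially off-screen during editing.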
Step S102, in response to a first special effect editing operation performed on at least one of the first part special effect materials, obtaining a first special effect display parameter.
Here, the first special effect editing operation may be an operation performed by a user according to an actual requirement to complete editing of the display effect of the at least one first part special effect material, and may be a single operation or an operation group formed by a series of operations. By performing the first special effect editing operation on the at least one first part special effect material, the user can edit the display effect of the at least one first part special effect material.
The first special effect display parameter is a display parameter corresponding to a display effect of at least one first part special effect material edited by the first special effect editing operation, and may include, but is not limited to, one or more of the number of the first part special effect materials, a display parameter set for each first part special effect material, a trigger event, a cyclic display mode, and the like. By performing a first special effect editing operation on at least one first portion special effect material, a user can edit the display effect of the at least one first portion special effect material, thereby determining a first special effect display parameter. In implementation, the acquired first special effect display parameter may be determined according to an actually performed first special effect editing operation, which is not limited herein. For example, in a case where the first special effect editing operation is an operation of setting a display position of the head special effect material of the person, the acquired first special effect display parameter may include the set display position of the head special effect material of the person. For another example, in a case where the first special effect editing operation is an operation of setting a trigger event of the cat-paw special effect material, the acquired first special effect display parameter may include the set trigger event of the cat-paw special effect material.
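The first special effect display parameters described above could, for example, be modeled as a small data structure holding the number of materials and the per-material settings. The field names below are illustrative assumptions, not taken from this application:

```python
from dataclasses import dataclass, field, asdict
from typing import List, Optional, Tuple

@dataclass
class MaterialDisplayParams:
    # Per-material display parameters; all field names are illustrative.
    position: Tuple[float, float] = (0.0, 0.0)  # display position on the image
    scale: float = 1.0                          # display size factor
    rotation_deg: float = 0.0                   # rotation angle
    trigger_event: Optional[str] = None         # e.g. "mouth_open"
    loop_mode: str = "once"                     # e.g. "once" or "loop"

@dataclass
class EffectDisplayParams:
    # The first special effect display parameters taken as a whole.
    materials: List[MaterialDisplayParams] = field(default_factory=list)

    @property
    def material_count(self) -> int:
        # the number of first part special effect materials
        return len(self.materials)

params = EffectDisplayParams(materials=[
    MaterialDisplayParams(position=(0.4, 0.2), trigger_event="mouth_open"),
    MaterialDisplayParams(scale=1.5, loop_mode="loop"),
])
```

`asdict(params)` turns the whole structure into plain dictionaries, which is convenient if the parameters later need to be serialized into a data packet.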
In some embodiments, after obtaining the first effect display parameters, at least one first region effect material may be presented on the first image based on the first effect display parameters. Here, based on the acquired first special effect display parameter, at least one first-region special effect material edited by the special effect editing operation may be presented on the first image. For example, in a case where the first effect display parameters include the set display position and display size of the head effect material of at least one person, the corresponding head effect material may be presented on the first image in accordance with the display position and display size of each head effect material.
Step S103, generating a special effect data packet based on the first special effect display parameter; when the special effect data packet is run, the special effect data packet is used for presenting, on a second image, at least one second part special effect material corresponding to the first part special effect material based on the first special effect display parameter and a second part image; wherein the second part image and the first part image contain the same type of part.
Here, the second image may be any suitable image to be subjected to special effect processing, and may be an offline image frame or a video acquired in advance, or an image frame or a video acquired in real time, which is not limited herein.
The second part image is any suitable image containing a specific part of the same type as the part contained in the first part image, for example, in the case where the part contained in the first part image is a head of a person, the specific part contained in the second part image is also the head of the person; when the part included in the first part image is a cat claw, the specific part included in the second part image is also a cat claw. The second partial image may include a single image frame or may include a plurality of image frames that are consecutive in time sequence, which is not limited herein. The part included in the second part image may be a human body part, such as a head, a hand, a foot, an arm, a waist, and the like of a human, or a body part of another object, such as a head, a cat claw, a cat tail, a head of a dog, a dog claw, a dog tail, and the like of a cat. In implementation, the second part image may be a system-default image including the specific part (e.g., a system-default human head image, cat paw image, or dog tail image), an image including the specific part previously acquired from the internet or a database and imported by the user, or a sub-image including the specific part and segmented by the user in real time from the second image or another image different from the second image.
The second region special effect material may be special effect material generated based on the second region image and presenting a region included in the second region image. The second part special effect material presented on the second image corresponds to the first part special effect material, and the presentation effect of the second part special effect material on the second image is determined based on the first special effect display parameter. For example, in the case where the first region image is an image including a head of a person, the first region special effect material may be a head special effect material for presenting the head, and the second region image is also an image including a head of a person, and the second region special effect material corresponding to the first region special effect material is a special effect material for presenting a head of a person included in the second region image generated based on the second region image; if the first part image is an image including a cat claw, the first part effect material may be a cat claw effect material for presenting the cat claw, and if the second part image is also an image including a cat claw, the second part effect material corresponding to the first part effect material is a effect material for presenting a cat claw included in the second part image generated based on the second part image.
The special effect data packet may be a data packet for presenting at least one part special effect material, and may include the first special effect display parameter and the part to be recognized corresponding to the at least one part special effect material. The special effect data packet may be used in any suitable application platform, such as a live streaming platform, a short video platform, or a camera application. When the special effect data packet is run, a second part image may be acquired from the second image or from another image different from the second image based on the corresponding part to be recognized, and at least one second part special effect material corresponding to the edited first part special effect material may be presented on the second image based on the corresponding first special effect display parameter and the acquired second part image. In implementation, a person skilled in the art may determine an appropriate special effect data packet according to actual conditions and generate it based on the first special effect display parameter in an appropriate manner, which is not limited herein.
In some embodiments, the special effect data packet may be an executable file including instructions for obtaining the first special effect display parameter included in the special effect data packet and the part to be recognized corresponding to the at least one first part special effect material. When the special effect data packet is executed, the first special effect display parameter and the part to be recognized are obtained, a second part image is acquired from the second image or from another image different from the second image based on the obtained part to be recognized, and at least one second part special effect material corresponding to the first part special effect material is presented on the second image based on the obtained first special effect display parameter and the acquired second part image.
In some embodiments, the special effect data packet may be a data resource package including the first special effect display parameter and the part to be recognized corresponding to the at least one first part special effect material. The application platform may load the special effect data packet through a specific Software Development Kit (SDK) or specific program instructions, and parse it to obtain the corresponding first special effect display parameter and the part to be recognized. It may then acquire a second part image from the second image or from another image different from the second image based on the obtained part to be recognized, and present at least one second part special effect material corresponding to the first part special effect material on the second image based on the obtained first special effect display parameter and the acquired second part image.
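As a minimal sketch only, a data resource package of this kind might be serialized as JSON carrying the display parameters together with the part to be recognized. The schema and field names below are hypothetical, not defined by this application:

```python
import json

# Hypothetical schema for a special effect data resource package:
# the display parameters plus the part type to be recognized at run time.
package = {
    "version": 1,
    "part_to_recognize": "cat_paw",  # drives segmentation of the second part image
    "display_params": [
        {"position": [0.5, 0.3], "scale": 1.2, "trigger": "frame>=2"},
    ],
}

blob = json.dumps(package)  # what would be shipped to the application platform

def load_package(blob):
    """Parse the package and return (part to recognize, display parameters)."""
    data = json.loads(blob)
    return data["part_to_recognize"], data["display_params"]

part, display_params = load_package(blob)
```

The application platform would then feed `part` to its part recognition step and apply `display_params` when rendering the second part special effect material.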
In the embodiment of the application, a first part image is first acquired, and at least one first part special effect material to be edited is displayed on the first image based on the first part image, so that a user can edit the at least one first part special effect material on the first image according to actual special effect editing requirements. A special effect data packet is then generated based on the obtained first special effect display parameter, so that when the special effect data packet is run, at least one second part special effect material corresponding to the first part special effect material is presented on the second image based on the first special effect display parameter and the second part image. In this way, the interest and diversity of the special effect can be improved, personalized customization of the part-based special effect can be realized, and the special effect editing requirements of the user can be better met.
In some embodiments, a special effect editing interface for performing operations related to special effect editing and information display may be displayed on any suitable electronic device with an interface interaction function, for example, the special effect editing interface may be displayed on a notebook computer, a mobile phone, a tablet computer, a palm computer, a personal digital assistant, a digital television, a desktop computer, or the like. In implementation, the electronic device displaying the special effect editing interface may be the same as or different from the computer device executing the data generating method, and is not limited herein. For example, the computer device executing the data generating method may be a notebook computer, the electronic device displaying the special effect editing interface may also be the notebook computer, and the special effect editing interface may be an interactive interface of a client running on the notebook computer, or a web page displayed in a browser running on the notebook computer. For another example, the computer device executing the data generating method may be a server, the electronic device displaying the special effect editing interface may also be a notebook computer, the special effect editing interface may be an interactive interface of a client running on the notebook computer, or a web page displayed in a browser running on the notebook computer, and the notebook computer may access the server through the client or the browser.
In some embodiments, the first image is a reference template image; the displaying at least one first region special effect material to be edited on the first image based on the first region image in step S101 may include:
step S111, determining at least one first part special effect material to be edited based on the first part image;
here, at least one first region effect material to be edited for presenting a region included in the first region image may be generated based on the first region image. In implementation, a person skilled in the art may generate at least one first region special effect material to be edited based on the first region image in an appropriate manner according to actual special effect requirements, which is not limited herein.
Step S112, displaying the reference template image and at least one first part special effect material in an editing operation area.
Here, the editing operation region may be a region for performing an editing operation related to special effect editing on the special effect editing interface. The editing operation area may include an area for displaying and editing the reference template image and the first region special effect material.
The reference template image may be any suitable image used as a reference in the process of performing an operation related to special effect editing in the editing operation area, and may include, but is not limited to, a preset human face template image, a cat face template image, a dog face template image, a human body template image, or the like. For example, for the first special effect editing operation of the face special effect material, a preset standard face template may be used as a reference template image, so that the user refers to the standard face template to perform the first special effect editing operation on the face special effect material. For another example, for the first effect editing operation for the human body effect material, a preset standard human body template may be used as a reference template image, so that the user refers to the standard human body template to perform the first effect editing operation for the human body effect material. In practice, the reference template image and the at least one first region special effect material may be displayed in any suitable manner in the editing operation area according to actual situations, which is not limited herein.
In some embodiments, the step S102 may include: and acquiring first special effect display parameters in response to a first special effect editing operation on at least one first part special effect material on a parameter setting panel and/or a reference template image of an editing operation area. Here, the editing operation area may include a parameter setting panel for setting display parameters of at least one first region special effect material, and/or a visual editing area for displaying and editing the reference template image and the first region special effect material. When implemented, the first special effect editing operation may include an operation of setting a display parameter of at least one first-portion special effect material on the parameter setting panel, and may also include editing operations such as position dragging, rotation, size scaling, and clicking on the reference template image on the first-portion special effect material to be edited.
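By way of illustration, editing operations such as position dragging, rotation, and size scaling could update a material's display parameters as sketched below. The operation names and parameter fields are assumptions for the sketch, not part of this application:

```python
def apply_edit(params, op):
    """Apply a single special effect editing operation to one material's
    display parameters. The operation kinds ("drag", "rotate", "scale")
    are illustrative names, not taken from this application."""
    kind = op["kind"]
    if kind == "drag":        # position dragging on the reference template image
        x, y = params["position"]
        params["position"] = (x + op["dx"], y + op["dy"])
    elif kind == "rotate":    # rotating the material; keep the angle in [0, 360)
        params["rotation_deg"] = (params["rotation_deg"] + op["deg"]) % 360
    elif kind == "scale":     # size scaling of the material
        params["scale"] *= op["factor"]
    return params

p = {"position": (0.5, 0.5), "rotation_deg": 0.0, "scale": 1.0}
apply_edit(p, {"kind": "drag", "dx": 0.1, "dy": -0.2})
apply_edit(p, {"kind": "rotate", "deg": 45.0})
```

A parameter-setting panel would typically set the same fields directly, while the visual editing area would emit operations like these as the user manipulates the material.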
In some embodiments, where the first special effect editing operation comprises a loop display setting operation, the first special effect display parameters comprise a loop display parameter for each first part special effect material; the method further comprises the following step: step S104a, for each of the at least one first part special effect material, cyclically displaying, on the preview image, a third part special effect material corresponding to the first part special effect material based on the loop display parameter of the first part special effect material.
Here, the loop display parameter of the first part special effect material is a parameter defining the loop display process of the first part special effect material, and may include, but is not limited to, one or more of the loop display manner, the number of loops, the loop interval duration, and the like of the first part special effect material. In some embodiments, the first part special effect material may include at least one special effect animation frame, and the loop display parameter is then a parameter defining the loop display process of each special effect animation frame in the material; it may include, but is not limited to, the at least one special effect animation frame to be displayed in a loop, and one or more of the loop display manner, the number of loops, the loop interval duration, and the like of the at least one special effect animation frame.
The loop display setting operation may be an operation performed by the user to complete the setting of the loop display parameter of the first-part special effect material, and may be a single operation or an operation group formed by a series of operations. In implementation, the user may set an appropriate loop display parameter for the at least one first-portion special effect material or the at least one special effect animation frame of the at least one first-portion special effect material according to an actual situation, which is not limited in this embodiment of the present application. For example, the loop display parameter of each first portion effect material may be acquired in response to an operation of setting the loop display parameter of at least one first portion effect material on the parameter setting panel of the editing operation area and/or the reference template image.
The preview image is a single image frame, or a video composed of a plurality of image frames, used for previewing the presentation effect of the edited first part special effect material. The preview image may be displayed on the special effect editing interface. In practice, the user may select an appropriate preview image according to actual conditions, which is not limited herein. For example, the user may obtain a previously captured preview image from local storage, a server, the cloud, or the like, or may obtain a preview image captured in real time through a camera, a webcam, or the like. Based on the acquired loop display parameter of the first part special effect material, the third part special effect material corresponding to the first part special effect material can be cyclically displayed on the preview image.
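The loop display parameters described above can be illustrated with a small scheduling function that decides which special effect animation frame, if any, to show at a given playback time. The function and parameter names below are assumptions for the sketch, not part of this application:

```python
def frame_at(t, n_frames, frame_duration, loop_count, loop_interval):
    """Return the animation frame index to display at time t (seconds),
    or None when the material should not be shown.

    n_frames: special effect animation frames per loop of the material
    frame_duration: seconds each frame is displayed
    loop_count: number of loops (None = loop forever)
    loop_interval: pause in seconds between consecutive loops
    """
    cycle = n_frames * frame_duration + loop_interval
    loop_index = int(t // cycle)
    if loop_count is not None and loop_index >= loop_count:
        return None                      # all loops finished
    offset = t - loop_index * cycle
    if offset >= n_frames * frame_duration:
        return None                      # inside the pause between loops
    return int(offset // frame_duration)
```

For example, with 4 frames of 0.5 s each, 2 loops, and a 1-second loop interval, the material animates for 2 s, pauses for 1 s, animates once more, and then stays hidden.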
In the above embodiment, the user may set the circular display parameter for at least one first-portion special effect material according to the editing requirement, so as to perform the circular display on the first image based on the circular display parameter of the first-portion special effect material, thereby further improving the diversity and interest of the special effect edited by the user, and further better satisfying the special effect editing requirement of the user.
In some embodiments, where the first special effect editing operation comprises a trigger event editing operation, the first special effect display parameters comprise a trigger event for each first part special effect material. The above method may further comprise: step S104b, for each first part special effect material of the at least one first part special effect material, presenting, on the preview image, a third part special effect material corresponding to the first part special effect material based on the trigger event of the first part special effect material.
Here, the trigger event of the first part special effect material may be any suitable event for triggering the first part special effect material to start or stop presenting, and may include, but is not limited to, an event triggered based on one or more of time, number of frames of images, picture content of images, and relevance to other special effect effects, and the like.
In some embodiments, the triggering event may include at least one of: a time trigger event, an image frame trigger event, a picture trigger event, an associated special effect trigger event.
The time trigger event may include, but is not limited to, the current time, the presentation time of the first image, or the presentation time of the first part special effect material reaching a set time condition. For example, the first part special effect material may be triggered to start presenting when the current time reaches a preset time, or after the first image has been presented for 5 seconds; and it may be triggered to stop presenting after the first image has been presented for 10 seconds, or after the first part special effect material has been presented for 8 seconds.
The image frame trigger event may include, but is not limited to, the current image frame of the first image, the number of times the first image has been played in a loop, the current image frame of the first part special effect material, or the number of times the first part special effect material has been played in a loop reaching a set condition. For example, the first part special effect material may be triggered to start presenting when the current image frame of the first image is the 2nd frame, and triggered to stop presenting when the current image frame of the first image is the last frame; triggered to start presenting when the first image has been played in a loop once, and triggered to stop presenting when it has been played in a loop twice; triggered to stop presenting when the current image frame of the first part special effect material is the last frame; or triggered to stop presenting when the first part special effect material is played in a loop for the 6th time.
The picture trigger event may include, but is not limited to, the picture content of the first image satisfying a set event. For example, the first part special effect material may be triggered to start presenting when a face appears in the picture content of the first image; triggered to start presenting when a person or another animal in the picture content of the first image opens its mouth, and triggered to stop presenting when the mouth closes; triggered to start presenting when a blink occurs in the picture content of the first image; or triggered to stop presenting at the 2nd blink in the picture content of the first image.
The associated special effect trigger event may include, but is not limited to, presentation of other special effect effects associated with the first location special effect material in the first image satisfying a set condition. For example, a first region special effect material may be set to be associated with the face distortion special effect, and the first region special effect material may be triggered to start presenting when the face distortion special effect starts to be presented, and may be triggered to stop presenting when the face distortion special effect stops to be presented. For another example, the first location special effect material may be set to start to be presented in association with the filter special effect, and the first location special effect material may be triggered to start to be presented when the filter special effect starts to be presented.
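The four kinds of trigger events above can be sketched as a single evaluation function over the current playback state. The trigger encoding and state fields below are a hypothetical illustration only, not part of this application:

```python
def should_present(trigger, state):
    """Evaluate whether a part special effect material should be presented.

    trigger: (kind, value) pair describing a hypothetical trigger event
    state: current playback state - elapsed time, current frame index,
        detected picture content, and active associated special effects
    """
    kind, value = trigger
    if kind == "time":          # time trigger: present after `value` seconds
        return state["elapsed"] >= value
    if kind == "frame":         # image frame trigger: present from frame `value`
        return state["frame_index"] >= value
    if kind == "content":       # picture trigger: e.g. "face", "mouth_open"
        return value in state["detected_content"]
    if kind == "associated":    # associated special effect trigger
        return value in state["active_effects"]
    return False

state = {"elapsed": 6.0, "frame_index": 3,
         "detected_content": {"face"}, "active_effects": {"filter"}}
```

At run time such a function would be re-evaluated each frame as the state changes, starting or stopping the presentation of each material accordingly.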
In the embodiment, the trigger event can be set for the first part special effect material according to the actual editing requirement, so that the diversity and the interestingness of the special effect edited by the user can be further improved, and the special effect editing requirement of the user can be better met.
In some embodiments, the step S102 may include:
step S121, in response to a combination operation performed on at least two first part special effect materials, acquiring at least one first special effect material combination; step S122, in response to a combined special effect editing operation performed on each first special effect material combination, acquiring a third special effect display parameter of each first special effect material combination. Here, at least two first part special effect materials may be combined to obtain at least one first special effect material combination, and through the combined special effect editing operation, the plurality of first part special effect materials in each first special effect material combination are edited as a whole to obtain the third special effect display parameter of each first special effect material combination. The third special effect display parameter of a first special effect material combination is a display parameter for presenting the plurality of first part special effect materials in the combination as a whole, and may include, but is not limited to, one or more of the display position, rotation angle, display size, trigger event, loop display mode, and the like of the combination as a whole.
The above method may further comprise: step S105, for each first special effect material combination of the at least one first special effect material combination, presenting, on a preview image, a third special effect material combination corresponding to the first special effect material combination based on the third special effect display parameter of the first special effect material combination; the third special effect material combination includes at least two third part special effect materials respectively corresponding to the first part special effect materials in the first special effect material combination. Here, the at least two third part special effect materials in the third special effect material combination may be presented as a whole based on the third special effect display parameter, so as to present, on the preview image, a special effect preview effect corresponding to the edited first special effect material combination.
In the embodiment, the at least two first part special effect materials are used as one first special effect material combination, and the user can carry out combined special effect editing operation on the at least one first special effect material combination, so that the diversity and the interestingness of the special effect edited by the user can be further improved, the special effect editing requirement of the user can be further met, and the operation and use experience of the user in the special effect editing process can be further improved.
In some embodiments, the step S121 may include: step S131, in response to a special effect material selection operation performed in the editing operation area, acquiring at least two selected first part special effect materials to be combined; step S132, in response to a combination operation performed on the selected at least two first part special effect materials to be combined, acquiring at least one combined first special effect material combination. Here, the user may perform the special effect material selection operation on the parameter setting panel and/or the reference template image of the editing operation area, and perform the combination operation on the selected at least two first part special effect materials to be combined to obtain the at least one first special effect material combination. For example, at least two first part special effect materials to be combined may be set on the parameter setting panel and added to at least one first special effect material combination; or at least two first part special effect materials to be combined may be selected on the reference template image and combined into at least one first special effect material combination by a click or position-drag operation on the selected materials. In this way, at least one first special effect material combination can be acquired conveniently and quickly, which can further improve the user's operating experience during special effect editing.
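Treating a first special effect material combination as a whole can be illustrated by applying one combination-level translation, rotation, and scale to every member's offset relative to the combination's origin. The 2D coordinate convention and names below are assumptions for the sketch, not part of this application:

```python
import math

def place_combination(members, group_pos, group_rot_deg, group_scale):
    """Apply a combination-level display parameter set to each member.

    members: list of (dx, dy) offsets of each first part special effect
        material relative to the combination's own origin
    Returns the absolute display position of every member after the
    combination as a whole is scaled, rotated, and translated.
    """
    rot = math.radians(group_rot_deg)
    cos_r, sin_r = math.cos(rot), math.sin(rot)
    placed = []
    for dx, dy in members:
        sx, sy = dx * group_scale, dy * group_scale           # scale as a whole
        rx = sx * cos_r - sy * sin_r                          # rotate as a whole
        ry = sx * sin_r + sy * cos_r
        placed.append((group_pos[0] + rx, group_pos[1] + ry)) # then translate
    return placed

positions = place_combination([(1.0, 0.0), (0.0, 1.0)],
                              group_pos=(10.0, 10.0),
                              group_rot_deg=90.0, group_scale=2.0)
```

Because the members keep their relative offsets, a single combined editing operation moves, rotates, or resizes all of them consistently.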
In some embodiments, each first part special effect material comprises at least one key frame; the first special effect editing operation comprises a special effect material number setting operation and/or a frame parameter setting operation, and the first special effect display parameters comprise the number of the first part special effect materials and/or the key frame display parameters of each first part special effect material. The step S102 may include at least one of the following steps S141 and S142:
step S141, in response to the quantity setting operation performed on the first part special effect materials, acquiring the quantity of the first part special effect materials;
here, the quantity setting operation may be an operation of inputting or selecting a quantity for each first part special effect material on the parameter setting panel, in which case the input or selected quantity is the quantity of the corresponding first part special effect material; alternatively, it may be a material copying operation performed on each first part special effect material on the reference template image, in which case the number of copies of a first part special effect material is its quantity. In implementation, a person skilled in the art may choose a suitable quantity setting operation according to the actual application scenario, which is not limited in the embodiments of the present application.
Step S142, for each first part special effect material, in response to a frame parameter setting operation performed on at least one key frame of the first part special effect material, acquiring the key frame display parameters of the first part special effect material.
Here, each first part special effect material has at least one key frame, and each key frame may correspond to a key frame time in the animation of the first part special effect material. In implementation, one or more special effect animation frames may be selected as key frames from the at least one special effect animation frame of the first part special effect material in a suitable manner according to the actual situation, which is not limited herein.
The frame parameter setting operation may be an operation performed by a user to set the display parameters of at least one key frame of the first part special effect material, and may be a single operation or an operation group formed by a series of operations. In implementation, the display parameters that can be set for a key frame through the frame parameter setting operation may be determined according to the actual situation, which is not limited in the embodiments of the present application.
In the above embodiment, the user can perform the special effect material quantity setting operation and/or the frame parameter setting operation on the key frames of at least one first part special effect material according to the editing requirement, which further improves the diversity and interest of the special effect edited by the user and better meets the user's special effect editing requirement.
In some embodiments, the key frame display parameters of the first part special effect material comprise the display parameters of at least one key frame of the first part special effect material, and the display parameters of each key frame comprise at least one of: visible state, display position, display size, and rotation angle.
In the embodiment, the user can flexibly set the display effect of each key frame of the first part special effect material based on different display parameters according to the editing requirement, so that the diversity and the interestingness of the special effect edited by the user can be further improved, and the special effect editing requirement of the user can be better met.
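The per-key-frame display parameters listed above (visible state, display position, display size, rotation angle), together with the quantity from step S141, could be modeled as follows; the class names, field names, and defaults are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class KeyFrameDisplayParams:
    # The four display parameters listed above; defaults are illustrative
    visible: bool = True                  # visible state
    position: tuple = (0.0, 0.0)          # display position (normalized x, y)
    size: tuple = (1.0, 1.0)              # display size (width/height scale)
    rotation_deg: float = 0.0             # rotation angle in degrees

@dataclass
class FirstPartMaterialParams:
    # Quantity from step S141 plus per-key-frame parameters from step S142
    quantity: int = 1
    key_frames: dict = field(default_factory=dict)  # key frame index -> params

params = FirstPartMaterialParams(
    quantity=3,
    key_frames={0: KeyFrameDisplayParams(rotation_deg=45.0)},
)
```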
The embodiment of the application provides a data generation method, which can be executed by computer equipment. As shown in fig. 2, the method includes the following steps S201 to S204:
a step S201 of acquiring the first part image from a third image in response to a part special effect adding operation performed on the first image in an editing operation area; wherein the third image is the same as, or related to, the first image.
Here, any suitable effect editing operation may be performed on the first image in the editing operation area, including but not limited to one or more of adding, editing, deleting, setting display parameters, etc. of the first part effect material.
The part special effect adding operation may be an operation performed by a user to add at least one first part special effect material on the first image, and may be a single operation or an operation group formed by a series of operations, which is not limited in this embodiment of the present application. In implementation, the part special effect adding operation may be a click operation performed by a user in the editing operation area, or may be an operation instruction input by the user in the editing operation area, which is not limited in the embodiment of the present application.
The third image is the same as or related to the first image, and may include the first part image. In implementation, the user may determine an appropriate third image according to actual requirements; for example, the third image may be the first image, another image related to the first image, or the first image presenting a special effect. The third image related to the first image may be determined by the user according to the actual application scene. For example, in a co-streaming (mic-linked) live video, the first image may be the live video of the current user and the third image may be the live video of another user co-streaming with the current user; the first part image (such as a head sub-image or a hand sub-image) may be acquired from a live video image frame of the other user, and at least one first part special effect material (such as a head special effect material or a hand special effect material) may be presented in the live video of the current user (i.e., the first image) based on the acquired first part image.
Through the part special effect adding operation on the first image, the first part image can be acquired from the third image, and at least one first part special effect material can be added to the first image based on the first part image. In implementation, the first part image may be acquired from the third image in any suitable manner, which is not limited in this application. For example, a specific part in the third image may be segmented by an image segmentation algorithm to obtain the first part image; alternatively, the third image may be displayed in an image display area of the special effect editing interface, and in response to an image area selection operation performed by the user on the third image, the image area selected by the user is acquired from the third image as the first part image.
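The second acquisition approach above (cropping a user-selected image area, as opposed to running an image segmentation algorithm) can be sketched as follows; the function name and nested-list image representation are illustrative assumptions:

```python
def crop_part_image(image, region):
    # Acquire a part image by cropping a user-selected rectangular area
    # (x, y, width, height) from the third image; the image is represented
    # here as a nested list of pixel rows for illustration.
    x, y, w, h = region
    return [row[x:x + w] for row in image[y:y + h]]

# A 4x4 "third image" whose pixel values encode their position
third_image = [[r * 4 + c for c in range(4)] for r in range(4)]
first_part_image = crop_part_image(third_image, (1, 1, 2, 2))  # 2x2 sub-image
```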
Step S202, at least one first part special effect material to be edited is displayed on the first image based on the first part image.
Step S203, in response to a first special effect editing operation performed on at least one of the first part special effect materials, obtaining a first special effect display parameter.
Step S204, generating a special effect data packet based on the first special effect display parameter; the special effect data packet is used for, when run, presenting at least one second part special effect material corresponding to the first part special effect material on a second image based on the first special effect display parameter and a second part image; wherein the second part image and the first part image contain the same type of part.
Here, the steps S202 to S204 correspond to the steps S101 to S103, respectively, and in implementation, specific embodiments of the steps S101 to S103 may be referred to.
In the embodiment of the application, the position special effect adding operation can be performed in a visual special effect editing interface, so that the user operation is facilitated, and the operation use experience of the user in the special effect editing process can be improved. In addition, the first part image is acquired from the third image, and the second part special effect material corresponding to the part in the third image can be presented on the second image, so that the diversity of the special effect of the user editing can be further improved, and the special effect editing requirement of the user can be better met.
In some embodiments, the third image comprises at least one image frame, and the first part special effect material includes at least one special effect animation frame.
The acquiring the first part image from the third image in step S201 may include: step S211, acquiring the first part image from the current image frame of the third image.
The step S202 may include: step S212 is to display a special effect animation frame of at least one to-be-edited first part special effect material on the first image based on the first part image.
In the embodiment, the corresponding special effect animation frame can be displayed on the first image in real time based on the first position image acquired from the current image frame of the third image, so that the diversity and the interestingness of the special effect edited by the user can be further improved, and the special effect editing requirement of the user can be better met.
The embodiment of the application provides a data generation method, which can be executed by computer equipment. As shown in fig. 3, the method includes steps S301 to S304 as follows:
step S301, a first part image is obtained, and at least one first part special effect material to be edited is displayed on the first image based on the first part image.
Step S302, in response to a first special effect editing operation performed on at least one of the first part special effect materials, obtaining a first special effect display parameter.
Here, the steps S301 to S302 correspond to the steps S101 to S102, respectively, and in the implementation, specific embodiments of the steps S101 to S102 may be referred to.
Step S303, based on the first special effect display parameter, presenting at least one third part special effect material corresponding to the first part special effect material on the preview image.
Here, the preview image is a single frame image or a video composed of multiple frame images for previewing the effect of presenting the edited first-part special effect material, and the user may select an appropriate preview image according to the actual situation, which is not limited here. In some embodiments, a special effect preview area may be set in the special effect editing interface for displaying a preview effect of the preview image and the edited first part special effect material.
The user may obtain a preview image acquired in advance from a local place, a server, a cloud, or the like, or may obtain a preview image acquired in real time through a camera, a web camera, or the like, which is not limited herein.
Based on the acquired first special effect display parameters, at least one third part special effect material corresponding to the first part special effect material can be presented on the preview image.
Step S304, generating a special effect data packet based on the first special effect display parameter; the special effect data packet is used for, when run, presenting at least one second part special effect material corresponding to the first part special effect material on a second image based on the first special effect display parameter and a second part image; wherein the second part image and the first part image contain the same type of part.
Here, the step S304 corresponds to the step S103, and when it is performed, reference may be made to a specific embodiment of the step S103.
In some embodiments, the preview image may include one of: an imported single frame image, imported continuous at least two frame images, a single frame image acquired in real time, and continuous at least two frame images acquired in real time. Here, the imported single frame image may be a previously captured picture imported from local storage, a server, a cloud, or the like; the imported continuous at least two frame images may be a previously captured video imported from local storage, a server, a cloud, or the like; and the single frame image or continuous at least two frame images acquired in real time may be captured in real time by an image capturing component or the like.
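The four preview sources listed above could be normalized into a common frame list as in the sketch below; the source-type labels and function name are hypothetical, not terms from the application:

```python
def load_preview_frames(source_type, data):
    # Normalize the four preview sources into a list of frames: single frames
    # become a one-frame list, videos/streams must have at least two frames.
    if source_type in ("imported_single", "realtime_single"):
        return [data]
    if source_type in ("imported_video", "realtime_stream"):
        frames = list(data)
        if len(frames) < 2:
            raise ValueError("a video preview needs at least two consecutive frames")
        return frames
    raise ValueError("unknown preview source: " + source_type)

frames = load_preview_frames("imported_video", ["frame0", "frame1", "frame2"])
```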
The data generation method provided by the embodiment of the application can display the edited special effect in the preview image in real time in the process of the special effect editing operation of the user, so that the operation use experience of the user in the special effect editing process can be effectively improved, the user can adjust and optimize the special effect display parameters in real time based on the real-time displayed special effect, and the special effect editing requirement of the user can be better met.
In some embodiments, the step S303 may include:
step S311, in response to a special effect preview operation performed in the special effect preview area, acquiring a third part image from a fourth image; wherein the fourth image is the same as or related to the preview image, and the third part image and the first part image contain the same type of part;
here, the special effect preview operation may be performed in the special effect preview area, and may be any suitable operation for previewing the presentation effect of the edited first part special effect material, for example, one or more of a click operation on a preview button of the special effect preview area, an expansion operation of the special effect preview area, an operation of opening a camera in the special effect preview area, and the like.
The third part image is any suitable image containing a part of the same type as the part contained in the first part image. For example, when the part contained in the first part image is a human head, the part contained in the third part image is also a human head; when the part contained in the first part image is a cat paw, the part contained in the third part image is also a cat paw. The third part image may include a single image frame or a plurality of image frames that are consecutive in time sequence, which is not limited herein. In implementation, the third part image may be a system default image (e.g., a system default human head image, cat paw image, or dog tail image), an image imported by the user after being acquired in advance from the Internet or a database, or a sub-image obtained by segmenting, in real time, the preview image or another image related to the preview image.
The fourth image is the same as or related to the preview image, and may include the third part image. In implementation, the user may determine an appropriate fourth image according to actual requirements; for example, the fourth image may be the preview image, another image related to the preview image, or the preview image presenting a special effect. The fourth image related to the preview image may be determined by the user according to the actual application scenario, which is not limited herein. In practice, the third part image may be acquired from the fourth image in any suitable manner, which is not limited in this application. For example, a specific part in the fourth image may be segmented by an image segmentation algorithm to obtain the third part image; alternatively, the fourth image may be displayed in the special effect preview area, and in response to an image area selection operation performed by the user on the fourth image, the image area selected by the user is acquired from the fourth image as the third part image.
Step S312, based on the third part image and the first special effect display parameter, presenting at least one third part special effect material corresponding to the first part special effect material in the preview image in real time.
Here, the third part special effect material may be a special effect material that is generated based on the third part image and presents the part contained in the third part image. The third part special effect material presented on the preview image corresponds to the first part special effect material, and its presentation effect on the preview image is determined based on the first special effect display parameter. For example, when the first part image is an image containing a human head, the first part special effect material may be a head special effect material for presenting the head; if the third part image also contains a human head, the third part special effect material corresponding to the first part special effect material is a special effect material, generated based on the third part image, for presenting the human head contained in the third part image. Likewise, when the first part image is an image containing a cat paw, the first part special effect material may be a cat paw special effect material for presenting the cat paw; if the third part image also contains a cat paw, the third part special effect material corresponding to the first part special effect material is a special effect material, generated based on the third part image, for presenting the cat paw contained in the third part image.
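Generating a third part special effect material (enforcing the same-part-type constraint and then applying the first special effect display parameters) could be sketched as below; the dict-based representation, function name, and field names are illustrative assumptions:

```python
def make_third_part_material(third_part_image, first_part_type, display_params):
    # Build a third part special effect material from the third part image:
    # the third part image must contain the same type of part as the first
    # part image, and the presentation is driven by the first special effect
    # display parameters set during editing.
    if third_part_image["part_type"] != first_part_type:
        raise ValueError("third part image must contain the same type of part")
    material = {"source": third_part_image, "part_type": first_part_type}
    material.update(display_params)  # e.g. position, size, rotation from editing
    return material

material = make_third_part_material(
    {"part_type": "head", "pixels": []},
    "head",
    {"position": (0.5, 0.5), "rotation_deg": 30.0},
)
```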
In the above embodiment, the third part image may be obtained from a fourth image that is the same as or related to the preview image in the process of performing special effect preview, and the third part special effect material may be presented on the preview image based on the third part image and the first special effect display parameter, so that effect preview of special effect editing operation may be quickly and accurately implemented, and further, the use experience of the user may be further improved.
An embodiment of the present application provides a data generating method, which may be executed by a computer device, as shown in fig. 4, including the following steps S401 to S406:
step S401, in response to a second special effect editing operation performed on the first image, obtains a second special effect display parameter.
Here, the second special effect editing operation may be any suitable special effect editing operation performed on the first image by the user according to actual needs, and may include, but is not limited to, one or more of operations of adding, editing, deleting, setting display parameters, and the like of a special effect, which is not limited in this embodiment of the present application. For example, in the case where the first image is a face image, one or more of a sticker adding operation, a sticker editing operation, a sticker deleting operation, a makeup editing operation, a makeup setting operation, and the like may be performed on the face image; in the case where the first image is a cat image, one or more of a sticker adding operation, a sticker editing operation, a sticker deleting operation, a filter setting operation, a background setting operation, and the like may be performed on the cat image; in the case where the first image is a building image, one or more of a sticker adding operation, a filter setting operation, a background setting operation, a foreground setting operation, a lens special effect setting operation, and the like may be performed on the building image.
The second special effect display parameter is a display parameter corresponding to the special effect edited by the second special effect editing operation, and may include, but is not limited to, one or more of special effect materials in the special effect, a display parameter set for each special effect material, a trigger event of the special effect, and the like. In implementation, the acquired second special effect display parameter may be determined according to an actually performed special effect editing operation, and is not limited herein. For example, in a case where the special effect editing operation is a makeup editing operation performed on the face image, the acquired second special effect display parameters may include a set makeup material, a display position, an effect intensity, a material frame rate, and the like set for the makeup material, and based on the second special effect display parameters, a makeup effect edited by the makeup editing operation may be presented on the face image. For another example, in a case where the special effect editing operation is a sticker adding operation for a cat image, the acquired second special effect display parameters may include a sticker material to be added and a display position, a display size, a transparency, and the like set for the sticker material, and based on the second special effect display parameters, a sticker effect added by the sticker adding operation may be presented on the cat image.
Step S402, presenting a first special effect on the first image based on the second special effect display parameter.
Here, based on the acquired second special effect display parameter, the first special effect edited by the special effect editing operation may be presented on the first image. In practice, the first effect may include at least one of: the special effects of the paster, the beauty, the makeup, the background, the foreground and the lens are achieved.
Step S403, in response to a part special effect adding operation performed on the first image presenting the first special effect, acquiring a first part image from the first image presenting the first special effect.
Here, the first part special effect material may be added on the first image presenting the first special effect. Through the part special effect adding operation on the first image presenting the first special effect, the first part image can be acquired from that image, and at least one first part special effect material can be added to the first image based on the first part image. In implementation, any suitable manner may be adopted to acquire the first part image from the first image presenting the first special effect, which is not limited in this application. For example, a specific part in the first image presenting the first special effect may be segmented by an image segmentation algorithm to obtain the first part image; alternatively, in response to an image area selection operation performed by the user on the first image presenting the first special effect, the image area selected by the user is acquired from that image as the first part image.
Step S404, displaying at least one first part special effect material to be edited on the first image based on the first part image.
Step S405, in response to a first special effect editing operation performed on at least one of the first part special effect materials, obtains a first special effect display parameter.
Step S406, generating a special effect data packet based on the first special effect display parameter; the special effect data packet is used for, when run, presenting at least one second part special effect material corresponding to the first part special effect material on a second image based on the first special effect display parameter and a second part image; wherein the second part image and the first part image contain the same type of part.
Here, the steps S404 to S406 correspond to the steps S101 to S103, respectively, and in the implementation, specific embodiments of the steps S101 to S103 may be referred to.
According to the data generation method provided by the embodiment of the application, the part special effect material can be added to the first image on the basis of the first special effect, the first part image is obtained from the first image showing the first special effect, and therefore the first part special effect material added to the first image can be combined with the first special effect, the diversity and the interestingness of the special effect edited by a user are further improved, and the special effect editing requirement of the user is further met.
In some embodiments, the part contained in the first part image is a head, the first part special effect material includes a head special effect material, the part special effect adding operation includes a head special effect adding operation, and the first special effect includes a face beautification special effect. The step S404 may include:
step S411, displaying at least one head special effect material to be edited on the first image presenting the face beautification special effect based on the first part image; and the face in each head special effect material to be edited presents the face beautifying special effect.
Here, the head special effect material is a special effect material generated based on a head image of a person. The face beautification special effect is any suitable special effect for beautifying a face, and may include, but is not limited to, one or more of a face sticker special effect, a beauty special effect, a makeup special effect, and the like. In implementation, a head special effect adding operation may be performed on the first image presenting the face beautification special effect, the first part image is acquired from the first image presenting the face beautification special effect, and at least one head special effect material to be edited is displayed on that first image based on the acquired first part image. Because the acquired first part image presents the face beautification special effect, the at least one head special effect material to be edited displayed on the first image also presents the face beautification special effect.
In the embodiment, a user can add at least one head special effect material on the first image, and the face in each head special effect material can present the edited face beautifying special effect in the first image, so that the head special effect material added in the first image can be combined with the edited face beautifying special effect in the first image, the diversity and the interestingness of the special effect edited by the user can be further improved, and the special effect editing requirements of the user can be better met.
An embodiment of the present application provides a data generating method, which may be executed by a computer device, as shown in fig. 5, including the following steps S501 to S505:
step S501, a first part image is obtained, and at least one first part special effect material to be edited is displayed on the first image based on the first part image.
Step S502, in response to a first special effect editing operation performed on at least one of the first part special effect materials, obtaining a first special effect display parameter.
Here, the steps S501 to S502 correspond to the steps S101 to S102, respectively, and in the implementation, specific embodiments of the steps S101 to S102 may be referred to.
Step S503, updating the first special effect display parameter in response to the background setting operation performed on at least one of the first part special effect materials.
Here, any suitable background effect may be set for at least one first part special effect material through a background setting operation performed on the at least one first part special effect material. In implementation, in response to the background setting operation performed on the at least one first part special effect material, the set background display parameter may be added to the first special effect display parameter, so as to obtain the updated first special effect display parameter.
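The parameter update of step S503 amounts to merging the set background display parameter into the first special effect display parameters, as in the sketch below; the dict layout and function name are assumed for illustration:

```python
def apply_background_setting(first_effect_params, background):
    # Step S503: merge the background display parameter set by the user into
    # the first special effect display parameters, returning the updated copy.
    updated = dict(first_effect_params)
    updated["background"] = background
    return updated

updated_params = apply_background_setting(
    {"materials": ["head"]},
    {"image": "beach.png", "blur": 0.2},
)
```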
Step S504 is to display the set background on the preview image based on the first special effect display parameter, and present a third part special effect material corresponding to each of the first part special effect materials on the preview image with the set background.
Step S505, generating a special effect data packet based on the first special effect display parameter; the special effect data packet is used for, when run, presenting at least one second part special effect material corresponding to the first part special effect material on a second image based on the first special effect display parameter and a second part image; wherein the second part image and the first part image contain the same type of part.
Here, step S505 corresponds to step S103, and in the implementation, reference may be made to a specific embodiment of step S103.
In the embodiment, the background setting operation may be performed on at least one first-portion special effect material to display the set background on the preview image, and at least one third-portion special effect material is presented on the preview image provided with the background, so that diversity and interest of a special effect edited by a user may be further improved, and a special effect editing requirement of the user may be better met.
An embodiment of the present application provides an image processing method, which may be executed by a computer device, as shown in fig. 6, including the following steps S601 to S604:
step S601, a second image to be processed is acquired.
Here, the second image may be any suitable image to be subjected to special effect processing, and may be an offline image frame or a video acquired in advance, or an image frame or a video acquired in real time, which is not limited herein.
In implementation, the second image may be a single image frame or a plurality of image frames consecutive in time sequence, which is to be subjected to special effect processing and is shot, imported or collected in real time by a user in any suitable application platform such as live broadcast, short video and the like.
Step S602, in response to a special effect selection operation performed on the second image, determining a first special effect display parameter and a portion to be recognized based on the running special effect data packet.
Here, the special effect selection operation may be an operation for selecting a special effect data packet for performing special effect processing on the second image, and may be a single operation or an operation group formed by a series of operations, which is not limited in this embodiment of the present application. By executing the special effect selection operation, the user can select one special effect data packet to be operated from a plurality of preset special effect data packets for carrying out special effect processing on the second image to be processed. In implementation, the special effect data packet for performing the special effect processing on the second image may be generated by adopting any one of the above data generation methods in advance, and a user may select an appropriate special effect data packet to perform the special effect processing on the second image through a special effect selection operation according to an actual requirement.
Based on the running special effect data packet, a first special effect display parameter for displaying the part special effect material and a part to be recognized corresponding to at least one part special effect material can be obtained.
In some embodiments, the special effect data packet may be an executable file, and includes instructions for obtaining the first special effect display parameter and the portion to be identified corresponding to the special effect data packet, and by executing the special effect data packet, the instructions may be executed to obtain the first special effect display parameter and the portion to be identified corresponding to the special effect data packet.
In some embodiments, the special effect data packet may be a data resource package containing the first special effect display parameter and the part to be recognized corresponding to the at least one first part special effect material. The application platform may load the special effect data packet through a specific software development kit (SDK) or specific program instructions and parse it to obtain the corresponding first special effect display parameter and the part to be recognized corresponding to the at least one first part special effect material.
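As an illustration of the data-resource-package variant, the following sketch parses a hypothetical package layout (a zip archive carrying an `effect_config.json`). The file name, JSON keys, and archive format are assumptions made for illustration, not a format mandated by this application:

```python
import json
import zipfile

def load_effect_package(path):
    """Parse a special effect data package (hypothetical .zip layout) into
    the first special effect display parameters and the part to be recognized."""
    with zipfile.ZipFile(path) as pkg:
        config = json.loads(pkg.read("effect_config.json"))
    # The display parameters and target part are assumed to live under
    # these (illustrative) keys; a real package format may differ.
    display_params = config["display_params"]        # per-material key-frame settings
    part_to_recognize = config["part_to_recognize"]  # e.g. "head", "hand"
    return display_params, part_to_recognize
```

An executable-file variant of the packet would embed the equivalent of this parsing logic in its own instructions rather than expose a resource file to the platform.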
Step S603, acquiring a second part image from a fifth image based on the part to be recognized; wherein the fifth image is the same as, or related to, the second image.
Here, the second part image may be any suitable image containing the part to be recognized, and may be contained in the fifth image, which is the same as or related to the second image. In implementation, the user may determine a suitable fifth image according to actual requirements: the fifth image may be the second image itself, another image related to the second image, or the second image with a special effect already applied. Which image related to the second image serves as the fifth image may be determined according to the actual application scene. For example, in a co-streaming live broadcast, the second image may be the live video of the current user and the fifth image may be the live video of another user co-streaming with the current user; the second part image (such as a head sub-image or a hand sub-image) may then be acquired from a live video frame of the other user, and at least one first part special effect material (such as a head special effect material or a hand special effect material) may be presented in the live video of the current user (i.e., the second image) based on the acquired second part image.
By performing image segmentation on the part to be recognized in the fifth image, the second part image can be obtained, and at least one second part special effect material can then be added to the second image based on it. In practice, the second part image may be acquired from the fifth image in any suitable manner, which is not limited in this application.
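As one concrete (and deliberately simplified) way to acquire a part image from a segmentation result, the sketch below crops the tight bounding box selected by a binary mask. A production system would instead run a trained segmentation network and use tensor operations; the grid representation here is an illustrative assumption:

```python
def extract_part_image(image, mask):
    """Crop the region of `image` selected by a binary segmentation `mask`.
    `image` and `mask` are equally sized 2-D grids (rows of pixels); the
    crop is the tight bounding box of all mask cells set to 1."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    if not rows:
        return []  # no part detected in this frame
    top, bottom = min(rows), max(rows)
    left, right = min(cols), max(cols)
    return [row[left:right + 1] for row in image[top:bottom + 1]]
```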
Step S604, based on the first special effect display parameter and the second part image, at least one second part special effect material is presented on the second image.
Here, at least one second part special effect material corresponding to the second part image may be presented on the second image based on the acquired first special effect display parameter and the second part image; each second part special effect material presented on the second image is generated based on the second part image.
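Step S604 can be pictured as overlaying the material onto the second image at the display position given by the first special effect display parameter. The sketch below uses plain nested lists as pixel grids and omits display size and rotation (which would be applied to the material beforehand); all names are illustrative:

```python
def present_part_effect(base, material, x, y):
    """Overlay `material` (a 2-D grid of pixels, None meaning transparent)
    onto a copy of `base` at display position (x, y). Out-of-bounds
    material pixels are clipped."""
    out = [row[:] for row in base]
    for r, row in enumerate(material):
        for c, px in enumerate(row):
            br, bc = y + r, x + c
            if px is not None and 0 <= br < len(out) and 0 <= bc < len(out[0]):
                out[br][bc] = px
    return out
```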
In the image processing method provided by the embodiment of the application, a second image to be processed is first obtained; then, in response to the special effect selection operation performed on the second image, a first special effect display parameter and a part to be recognized are determined based on the running special effect data packet; a second part image is acquired from the fifth image based on the part to be recognized; and finally, at least one second part special effect material is presented on the second image based on the first special effect display parameter and the second part image. In this way, the second image to be processed can be subjected to special effect processing based on the acquired special effect data packet, so that at least one second part special effect material corresponding to the second part image is presented on the second image, which improves the interest and diversity of special effects and better meets users' special effect requirements.
An exemplary application of the embodiments of the present application in a practical application scenario will be described below.
With the development of live-streaming and short-video applications, users increasingly demand a variety of special effects applicable to pictures or videos, yet the related art lacks special effects derived from segmented images of body parts, such as a series of special effect designs built on a segmented head image.
In view of this, the embodiment of the present application provides a special effect editing tool based on the data generation method described above, which enables customized design of part special effect materials (which may be the first part special effect materials) and generates corresponding special effect data packets. With this tool, a user of an application platform such as a live-streaming, short-video, or camera platform can edit the parameter information of a custom part special effect material using an acquired part image containing a specific part, and then generate a corresponding special effect data packet based on the edited parameter information. The special effect data packet can run on the application platform so as to provide the custom part special effect material to users of that platform.
Taking custom editing of head special effect materials as an example, the special effect editing tool provided by the embodiment of the application can at least realize the following functions:
1) Acquire an image containing the head in real time, and extract a head segmentation result from the acquired image.
2) Define one or more head special effect materials. For each head special effect material, a key-frame animation can be defined, where the display states of the key frames of the same head special effect material at different key-frame times can be specified, including whether each key frame is visible, its display position, display size, and rotation angle, as well as the number of loops of the key-frame animation. For example, referring to fig. 7A, a head special effect material 110 may be added to the image 100, and the display size of the key frames in the head special effect material 110 may be set larger than the size of the original head in the image 100. For another example, referring to fig. 7B, a plurality of head special effect materials 120 may be added to the image 100, and different rotation angles and display positions may be set for each key frame of each head special effect material, so that when the animation corresponding to each head special effect material is played, a special effect in which the plurality of head special effect materials 120 move along the set display positions is presented on the image 100.
3) Define one or more special effect combinations, each of which can contain a plurality of head special effect materials. For each special effect combination, the head special effect materials it contains can be specified, and the display states of its key frames at different key-frame times can be defined, including whether each key frame is visible, its display position, display size, and rotation angle, as well as the number of loops of the key-frame animation.
4) Combine the head special effect material with a sticker special effect, a makeup special effect, a beautification special effect, and the like: a head segmentation result is extracted from an image to which the sticker, makeup, or beautification special effect has been applied, and the head special effect material is edited based on that segmentation result, so that the face sticker, makeup, or beautification special effect applied to the person's head in the original image is synchronously presented in each head special effect material. For example, with continued reference to fig. 7B, the head segmentation result may be extracted from the image 100 on which the face sticker special effect 130 is set, and the face sticker special effect 130 may also be rendered in the plurality of head special effect materials 120 added to the image 100 based on the head segmentation result.
5) Combine the head special effect material with a background sticker. For example, referring to fig. 7C, the hierarchical relationship between the sticker 140 and the head special effect material 150 may be defined, and the sticker 140 may be placed on the layer below the head special effect material 150 as the background of the head segmentation result, thereby combining the head segmentation result with the background sticker.
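The layer hierarchy described in item 5) can be sketched as bottom-to-top compositing, where later (upper) layers overwrite earlier ones. The layer representation below, a dict holding a pixel grid and an offset, is an illustrative assumption:

```python
def composite_layers(layers, width, height):
    """Flatten an ordered list of layers (background sticker first, head
    special effect material last) into a single width-by-height frame.
    Each layer is a dict with a 2-D "pixels" grid and an (x, y) "offset";
    None pixels are transparent; out-of-bounds pixels are clipped."""
    frame = [[None] * width for _ in range(height)]
    for layer in layers:  # later (upper) layers overwrite earlier ones
        x, y = layer["offset"]
        for r, row in enumerate(layer["pixels"]):
            for c, px in enumerate(row):
                if px is not None and 0 <= y + r < height and 0 <= x + c < width:
                    frame[y + r][x + c] = px
    return frame
```

Swapping the order of two layers in the list is exactly the "move layer up/down" operation exposed by the image layer panel described below.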
Fig. 7D is a schematic diagram of the special effect editing interface of the special effect editing tool provided in an embodiment of the present application. As shown in fig. 7D, the interface includes an image layer panel 10, a canvas panel 20, a parameter setting panel 30, and a special effect preview area 40. The image layer panel allows each image layer in the special effect to be moved up or down; an editing page of the head special effect material can be one image layer. The canvas panel can display a reference template image for assisting the editing of the head special effect material, as well as the related materials of the edited head special effect material, and allows adjustment of the display state, display size, display position, and the like of the head in the key frames of the material and of the head special effect material. The parameter setting panel can set the display parameters of the head special effect materials, such as the display state, display size, and display position of the head in a key frame, the number of head special effect materials, and the trigger events of the head special effect materials. The special effect preview area allows the edited head special effect material to be previewed in real time.
In the special effect editing tool provided in the embodiment of the present application, the process of editing the special effect may include the following steps S701 to S702:
Step S701, click the control for adding a head special effect material in the parameter setting panel to add a head special effect material to be edited to the canvas panel; any added head special effect material can be deleted by clicking a delete button.
Step S702, the display position, the rotation angle and the display size of each key frame in the head special effect material are changed in the canvas panel.
In some embodiments, the time at which the head special effect material reaches a particular position during presentation may be set by defining the display positions of key frames at different key-frame times in the head special effect material.
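The key-frame display states described above (visibility, display position, display size, rotation angle, and their timing) can be sketched as linear interpolation between adjacent key frames. The field names and the interpolation scheme below are illustrative assumptions; the application does not fix a concrete data layout:

```python
from dataclasses import dataclass

@dataclass
class KeyFrame:
    time: float      # key-frame time in seconds
    visible: bool    # display state
    position: tuple  # (x, y) display position
    size: float      # display size as a uniform scale factor
    rotation: float  # rotation angle in degrees

def state_at(keyframes, t):
    """Return the display state of one head special effect material at
    playback time `t`, linearly interpolated from its key frames."""
    ks = sorted(keyframes, key=lambda k: k.time)
    if t <= ks[0].time:
        return ks[0]
    if t >= ks[-1].time:
        return ks[-1]
    for a, b in zip(ks, ks[1:]):
        if a.time <= t <= b.time:
            f = (t - a.time) / (b.time - a.time)
            lerp = lambda u, v: u + (v - u) * f
            return KeyFrame(t, a.visible,
                            (lerp(a.position[0], b.position[0]),
                             lerp(a.position[1], b.position[1])),
                            lerp(a.size, b.size),
                            lerp(a.rotation, b.rotation))
```

Setting a key frame's display position at a given time thus determines exactly when the material reaches that position during playback.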
In some embodiments, trigger events, such as opening the mouth, blinking, or shaking the head, may be set for the head special effect material by performing an event-trigger setting operation on the head special effect material.
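Event triggering can be sketched as filtering, in each frame, the materials whose trigger event was detected; a material with no trigger event is always shown. The event names and the material structure below are illustrative assumptions:

```python
def triggered_materials(materials, detected_events):
    """Return the names of head special effect materials to show in the
    current frame: those whose trigger event (e.g. 'open_mouth', 'blink',
    'shake_head') is among the detected events, plus any material that
    has no trigger event configured."""
    shown = []
    for m in materials:
        event = m.get("trigger_event")
        if event is None or event in detected_events:
            shown.append(m["name"])
    return shown
```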
In addition, in implementation, the image including the head acquired in real time may correspond to the first image or the third image in the foregoing embodiment, and the head segmentation result may correspond to the first region image in the foregoing embodiment.
Based on the foregoing embodiments, the present application provides a data generating apparatus. The apparatus includes units, and the modules included in those units, which may be implemented by a processor in a computer device, or alternatively by specific logic circuits. In implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 8 is a schematic structural diagram of a data generating apparatus according to an embodiment of the present application, and as shown in fig. 8, the data generating apparatus 800 includes: a first display module 810, a first editing module 820, and a generating module 830, wherein:
the first display module 810 is configured to obtain a first part image, and display at least one first part special effect material to be edited on the first image based on the first part image;
a first editing module 820, configured to obtain a first special effect display parameter in response to a first special effect editing operation performed on at least one first portion special effect material;
a generating module 830, configured to generate a special effect data packet based on the first special effect display parameter; the special effect data packet is used for presenting at least one second part special effect material corresponding to the first part special effect material on a second image based on the first special effect display parameter and the second part image under the running condition; wherein the second part image and the first part image contain the same type of part.
In some embodiments, the first image is a reference template image; the first display module is further configured to: determining at least one first part special effect material to be edited based on the first part image; and displaying the reference template image and at least one first part special effect material in an editing operation area.
In some embodiments, the apparatus further comprises: and the second display module is used for presenting at least one third part special effect material corresponding to the first part special effect material on the preview image based on the first special effect display parameter.
In some embodiments, the second display module is further configured to: acquire a third part image from the fourth image in response to a special effect preview operation performed in the special effect preview area, wherein the fourth image is the same as or related to the preview image, and the third part image and the first part image contain the same type of part; and display, in real time, at least one third part special effect material corresponding to the first part special effect material in the preview image based on the third part image and the first special effect display parameter.
In some embodiments, the first display module is further configured to: acquiring the first part image from a third image in response to a part special effect adding operation performed on the first image in an editing operation area; wherein the third image is the same or related image as the first image.
In some embodiments, the third image comprises at least one image frame; the first part special effect material comprises at least one special effect animation frame; the first display module is further configured to: acquiring the first part image from the current image frame of the third image; the first display module is further configured to: and displaying one special effect animation frame of at least one first part special effect material to be edited on the first image based on the first part image.
In some embodiments, each first part special effect material comprises at least one key frame; the first special effect editing operation comprises a special effect material quantity setting operation and/or a frame parameter setting operation, and the first special effect display parameters comprise the quantity of first part special effect materials and/or the key frame display parameters of each first part special effect material. The first editing module is further configured to: acquire the quantity of first part special effect materials in response to the quantity setting operation performed on the first part special effect materials; and/or, for each first part special effect material, acquire the key frame display parameters of the first part special effect material in response to a frame parameter setting operation performed on at least one key frame of that material; wherein the key frame display parameters of the first part special effect material include display parameters of at least one key frame, and the display parameters of each key frame include at least one of: visible state, display position, display size, rotation angle.
In some embodiments, where the first special effect editing operation comprises a looping display setting operation, the first special effect display parameters comprise a looping display parameter for each first part special effect material; the apparatus further comprises: a third display module, configured to cyclically display, on the preview image, a third part special effect material corresponding to each of the at least one first part special effect material based on the looping display parameter of that first part special effect material.
In some embodiments, where the first special effect editing operation comprises a trigger event editing operation, the first special effect display parameters comprise a trigger event for each first part special effect material; the apparatus further comprises: a fourth display module, configured to, for each first part special effect material in the at least one first part special effect material, present a third part special effect material corresponding to the first part special effect material on the preview image based on the trigger event of that first part special effect material.
In some embodiments, the first editing module is further configured to: acquire at least one first special effect material combination in response to a combination operation performed on at least two first part special effect materials; and acquire a third special effect display parameter of each first special effect material combination in response to a combined special effect editing operation performed on that combination. The apparatus further comprises: a fifth display module, configured to, for each first special effect material combination in the at least one first special effect material combination, present, based on the third special effect display parameter of the first special effect material combination, at least one third special effect material combination corresponding to the first special effect material combination on the preview image; each third special effect material combination includes at least two third part special effect materials respectively corresponding to the first part special effect materials in the first special effect material combination.
In some embodiments, the first editing module is further configured to: acquire at least two selected first part special effect materials to be combined in response to a special effect material selection operation performed in the editing operation area; and acquire at least one combined first special effect material combination in response to a combination operation performed on the selected at least two first part special effect materials to be combined.
In some embodiments, the apparatus further comprises: the second obtaining module is used for responding to a second special effect editing operation carried out on the first image and obtaining a second special effect display parameter; a sixth display module, configured to present a first special effect on the first image based on the second special effect display parameter; the first display module is further configured to: acquiring a first part image from the first image presenting the first special effect in response to a part special effect adding operation performed on the first image presenting the first special effect.
In some embodiments, the first location image includes a location that is a head, the first location effect material includes head effect material, the location effect addition operation includes a head effect addition operation, and the first effect includes a face beautification effect; the first display module is further configured to: displaying at least one head special effect material to be edited on the first image presenting the face beautification special effect based on the first part image; and the face in each head special effect material to be edited presents the face beautifying special effect.
In some embodiments, the apparatus further comprises: the updating module is used for responding to background setting operation carried out on at least one first part special effect material and updating the first special effect display parameter; and the seventh display module is used for displaying the set background on the preview image based on the first special effect display parameters, and presenting third part special effect materials respectively corresponding to each first part special effect material on the preview image provided with the background.
The embodiment of the application provides an image processing apparatus. The apparatus includes units, and the modules included in those units, which may be implemented by a processor in a computer device, or alternatively by specific logic circuits. In implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 9 is a schematic diagram of a composition structure of an image processing apparatus according to an embodiment of the present application, and as shown in fig. 9, the image processing apparatus 900 includes: a third obtaining module 910, a fourth obtaining module 920, a fifth obtaining module 930, and an eighth displaying module 940, wherein:
a third obtaining module 910, configured to obtain a second image to be processed;
a fourth obtaining module 920, configured to determine, in response to a special effect selection operation performed on the second image, a first special effect display parameter and a to-be-recognized portion based on an operating special effect data packet;
a fifth obtaining module 930, configured to obtain a second part image from the fifth image based on the part to be identified; wherein the fifth image is the same or related image as the second image;
an eighth display module 940, configured to present at least one second part special effect material on the second image based on the first special effect display parameter and the second part image.
The above description of the apparatus embodiments, similar to the above description of the method embodiments, has similar beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
The application relates to the field of Augmented Reality (AR). By acquiring image information of a target object in a real environment and detecting or recognizing its relevant features, states, and attributes by means of various vision-related algorithms, an AR effect combining the virtual and the real, matched to the specific application, is obtained. For example, the target object may be a face, limb, gesture, or action associated with a human body, or a marker associated with an object, or a sand table, display area, or display item associated with a venue or place. The vision-related algorithms may involve visual localization, simultaneous localization and mapping (SLAM), three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking of objects, and pose or depth detection of objects. The specific application may involve not only interactive scenes related to real scenes or articles, such as navigation, explanation, reconstruction, and virtual effect overlay display, but also person-related special effect processing, such as makeup beautification, body beautification, special effect display, and virtual model display. The detection or recognition of the relevant features, states, and attributes of the target object can be implemented through a convolutional neural network, that is, a network model obtained by model training based on a deep learning framework.
It should be noted that, in the embodiment of the present application, if the data generation method is implemented in the form of a software functional module and sold or used as a standalone product, the data generation method may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the related art may be embodied in the form of a software product stored in a storage medium, and including several instructions for enabling an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, the embodiment of the present application provides a computer device, which includes a memory and a processor, where the memory stores a computer program that can be run on the processor, and the processor implements the steps in the above method when executing the program.
Correspondingly, the embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program realizes the steps of the above method when being executed by a processor.
Accordingly, embodiments of the present application provide a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, and when the computer program is read and executed by a computer, the computer program implements the steps of the method.
Here, it should be noted that: the above description of the storage medium, device and computer program product embodiments, similar to the description of the method embodiments above, has similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium, device and computer program product of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be noted that fig. 10 is a schematic hardware entity diagram of a computer device in an embodiment of the present application, and as shown in fig. 10, the hardware entity of the computer device 1000 includes: a processor 1001, a communication interface 1002, and a memory 1003, wherein,
the processor 1001 generally controls the overall operation of the computer device 1000.
The communication interface 1002 may enable the computer device to communicate with other terminals or servers via a network.
The memory 1003 is configured to store instructions and applications executable by the processor 1001, and may also buffer data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by the processor 1001 and the modules in the computer device 1000; it may be implemented by a FLASH memory or a Random Access Memory (RAM).
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, if the integrated units described above are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of this application may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of this application. The aforementioned storage medium includes a removable storage device, a ROM, a magnetic disk, an optical disk, or various other media that can store program code.
The above description covers only embodiments of this application, but the protection scope of this application is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed in this application shall fall within the protection scope of this application.

Claims (19)

1. A method of data generation, the method comprising:
acquiring a first part image, and displaying at least one first part special effect material to be edited on a first image based on the first part image;
acquiring a first special effect display parameter in response to a first special effect editing operation performed on the at least one first part special effect material; and
generating a special effect data packet based on the first special effect display parameter; wherein, when run, the special effect data packet is used for presenting at least one second part special effect material corresponding to the first part special effect material on a second image based on the first special effect display parameter and a second part image; and the second part image and the first part image contain the same type of part.
2. The method of claim 1, wherein the first image is a reference template image, and the displaying at least one first part special effect material to be edited on the first image based on the first part image comprises:
determining at least one first part special effect material to be edited based on the first part image; and
displaying the reference template image and the at least one first part special effect material in an editing operation area.
3. The method of claim 1 or 2, wherein the method further comprises:
presenting at least one third part special effect material corresponding to the first part special effect material on a preview image based on the first special effect display parameter.
4. The method of claim 3, wherein the presenting at least one third part special effect material corresponding to the first part special effect material on a preview image based on the first special effect display parameter comprises:
acquiring a third part image from a fourth image in response to a special effect preview operation performed in a special effect preview area; wherein the fourth image is the same image as, or an image related to, the preview image, and the third part image and the first part image contain the same type of part; and
displaying, in real time, at least one third part special effect material corresponding to the first part special effect material in the preview image based on the third part image and the first special effect display parameter.
5. The method of any one of claims 1 to 4, wherein the acquiring a first part image comprises:
acquiring the first part image from a third image in response to a part special effect adding operation performed on the first image in an editing operation area; wherein the third image is the same image as, or an image related to, the first image.
6. The method of claim 5, wherein the third image comprises at least one image frame, and the first part special effect material comprises at least one special effect animation frame;
the acquiring the first part image from the third image comprises: acquiring the first part image from a current image frame of the third image; and
the displaying at least one first part special effect material to be edited on the first image based on the first part image comprises: displaying one special effect animation frame of the at least one first part special effect material to be edited on the first image based on the first part image.
7. The method of any one of claims 1 to 6, wherein each first part special effect material comprises at least one key frame; the first special effect editing operation comprises a special effect material number setting operation and/or a frame parameter setting operation, and the first special effect display parameter comprises the number of the first part special effect materials and/or key frame display parameters of each first part special effect material;
the acquiring a first special effect display parameter in response to a first special effect editing operation performed on at least one first part special effect material comprises:
acquiring the number of the first part special effect materials in response to the special effect material number setting operation; and/or,
for each first part special effect material, acquiring key frame display parameters of the first part special effect material in response to a frame parameter setting operation performed on at least one key frame of the first part special effect material; wherein the key frame display parameters of the first part special effect material include display parameters of at least one key frame of the first part special effect material, and the display parameters of each key frame include at least one of: a visible state, a display position, a display size, or a rotation angle.
8. The method of any one of claims 1 to 7, wherein:
in a case where the first special effect editing operation comprises a loop display setting operation, the first special effect display parameter comprises loop display parameters of each first part special effect material, and the method further comprises: for each first part special effect material in the at least one first part special effect material, cyclically displaying a third part special effect material corresponding to the first part special effect material on a preview image based on the loop display parameters of the first part special effect material; and
in a case where the first special effect editing operation comprises a trigger event editing operation, the first special effect display parameter comprises a trigger event of each first part special effect material, and the method further comprises: for each first part special effect material in the at least one first part special effect material, presenting a third part special effect material corresponding to the first part special effect material on a preview image based on the trigger event of the first part special effect material.
9. The method of any one of claims 1 to 8, wherein the acquiring a first special effect display parameter in response to a first special effect editing operation performed on at least one first part special effect material comprises:
acquiring at least one first special effect material combination in response to a combination operation performed on at least two first part special effect materials; and
acquiring a third special effect display parameter of each first special effect material combination in response to a combined special effect editing operation performed on the first special effect material combination;
the method further comprises: for each first special effect material combination in the at least one first special effect material combination, presenting at least one third special effect material combination corresponding to the first special effect material combination on a preview image based on the third special effect display parameter of the first special effect material combination; wherein each third special effect material combination includes at least two third part special effect materials respectively corresponding to the first part special effect materials in the first special effect material combination.
10. The method of claim 9, wherein the acquiring at least one first special effect material combination in response to a combination operation performed on at least two first part special effect materials comprises:
acquiring at least two selected first part special effect materials to be combined in response to a special effect material selection operation performed in an editing operation area; and
acquiring at least one combined first special effect material combination in response to a combination operation performed on the selected at least two first part special effect materials to be combined.
11. The method of any one of claims 1 to 10, wherein before the displaying at least one first part special effect material to be edited on the first image based on the first part image, the method further comprises:
acquiring a second special effect display parameter in response to a second special effect editing operation performed on the first image; and
presenting a first special effect on the first image based on the second special effect display parameter;
wherein the acquiring a first part image comprises: acquiring the first part image from the first image presenting the first special effect in response to a part special effect adding operation performed on the first image presenting the first special effect.
12. The method of claim 11, wherein the part contained in the first part image is a head, the first part special effect material comprises a head special effect material, the part special effect adding operation comprises a head special effect adding operation, and the first special effect comprises a face beautification special effect;
the displaying at least one first part special effect material to be edited on the first image based on the first part image comprises:
displaying at least one head special effect material to be edited on the first image presenting the face beautification special effect based on the first part image; wherein a face in each head special effect material to be edited presents the face beautification special effect.
13. The method of any one of claims 1 to 12, wherein before the generating a special effect data packet based on the first special effect display parameter, the method further comprises:
updating the first special effect display parameter in response to a background setting operation performed on at least one first part special effect material; and
displaying a set background on a preview image based on the updated first special effect display parameter, and presenting, on the preview image provided with the background, a third part special effect material corresponding to each first part special effect material.
14. An image processing method, the method comprising:
acquiring a second image to be processed;
determining a first special effect display parameter and a part to be recognized based on a running special effect data packet in response to a special effect selection operation performed on the second image;
acquiring a second part image from a fifth image based on the part to be recognized; wherein the fifth image is the same image as, or an image related to, the second image; and
presenting at least one second part special effect material on the second image based on the first special effect display parameter and the second part image.
15. A data generation apparatus, comprising:
a first display module, configured to acquire a first part image, and display at least one first part special effect material to be edited on a first image based on the first part image;
a first editing module, configured to acquire a first special effect display parameter in response to a first special effect editing operation performed on the at least one first part special effect material; and
a generating module, configured to generate a special effect data packet based on the first special effect display parameter; wherein, when run, the special effect data packet is used for presenting at least one second part special effect material corresponding to the first part special effect material on a second image based on the first special effect display parameter and a second part image; and the second part image and the first part image contain the same type of part.
16. An image processing apparatus, comprising:
a third acquiring module, configured to acquire a second image to be processed;
a fourth acquiring module, configured to determine, in response to a special effect selection operation performed on the second image, a first special effect display parameter and a part to be recognized based on a running special effect data packet;
a fifth acquiring module, configured to acquire a second part image from a fifth image based on the part to be recognized; wherein the fifth image is the same image as, or an image related to, the second image; and
an eighth display module, configured to present at least one second part special effect material on the second image based on the first special effect display parameter and the second part image.
17. A computer device, comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 14 when executing the program.
18. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the method of any one of claims 1 to 14.
19. A computer program product, comprising a non-transitory computer-readable storage medium storing a computer program which, when read and executed by a computer, implements the steps of the method of any one of claims 1 to 14.
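The flow claimed above, editing display parameters into a special effect data packet (claims 1 and 7) and later applying the packet to a new image (claim 14), can be illustrated with a small sketch. This is an illustrative reconstruction only, not the patented implementation; all class and function names (`KeyFrameParams`, `PartEffectMaterial`, `generate_effect_packet`, `apply_effect_packet`, `detect_part`) are hypothetical, and the packet layout shown is an assumption.

```python
import json
from dataclasses import dataclass, field, asdict
from typing import List

# Hypothetical per-key-frame display parameters (cf. claim 7): visible
# state, display position, display size, and rotation angle.
@dataclass
class KeyFrameParams:
    visible: bool = True
    position: tuple = (0.0, 0.0)   # normalized (x, y) on the target part
    size: tuple = (1.0, 1.0)       # scale relative to the detected part
    rotation: float = 0.0          # degrees

# Hypothetical edited parameters for one first part special effect material.
@dataclass
class PartEffectMaterial:
    material_id: str
    part_type: str                 # e.g. "head" (cf. claim 12)
    key_frames: List[KeyFrameParams] = field(default_factory=list)

def generate_effect_packet(materials: List[PartEffectMaterial]) -> bytes:
    """Edit-time side (cf. claim 1): serialize the first special effect
    display parameters into a special effect data packet. The packet also
    records which part types must be recognized at run time."""
    packet = {
        "parts_to_recognize": sorted({m.part_type for m in materials}),
        "materials": [asdict(m) for m in materials],
    }
    return json.dumps(packet).encode("utf-8")

def apply_effect_packet(packet_bytes: bytes, detect_part):
    """Run-time side (cf. claim 14): read the packet, detect each required
    part in the second image, and return (part_region, material_id) pairs
    for the renderer."""
    packet = json.loads(packet_bytes.decode("utf-8"))
    rendered = []
    for part_type in packet["parts_to_recognize"]:
        region = detect_part(part_type)   # e.g. a face/head detector
        if region is None:
            continue                      # part not present in this image
        for m in packet["materials"]:
            if m["part_type"] == part_type:
                rendered.append((region, m["material_id"]))
    return rendered
```

For example, a sticker edited with two key frames on the head would be packaged once at edit time and then rendered on any second image in which a head is detected; the part detector itself (`detect_part`) and the actual drawing of the material are outside the scope of this sketch.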
CN202111016566.XA 2021-08-31 2021-08-31 Data generation method, data generation device, image processing method, image processing device, equipment and storage medium Pending CN113760161A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111016566.XA CN113760161A (en) 2021-08-31 2021-08-31 Data generation method, data generation device, image processing method, image processing device, equipment and storage medium
PCT/CN2022/127583 WO2023030550A1 (en) 2021-08-31 2022-10-26 Data generation method, image processing method, apparatuses, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111016566.XA CN113760161A (en) 2021-08-31 2021-08-31 Data generation method, data generation device, image processing method, image processing device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113760161A 2021-12-07

Family

ID=78792291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111016566.XA Pending CN113760161A (en) 2021-08-31 2021-08-31 Data generation method, data generation device, image processing method, image processing device, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113760161A (en)
WO (1) WO2023030550A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023030550A1 (en) * 2021-08-31 2023-03-09 上海商汤智能科技有限公司 Data generation method, image processing method, apparatuses, device, and storage medium
CN114880057A (en) * 2022-04-22 2022-08-09 北京三快在线科技有限公司 Image display method, image display device, terminal, server, and storage medium
CN116777940A (en) * 2023-08-18 2023-09-19 腾讯科技(深圳)有限公司 Data processing method, device, equipment and storage medium
CN116777940B (en) * 2023-08-18 2023-11-21 腾讯科技(深圳)有限公司 Data processing method, device, equipment and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116896649B (en) * 2023-09-11 2024-01-19 北京达佳互联信息技术有限公司 Live interaction method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108388434A (en) * 2018-02-08 2018-08-10 北京市商汤科技开发有限公司 The generation of special efficacy program file packet and special efficacy generation method and device, electronic equipment
CN108536790A (en) * 2018-03-30 2018-09-14 北京市商汤科技开发有限公司 The generation of sound special efficacy program file packet and sound special efficacy generation method and device
CN111627086A (en) * 2020-06-03 2020-09-04 上海商汤智能科技有限公司 Head portrait display method and device, computer equipment and storage medium
CN112165632A (en) * 2020-09-27 2021-01-01 北京字跳网络技术有限公司 Video processing method, device and equipment
CN113240777A (en) * 2021-04-25 2021-08-10 北京达佳互联信息技术有限公司 Special effect material processing method and device, electronic equipment and storage medium
CN114432696A (en) * 2021-12-22 2022-05-06 网易(杭州)网络有限公司 Special effect configuration method and device of virtual object, storage medium and electronic equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108259496B (en) * 2018-01-19 2021-06-04 北京市商汤科技开发有限公司 Method and device for generating special-effect program file package and special effect, and electronic equipment
CN113658298A (en) * 2018-05-02 2021-11-16 北京市商汤科技开发有限公司 Method and device for generating special-effect program file package and special effect
CN110287368B (en) * 2019-05-31 2021-10-08 上海萌鱼网络科技有限公司 Short video template design drawing generation device and short video template generation method
CN110503724A (en) * 2019-08-19 2019-11-26 北京猫眼视觉科技有限公司 A kind of AR expression resource construction management system and method based on human face characteristic point
CN113055611B (en) * 2019-12-26 2022-09-02 北京字节跳动网络技术有限公司 Image processing method and device
CN113760161A (en) * 2021-08-31 2021-12-07 北京市商汤科技开发有限公司 Data generation method, data generation device, image processing method, image processing device, equipment and storage medium



Also Published As

Publication number Publication date
WO2023030550A1 (en) 2023-03-09

Similar Documents

Publication Publication Date Title
US9626788B2 (en) Systems and methods for creating animations using human faces
JP7112508B2 (en) Animation stamp generation method, its computer program and computer device
CN113760161A (en) Data generation method, data generation device, image processing method, image processing device, equipment and storage medium
WO2022001593A1 (en) Video generation method and apparatus, storage medium and computer device
KR101535579B1 (en) Augmented reality interaction implementation method and system
KR101737725B1 (en) Content creation tool
CN114930399A (en) Image generation using surface-based neurosynthesis
CN111080759B (en) Method and device for realizing split mirror effect and related product
CN113709549A (en) Special effect data packet generation method, special effect data packet generation device, special effect data packet image processing method, special effect data packet image processing device, special effect data packet image processing equipment and storage medium
CN116601675A (en) Virtual garment fitting
WO2023279705A1 (en) Live streaming method, apparatus, and system, computer device, storage medium, and program
CN109035415B (en) Virtual model processing method, device, equipment and computer readable storage medium
WO2020042786A1 (en) Interactive method and device based on augmented reality
CN108134945B (en) AR service processing method, AR service processing device and terminal
CN110928411B (en) AR-based interaction method and device, storage medium and electronic equipment
CN113766296A (en) Live broadcast picture display method and device
WO2024077909A1 (en) Video-based interaction method and apparatus, computer device, and storage medium
CN114332374A (en) Virtual display method, equipment and storage medium
CN112308977B (en) Video processing method, video processing device, and storage medium
CN113965773A (en) Live broadcast display method and device, storage medium and electronic equipment
CN116917938A (en) Visual effect of whole body
CN114116086A (en) Page editing method, device, equipment and storage medium
CN113727039B (en) Video generation method and device, electronic equipment and storage medium
CN117136381A (en) whole body segmentation
US11430158B2 (en) Intelligent real-time multiple-user augmented reality content management and data analytics system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination