CN114003163A - Image processing method and apparatus, storage medium, and electronic device

Info

Publication number
CN114003163A
Authority
CN
China
Prior art keywords
target
transparency
smearing
image
rendering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111258548.2A
Other languages
Chinese (zh)
Other versions
CN114003163B (en)
Inventor
袁佳平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202111258548.2A priority Critical patent/CN114003163B/en
Publication of CN114003163A publication Critical patent/CN114003163A/en
Application granted granted Critical
Publication of CN114003163B publication Critical patent/CN114003163B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 - Input arrangements for video game devices
    • A63F13/21 - Input arrangements for video game devices characterised by their sensors, purposes or types
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 - Methods for processing data by generating or executing the game program
    • A63F2300/66 - Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6615 - Methods for processing data by generating or executing the game program for rendering three dimensional images using models with different levels of detail [LOD]

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses an image processing method and apparatus, a storage medium, and an electronic device. The method includes: displaying, in a display interface, a part region image corresponding to a target body part of a target virtual object to be processed; in response to a sliding operation performed on the part region image, acquiring drawing processing parameters of a target smearing element that match the sliding track information of the sliding operation; and drawing the target smearing element on the part region image according to the drawing processing parameters.

Description

Image processing method and apparatus, storage medium, and electronic device
Technical Field
The present invention relates to the field of computers, and in particular, to an image processing method and apparatus, a storage medium, and an electronic device.
Background
To allow the virtual characters controlled by different players to take on diverse, differentiated appearances, many game applications pre-configure a variety of smearing element options for players to select, such as blush elements of different colors or positions, lipstick elements of different colors, tattoo elements of different shapes, and so on.
However, the smearing elements provided in the related art are usually limited to a few fixed forms, and as the number of players grows, more and more players end up selecting the same smearing element; for example, different players may select a blush element of the same color and position. That is, because the styles of the smearing elements are limited, the degree of freedom is low when the image corresponding to a virtual character is smeared.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the present invention provide an image processing method and apparatus, a storage medium, and an electronic device, which at least solve the technical problem of a low degree of freedom in the prior art when smearing an image corresponding to a virtual character.
According to an aspect of an embodiment of the present invention, there is provided an image processing method including: displaying a part area image corresponding to a target body part of a target virtual object to be processed in a display interface; acquiring a drawing processing parameter of a target smearing element matched with sliding track information of a sliding operation in response to the sliding operation performed on the part area image; and drawing the target smearing elements on the part area image according to the drawing processing parameters.
According to another aspect of the embodiments of the present invention, there is also provided an image processing apparatus including: the display unit is used for displaying a part area image corresponding to a target body part of a target virtual object to be processed in a display interface; an acquisition unit configured to acquire, in response to a slide operation performed on the part region image, a drawing processing parameter of a target smear element that matches slide trajectory information of the slide operation; and the drawing unit is used for drawing the target smearing elements on the part area image according to the drawing processing parameters.
According to still another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium in which a computer program is stored, wherein the computer program is configured to execute the above-mentioned image processing method when running.
According to yet another aspect of embodiments herein, there is provided a computer program product comprising a computer program/instructions stored in a computer readable storage medium. The processor of the computer device reads the computer program/instruction from the computer-readable storage medium, and the processor executes the computer program/instruction, so that the computer device executes the image processing method as above.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device, including a memory and a processor, the memory having a computer program stored therein, the processor being configured to execute the image processing method described above through the computer program.
In the embodiment of the invention, the part area image corresponding to the target body part of the target virtual object to be processed is displayed in the display interface, the drawing processing parameter of the target smearing element matched with the sliding track information of the sliding operation is acquired in response to the sliding operation executed on the part area image, and then the target smearing element is drawn on the part area image according to the drawing processing parameter, so that the technical effect of flexibly drawing the smearing element according to the will of a player is realized, and the technical problem of low degree of freedom in the case of smearing the image in the prior art is solved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of a hardware environment for an alternative image processing method according to an embodiment of the invention;
FIG. 2 is a flow diagram of an alternative image processing method according to an embodiment of the invention;
FIG. 3 is a schematic diagram of an alternative image processing method according to an embodiment of the invention;
FIG. 4 is a schematic diagram of another alternative image processing method according to an embodiment of the invention;
FIG. 5 is a schematic diagram of yet another alternative image processing method according to an embodiment of the invention;
FIG. 6 is a schematic diagram of yet another alternative image processing method according to an embodiment of the invention;
FIG. 7 is a schematic diagram of yet another alternative image processing method according to an embodiment of the invention;
FIG. 8 is a schematic diagram of yet another alternative image processing method according to an embodiment of the invention;
FIG. 9 is a schematic diagram of yet another alternative image processing method according to an embodiment of the invention;
FIG. 10 is a schematic diagram of yet another alternative image processing method according to an embodiment of the invention;
FIG. 11 is a schematic diagram of yet another alternative image processing method according to an embodiment of the invention;
FIG. 12 is a flow diagram of another alternative image processing method according to an embodiment of the invention;
FIG. 13 is a block diagram of an alternative image processing apparatus according to an embodiment of the present invention;
fig. 14 is a schematic structural diagram of an alternative electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of the embodiments of the present invention, an image processing method is provided. Optionally, as an optional implementation, the image processing method may be applied to, but is not limited to, an image processing system in a hardware environment as shown in fig. 1. The image processing system may include, but is not limited to, a terminal device 102, a network 104, a server 106, and a database 108. A target client (such as the client interface shown in fig. 1, which may be a client for applying makeup to a virtual character object) runs in the terminal device 102. The terminal device 102 includes a human-computer interaction screen, a processor, and a memory. The human-computer interaction screen is used for displaying a part region image corresponding to a target body part of a target virtual object to be processed (such as the face region image of the virtual character shown in fig. 1), and is also used for providing a human-computer interaction interface to receive the human-computer interaction operations performed by the user. The processor is configured to generate an interaction instruction in response to the human-computer interaction operation and send the interaction instruction to the server 106. The memory is used for storing relevant makeup material data, such as face-pinching materials, makeup effect materials, and various makeup tools provided by the client. The terminal device 109 also includes a human-computer interaction screen, a processor, and a memory, wherein the human-computer interaction screen is used for displaying the makeup interface of the user client.
The specific process includes the following steps: in step S102, a part region image corresponding to a target body part of a target virtual object to be processed is displayed in a display interface in the client running in the terminal device 102; in step S104, in response to the sliding operation performed on the part region image, drawing processing parameters of the target smearing element matching the sliding track information of the sliding operation are obtained; in step S106, the terminal device 102 sends the part region image and the drawing processing parameters to the server 106 via the network 104. The server 106 executes steps S108 to S110: it retrieves the corresponding material elements from the database based on the part region image and the drawing processing parameters, and sends the corresponding material elements to the terminal device 102 via the network. Finally, in step S112, the terminal device 102 draws the target smearing element on the part region image according to the drawing processing parameters.
As another alternative, when the terminal device 102 has a relatively high computing processing capability, the step S108 may also be performed by the terminal device 102. Here, this is an example, and this is not limited in this embodiment.
Optionally, in this embodiment, the terminal device may be a terminal device configured with a target client, and may include, but is not limited to, at least one of the following: mobile phones (such as Android phones, iOS phones, etc.), notebook computers, tablet computers, palm computers, MID (Mobile Internet Devices), PAD, desktop computers, smart televisions, etc. The target client may be a video client, an instant messaging client, a browser client, an educational client, etc. that supports providing a shooting game task. Such networks may include, but are not limited to: a wired network, a wireless network, wherein the wired network comprises: a local area network, a metropolitan area network, and a wide area network, the wireless network comprising: bluetooth, WIFI, and other networks that enable wireless communication. The server may be a single server, a server cluster composed of a plurality of servers, or a cloud server. The above is merely an example, and this is not limited in this embodiment.
In the embodiment of the invention, the part area image corresponding to the target body part of the target virtual object to be processed is displayed in the display interface, the drawing processing parameter of the target smearing element matched with the sliding track information of the sliding operation is acquired in response to the sliding operation executed on the part area image, and then the target smearing element is drawn on the part area image according to the drawing processing parameter, so that the technical effect of flexibly drawing the smearing element according to the will of a player is realized, and the technical problem of low degree of freedom in the case of smearing the image in the prior art is solved.
As an alternative implementation, as shown in fig. 2, the image processing method includes the following steps:
s202, displaying a part area image corresponding to a target body part of a target virtual object to be processed in a display interface;
s204, in response to the sliding operation executed on the part area image, obtaining drawing processing parameters of the target smearing element matched with the sliding track information of the sliding operation;
and S206, drawing the target smearing element on the part area image according to the drawing processing parameters.
It is understood that the display interface may be an interface provided to a client end of a user for performing a simulated painting operation, wherein the target virtual object may be an avatar, or the like, which is not limited herein. The target body part may be understood as a body part of the target virtual object, such as a face, a hand, a leg, and the like, and is not limited thereto.
It should be further explained that the part region image corresponding to the target body part may be displayed by cropping out the image of the virtual target body part and displaying it enlarged in response to the user's selection from the options provided by the client, so as to make it easier for the user to perform the smearing operation on the part region image and to improve the image display efficiency when responding to the user's smearing operation.
It should be further explained that the sliding operation may be a long-press sliding operation performed by a user on a mobile terminal, such as a mobile phone, or a moving operation performed by a mouse on a computer terminal, which is not limited herein. The sliding track information may include the start position of the sliding track, the direction of the sliding track, the degree of sliding force, and other relevant information, and is used as a basis for matching the drawing processing parameters. Meanwhile, the drawing processing parameters are used for determining the display mode of the target smearing element.
Optionally, the target painting element is an element selected for displaying in the region image, and may be, but is not limited to, a graffiti line, a graffiti pattern, and other optional graffiti elements, and may also be a makeup element such as a lipstick, an eye shadow, and a blush, which are not limited herein. It is understood that the type of the target smearing element may be set according to the pre-selection of the user, or may be set by default according to the application scenario without any selection by the user.
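To make the data involved in the above steps concrete, the following TypeScript sketch shows one possible shape of the sliding track information and the drawing processing parameters; the interface and field names are illustrative assumptions and are not defined by this embodiment.

```typescript
// Illustrative only: the patent does not define concrete data structures,
// so these interfaces are assumptions used in the later sketches.
interface SlideTrackInfo {
  startX: number;   // trajectory start point (px)
  startY: number;
  currentX: number; // current touch/mouse position (px)
  currentY: number;
  distance: number; // accumulated trajectory distance (px)
}

interface DrawParams {
  alphaImageIndex: number; // index into the transparency-gradient image sequence
  direction: number;       // smearing direction, e.g. an angle in radians
}
```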
The above embodiment is illustrated below with reference to fig. 3. As shown in (a) of fig. 3, in response to the user's touch operation on the face control 301 in the control list on the left side of the interface, a part region image 302 of the target virtual object, that is, a virtual male face image, is displayed in the middle of the interface. Then, in response to a selection operation in the toolbar list on the right side of the interface, when the user selects the makeup control 303, a makeup smearing function is provided. Next, in response to the sliding operation performed on the part region image, the target smearing element is drawn according to the drawing processing parameters of the target smearing element matching the sliding track information; as shown in (b) of fig. 3, a part region image 304 is displayed in the center of the interface, on which a "makeup graffiti" trace is displayed after the user's sliding operation.
The part area image corresponding to the target body part of the target virtual object to be processed is displayed in the display interface, then the drawing processing parameters of the target smearing element matched with the sliding track information of the sliding operation are obtained in response to the sliding operation executed on the part area image, and then the target smearing element is drawn on the part area image according to the drawing processing parameters, so that the technical effect that the smearing element can be drawn flexibly according to the will of a player is achieved, and the technical problem that the degree of freedom is low under the condition of smearing the image in the prior art is solved.
As an alternative, the acquiring, in response to the sliding operation performed on the part area image, the drawing processing parameter of the target paint element that matches the sliding trajectory information of the sliding operation includes:
s1, acquiring the track distance of the sliding track generated based on the sliding operation, wherein the sliding track information comprises the track distance;
and S2, determining the smearing transparency of the target smearing element according to the track distance, wherein the drawing processing parameters comprise the smearing transparency.
It is understood that the sliding track information may include position information of an initial point and a current point of the sliding track, and further may include distance information of the current track obtained by the position information of the two points. The above-mentioned track distance information may be in units of pixel points (px, i.e., pixels) in the client.
It should be further explained that this solution provides a smearing-track display effect in which a linear gradient is shown in real time according to the length of the smearing track. Taking the smearing display effect shown in fig. 4 as an example, the smearing effect is a "makeup graffiti" effect, and its display changes in real time as the length of the smearing trace changes. As can be observed in fig. 4, the longer the smearing distance, the lighter (more transparent) the smearing trace becomes. That is, this solution provides a method for determining the drawing processing parameters according to the track distance, where the drawing processing parameters include the target smearing transparency.
Through the embodiment provided by the application, the smearing transparency of the target smearing element is determined through the sliding track distance, so that the method for displaying gradually in real time according to the sliding track length is provided, a user can conveniently draw a smearing pattern in real time according to personal wishes, and the flexibility of drawing the target smearing element in response to the sliding operation of the user is improved.
As an optional solution, the determining the smearing transparency of the target smearing element according to the track distance includes:
s1, obtaining the transparency change amplitude of the target smearing element in the unit display length;
s2, determining a target transparency reference array for drawing the target smearing elements according to a first ratio between the track distance and the transparency change amplitude;
and S3, determining the transparency corresponding to each pixel point on the target transparency gradual change image recorded in the target transparency reference array as the smearing transparency.
Optionally, the method for obtaining the transparency change amplitude of the target smearing element within the unit display length may be directly obtaining a preset transparency change amplitude value from a server; or the setting can be directly carried out according to the individual requirements of the user.
As a preferable mode, the method for obtaining the variation range of the transparency of the target smearing element within the unit display length may further include:
s1, acquiring the total display length of the target smearing elements and the number of images in the transparency gradient image sequence configured for the target smearing elements;
s2, determining a second ratio between the total display length and the number of images as the transparency change width.
The number of images in the transparency-gradient image sequence of the target smearing element is explained below. In order to give the smearing element a linear gradient effect, a group of images representing a linear change of transparency needs to be prepared in advance. As shown in fig. 5, a sequence of 30 images with gradually changing transparency can be preset (only image 1, image 8, image 15, image 22, and image 30 are shown; the other images are omitted). The transparency of the pictures increases from top to bottom, and the higher the transparency, the lighter the displayed color. Different positions of the same sliding track are filled in real time with these 30 images of different transparency, so that the sliding track shows a linear gradient display effect. The number of images can be determined according to actual requirements: the more images there are, the finer the linear change process and the smoother the change in the display effect of the smearing element.
Next, "makeup graffiti" is taken as an example of the target smearing element. When the user selects "makeup graffiti" as the target smearing element and performs a smearing operation, the preset total display length of the "makeup graffiti" element (for example, 300px) and the preset number of images in the transparency-gradient image sequence matching the "makeup graffiti" element (for example, the 30 transparency-gradient images in fig. 5) are obtained. According to the method of this embodiment, the transparency change amplitude is the second ratio of the preset total display length of the "makeup graffiti" element to the preset number of transparency-gradient images, i.e., 300/30 = 10. That is, in this embodiment, when "makeup graffiti" is the target smearing element, the corresponding transparency change amplitude is 10, which means that if the "makeup graffiti" element is drawn by fusing the 30 images, the image used for the fusion drawing is replaced every time the track advances by 10px, so as to display a smooth gradient effect.
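As an illustration of the second ratio described above, the following sketch computes the transparency change amplitude from the example values in this embodiment (a total display length of 300px and 30 gradient images); the variable names are assumptions.

```typescript
// Example values taken from the text above.
const totalDisplayLength = 300; // px, total display length of the smearing element
const gradientImageCount = 30;  // images in the transparency-gradient sequence

// Transparency change amplitude: track length covered by each gradient image.
const transparencyChangeAmplitude = totalDisplayLength / gradientImageCount; // 10 px
```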
According to the embodiment, a group of pictures representing the transparency linear change process is preset, so that the pictures with the transparency linear change can be fused in the sliding track in real time in the process of responding to the sliding operation, the drawing pause caused by the fact that the transparency is adjusted through calculation in the sliding process of a user is avoided, and the drawing smoothness of the target smearing element is improved.
Optionally, the method for determining the target transparency reference array for drawing the target smearing element according to the first ratio between the track distance and the transparency change amplitude may be that, in the case of directly obtaining a preset transparency change amplitude in the server, the ratio between the track distance and the preset transparency change amplitude is directly obtained; or acquiring the ratio of the track distance to the transparency change amplitude value set by the user under the condition that the transparency change amplitude value is set according to the individual requirement of the user.
In a preferred embodiment, the method for determining the target transparency reference array for drawing the target smearing element according to the first ratio between the track distance and the transparency change amplitude may include:
s1, rounding up the first ratio between the track distance and the transparency change amplitude to obtain a target array sequence number;
and S2, determining the target transparency reference array according to the target array sequence number.
The above embodiment is explained with the schematic diagram shown in fig. 6. As shown in fig. 6, the description continues with "makeup graffiti" as the target smearing element, a total display length of 300px, and a preset transparency-gradient image sequence of 30 images matching the "makeup graffiti" element. During the user's sliding operation, at a first time the sliding track distance is 72px; in this case the first ratio is 72/10 = 7.2, which is rounded up to the target array sequence number 8, indicating that the 8th image in the transparency-gradient image sequence, namely image 8 in fig. 5, participates in the fusion drawing. At a second time, the sliding track distance becomes 220px, and the first ratio is 220/10 = 22, indicating that the 22nd image in the transparency-gradient image sequence, namely image 22 in fig. 5, is used for the fusion drawing. It is easily understood that the first time and the second time are not two consecutive moments, but two independent moments selected from a period of time for convenience of illustration. When the drawing ends, the sliding track distance is 280px, and the end of the track is drawn by fusing the 28th gradient image, image 28. The gradient images participating in the drawing of the target smearing element, and the sequence of gradient images participating in the drawing over the whole track, are determined in real time according to the above method, and the corresponding target transparency reference arrays are then determined one-to-one from the target array sequence numbers.
According to the embodiment, the first ratio between the track distance and the transparency change amplitude is rounded up to obtain the target array sequence number, the transparency gradient image for fusion drawing is determined according to the target array sequence number, and then the target transparency reference array corresponding to the determined transparency gradient image is obtained, so that the images in the transparency gradient image sequence are replaced in real time in the real-time change process of the track distance, and the technical effect of drawing the gradient daubing display elements in real time according to the user operation is achieved.
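The mapping from the track distance to the target array sequence number can be sketched as follows; the function name is an assumption, and the commented values reproduce the 72px, 220px, and 280px cases above.

```typescript
// Round the first ratio (track distance / transparency change amplitude) up
// to obtain the target array sequence number (1-based).
function gradientImageIndex(trackDistance: number, amplitude: number): number {
  return Math.ceil(trackDistance / amplitude);
}

// Examples from the text:
// gradientImageIndex(72, 10)  -> 8   (72/10 = 7.2, rounded up)
// gradientImageIndex(220, 10) -> 22
// gradientImageIndex(280, 10) -> 28
```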
In a preferred embodiment, before the displaying, in the display interface, the part region image corresponding to the target body part of the target virtual object to be processed, the method further includes:
s1, obtaining a transparency gradient image sequence configured for the target smearing element;
and S2, sequentially storing the transparency corresponding to each pixel point on each transparency gradient image in the transparency gradient image sequence into a transparency reference array.
It should be noted that the image display effect in the transparency-gradient image sequence shown in fig. 5 is only an example, and in actual operation, specific drawing may be performed according to actual needs, for example, uniform transparency setting may be performed in the images in the sequence, that is, the transparency of each pixel point in one image may be set as a uniform parameter value. In another alternative embodiment, the images in the transparency-gradient image sequence may be rendered with non-uniform transparency, i.e. the parameter value of the transparency of a pixel point in one image may vary as its position varies.
Specifically, as shown in fig. 5, the value range of the transparency of the pixel point may be set to be 0 to 1. In the case that the transparency of a certain pixel is 0, it indicates that the pixel is completely transparent (invisible). In the case that the transparency of a certain pixel is 1, it indicates that the pixel is completely opaque (seen most clearly). And when the transparency value of a certain pixel point is between 0 and 1, indicating that the pixel point is in a semitransparent state and showing a semitransparent effect. And then, aiming at each pixel point in the image, a transparency value can be obtained according to the coordinate position of the pixel point, the positions of all the pixel points in the image and the corresponding transparency values can be stored as a transparency reference array, and the array is directly called to perform secondary drawing or rendering under the condition that the image needs to be called.
Through the embodiment, the transparency values corresponding to the pixel points on each transparency gradient image in the transparency gradient image sequence are stored in one transparency reference array, so that the secondary drawing of the image can be realized by calling the corresponding transparency reference array under the condition that the image needs to be called for drawing, the subsequent image processing process is accelerated, the real-time transparency calculation is avoided, the performance problem caused by frequent calculation is avoided, and the efficiency of image processing for real-time drawing is improved.
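A minimal sketch of this pre-storing step is given below, assuming the gradient images are already loaded in a browser environment and using the standard Canvas getImageData call to read the alpha channel; the helper name is not from this embodiment.

```typescript
// Pre-store per-pixel transparency values for each image in the
// transparency-gradient sequence, so they can be reused without
// recomputing transparency during the sliding operation.
function buildTransparencyReferenceArrays(images: HTMLImageElement[]): Float32Array[] {
  return images.map((img) => {
    const canvas = document.createElement('canvas');
    canvas.width = img.width;
    canvas.height = img.height;
    const ctx = canvas.getContext('2d')!;
    ctx.drawImage(img, 0, 0);

    // Read RGBA pixel data and keep only the alpha channel, normalized to 0..1.
    const { data } = ctx.getImageData(0, 0, img.width, img.height);
    const alphas = new Float32Array(img.width * img.height);
    for (let i = 0; i < alphas.length; i++) {
      alphas[i] = data[i * 4 + 3] / 255;
    }
    return alphas;
  });
}
```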
As an alternative, the acquiring, in response to a slide operation performed on the part area image, drawing processing parameters of a target paint element that matches slide trajectory information of the slide operation further includes determining trajectory start point coordinates of a slide trajectory generated based on the slide operation, where the slide trajectory information includes the trajectory start point coordinates; and determining the smearing direction of the target smearing element according to the track starting point coordinates, wherein the drawing processing parameters comprise the smearing direction.
With the above embodiment, obtaining the drawing processing parameters may further include determining the coordinates of the trajectory starting point and determining the smearing direction of the smearing element according to those coordinates. That is to say, once the track starting point coordinates, the track end point coordinates, and the track distance are obtained, the direction of the track can be calculated from them, and the direction of the smearing element is then determined, thereby achieving the technical effect of improving the efficiency of image processing.
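One possible way to derive the smearing direction from the trajectory coordinates is sketched below; computing an angle with atan2 is an assumption, since this embodiment only states that the direction is determined from the start point coordinates, end point coordinates, and track distance.

```typescript
// Derive the smearing direction from the trajectory start and end points.
function smearDirection(startX: number, startY: number,
                        endX: number, endY: number): number {
  // Angle of the trajectory in radians, measured from the positive x-axis.
  return Math.atan2(endY - startY, endX - startX);
}
```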
As an optional scheme, the drawing, according to the drawing processing parameter, the target paint element on the part region image includes:
s1, acquiring color rendering data of the target smearing element, wherein the color rendering data comprises color values of all pixel points on a rendering effect graph corresponding to the target smearing element;
s2, fusing the color rendering data and the smearing transparency to obtain drawing control data;
and S3, drawing the target smearing element along the smearing direction according to the drawing control data.
It is understood that the color rendering data may be obtained from the rendering effect map corresponding to the target smearing element. An example of an optional rendering effect map of "makeup graffiti" is shown in fig. 7, where the colors at different positions in the image are different (not shown). From the data perspective, this is reflected in the change of the color value, i.e., the RGB value (Red, Green, Blue), of each pixel point in the rendering effect map, where the RGB value of each pixel point is an integer between 0 and 255. Therefore, according to the rendering effect map corresponding to the target smearing element, the RGB value of each pixel point in the map can be obtained, i.e., the color rendering data of the target smearing element is obtained.
Optionally, the method for obtaining the drawing control data by fusing the color rendering data and the painting transparency may include:
s1, under the condition that the display size of the rendering effect map is consistent with the display size of each transparency gradient image in the transparency gradient image sequence configured for the target smearing element, traversing each pixel point, sequentially taking each pixel point as a current pixel point, and performing the following operations:
and S2, splicing the color value of the current pixel point and the transparency of the current pixel point to obtain pixel drawing control data of the current pixel point.
It can be understood that the present embodiment provides a method for fusing color rendering data and rendering transparency data in real time to obtain a target rendering element drawn in real time. In the real-time rendering process, the color rendering data of a certain pixel point is from the RGB values of the corresponding pixel point in the rendering effect graph, so that under the condition that the display size of the rendering effect graph is consistent with the display size of each transparency gradient image in the transparency gradient image sequence configured for the target painting element, the transparency value of the pixel point at the corresponding position can be found in the corresponding transparency gradient image, thereby realizing fusion rendering. Furthermore, under the condition that the display size of the rendering effect graph is consistent with the display size of each transparency gradient image in the transparency gradient image sequence configured for the target smearing element, each pixel point in the rendering effect graph can find a corresponding pixel point in the corresponding transparency gradient image, and therefore the fusion processing of the color rendering data and the smearing transparency can be conveniently and quickly achieved.
Meanwhile, in another alternative embodiment, in a case that the display size of the rendering effect map is not consistent with the display size of each transparency gradient image in the transparency gradient image sequence configured for the target smearing element, the rendering effect map may be preprocessed, for example, scaled, so that the rendering effect map is consistent with the transparency gradient image, and then the subsequent image processing operation is performed.
Next, continuing to take the "makeup graffiti" element as an example, a method for fusing the color rendering data and the smearing transparency is specifically explained. In the process of drawing the target smearing element, the RGB value of the corresponding pixel point is fused with the smearing transparency in real time, and the specific method is as follows:
the rendering effect diagram of the "makeup graffiti" element shown in fig. 6 has a length of 300px and a width of 100px, wherein the pixel coordinates indicated by the arrow are (100px, 30px), and the RGB values are (255, 0, 0). Meanwhile, a transparency gradient image sequence of 30 "makeup graffiti" elements is pre-configured, wherein the size of each image is consistent with that of the rendering effect map, the length is 300px, and the width is 100px (not shown). In the process of drawing the "cosmetic doodle" element in real time, in the case that the track length is 150px, the target array serial number is 150/10 ═ 22, which indicates that the image 15 in fig. 5 is called at this time to participate in the fusion drawing. Assuming that the transparency of a pixel point with coordinates of (100px, 30px) in the image 15 is 0.5, combining the RGB values (255, 0, 0) of the pixel point with the corresponding coordinates in the rendering effect map, a new RGBA color value can be obtained through combination, and it can be understood that the RGBA color value is an optional pixel rendering control data.
Next, the RGBA value will be explained. RGBA values of pixel points are stored in a format of # RRGGBBAA in a computer, wherein each bit is a hexadecimal numerical value, each two bits after the # represent a channel which is sequentially red, green, blue and transparent, and the value range of each channel is 0-255, for example, # FF00007F represents red with the transparency of 50%. Specifically, the FF after the "#" sign indicates that the red channel value is 255, and both the green and blue channels are 0, so that the three primary colors are mixed to be pure red, that is, the pixel point is pure red. 255 in the clear channel represents 100%, and the final two digits 7F of "# FF 00007F" is reduced to 127 decimal, which is rounded to half of 255, indicating that the pixel is 50% transparent.
According to the method, the RGBA value of the pixel point with the coordinates of (100px, 30px) in the real-time drawing process can be obtained. Meanwhile, the rendering effect graph and the transparency gradient image have the same size, so that corresponding smearing transparency values and RGB values can be obtained from the rendering effect graph and the transparency gradient image corresponding to each pixel point in real-time drawing. And then following the method, obtaining the RGBA color value of each pixel point in the real-time drawing process.
Finally, the set of RGBA color values obtained for all the real-time drawing pixel points is rendered into a picture using the Canvas technology. For example, the RGBA color value set of all the real-time drawing pixel points can be drawn into a picture through an API provided by Canvas, such as putImageData.
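A minimal sketch of this fusion and rendering step is given below, assuming the rendering effect map and the selected transparency-gradient image have the same 300x100 size as in the example; the function and parameter names are assumptions, while putImageData is the Canvas API mentioned above.

```typescript
// Fuse the RGB values from the rendering effect map with the transparency values
// from the selected gradient image, then render the result with putImageData.
function drawFusedElement(ctx: CanvasRenderingContext2D,
                          effectRGBA: Uint8ClampedArray, // from getImageData on the effect map
                          alphas: Float32Array,          // transparency reference array, 0..1
                          width: number, height: number,
                          dx: number, dy: number): void {
  const fused = new Uint8ClampedArray(effectRGBA.length);
  for (let i = 0; i < width * height; i++) {
    fused[i * 4]     = effectRGBA[i * 4];     // R, e.g. 255 for the (100px, 30px) pixel
    fused[i * 4 + 1] = effectRGBA[i * 4 + 1]; // G
    fused[i * 4 + 2] = effectRGBA[i * 4 + 2]; // B
    fused[i * 4 + 3] = Math.round(alphas[i] * 255); // A: 0.5 corresponds to the "7F" example
  }
  // Render the fused RGBA data as a picture, as described above with putImageData.
  ctx.putImageData(new ImageData(fused, width, height), dx, dy);
}
```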
According to the embodiment, the color rendering data of the target smearing element is obtained, and then the color rendering data and the smearing transparency are fused to obtain drawing control data; and finally, drawing the target smearing element according to the drawing control data along the smearing direction, wherein the actual smearing color is determined based on the color rendering data of the target smearing element, so that the effect of drawing the target smearing element with the color consistent with that of the effect graph is realized, and meanwhile, the transparency of the actual smearing element changes along with the change of the track length, thereby realizing the technical effect of flexibly drawing the smearing element according to the will of a player.
As an optional scheme, the drawing the target smearing element on the part area image according to the drawing processing parameter further includes:
s1, acquiring a contour image corresponding to the target body part;
s2, superimposing the outline image on the part region image after the drawing of the target paint element.
The above embodiment is specifically described below with reference to fig. 8, fig. 9, and fig. 10. After the technical problem of the degree of freedom in drawing smearing elements is solved by the above method, a further technical problem of over-smearing may occur, as shown in fig. 8: the user cannot accurately grasp the boundary of the range within which smearing can be performed, so the display effect of the smearing element is distorted, which leads to an imperfect user experience.
In order to solve the technical problem shown in fig. 8, a contour image corresponding to the target body part may be acquired before the user smears the target body part. Fig. 9 shows such a contour of the virtual male face image. After the face contour image shown in fig. 9 is acquired, and in the case where the user has completed the smearing operation shown in fig. 8, the face contour image of fig. 9 is superimposed on the display image of fig. 8. It will be appreciated that the face contour image shown in fig. 9 is a layer placed on top and can therefore cover the smearing elements in fig. 8 that exceed the actual smearable range; the final display effect is shown in fig. 10. It is to be understood that, as shown in fig. 11, the layer relationship of the above contour image 1101, target smearing element 1102, and part region image 1103 may be the overlapping relationship shown in fig. 11.
Through the embodiment, the contour image corresponding to the target body part is acquired, and then the contour image is superposed on the part area image after the target smearing element is drawn, so that the reality of a smearing effect is ensured, and the technical effect of increasing the reliability of an image processing effect is realized.
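The layer order of fig. 11 can be sketched with plain Canvas drawImage calls as follows; the function and variable names are assumptions, and the image arguments are assumed to be already loaded.

```typescript
// Composite the three layers of fig. 11: part region image at the bottom,
// real-time smearing layer in the middle, contour image on top so that
// strokes outside the valid range are covered.
function compositeLayers(ctx: CanvasRenderingContext2D,
                         partRegionImage: HTMLImageElement,
                         smearLayer: HTMLCanvasElement,
                         contourImage: HTMLImageElement): void {
  ctx.drawImage(partRegionImage, 0, 0); // bottom layer
  ctx.drawImage(smearLayer, 0, 0);      // real-time smearing layer
  ctx.drawImage(contourImage, 0, 0);    // top layer masks over-smeared pixels
}
```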
The following describes the complete process of the image processing method provided by the present application with reference to the flow shown in fig. 12:
after the client for applying the makeup operation is opened by the user arbitrarily, the target body part and the target application element of the virtual object are selected:
in step S1202, the RGB value of each pixel point is read from the target rendering effect graph;
assuming that the target scribbling element selected by the user is a 'makeup scribble' element, as shown in fig. 6, a target rendering effect graph of the 'makeup scribble' element is obtained, and the RGB value of each pixel point in the graph is obtained according to the rendering effect graph of the 'makeup scribble' element, i.e. the color rendering data is obtained.
Then, as in step S1204, the RGB data is saved.
It can be understood that, for each pixel point in the target rendering effect graph, there is a corresponding RGB value, so that the position information of each pixel point of the target rendering effect graph and the value set of the corresponding RGB value can be stored in the form of an array. The rendering effect diagram of the "makeup graffiti" element shown in fig. 6 has a length of 300px and a width of 100px, wherein the pixel coordinates indicated by the arrow are (100px, 30px), and the RGB values are (255, 0, 0). Similarly, the data of each pixel point in the target rendering effect graph can be stored in the above form.
In step S1206, obtaining a transparency value of each pixel point from the transparency gradual change image sequence;
then in step S1208, storing the transparency value data set into the transparency reference array;
It will be appreciated that the transparency reference arrays are determined from the transparency-gradient image sequence corresponding to the "makeup graffiti" element. Taking the transparency-gradient image sequence of the "makeup graffiti" element as an example, as shown in fig. 5 there are 30 transparency-gradient images (some images are not shown), and a transparency reference array can be obtained from the correspondence between pixel points and transparency in each image of the sequence. Assuming that in image 15 of fig. 5 the transparency of the pixel at coordinates (100px, 30px) is 0.5, then the set of transparency values corresponding one-to-one to the coordinates of all pixel points in image 15 is the transparency reference array of image 15. The 30 transparency reference arrays corresponding to the transparency-gradient image sequence are obtained in the same way.
In step S1210, a slip event is detected;
as shown in the flow chart of fig. 12, there is no order of execution among step S1202, step S1206, and step S1210.
Then, in step S1212, determining a transparency gradient image sequence number according to the sliding information;
then, in step S1214, obtaining a corresponding transparency value from the transparency reference array;
in the case where the sliding trajectory information is acquired, the smear transparency can be further acquired. Specifically, the explanation will be continued by taking the "makeup graffiti" as an example. In the process of generating the sliding track in real time, for example, in the first time, if the sliding track distance is 72px, the 8 th image in the transparency gradient image sequence, that is, the image 8 in fig. 5, is instructed to be controlled to perform fusion drawing; at the second time, the sliding track distance is 220px, and it is indicated that the 22 nd image in the transparency-gradient image sequence, i.e., the image 22 in fig. 5, is controlled to be fused and drawn at that time. And determining a target transparency reference array corresponding to the images with different gradual change sequence numbers by combining the transparency gradual change image sequence numbers determined in real time and participating in fusion drawing, and further determining the smearing transparency information from the target transparency reference array according to the specific pixel point positions.
Then, in step S1216, RGBA data is obtained by fusion;
the explanation is continued by taking the above-mentioned "makeup graffiti" element as an example. The rendering effect diagram of the "makeup graffiti" element shown in fig. 6 has a length of 300px and a width of 100px, wherein the pixel coordinates indicated by the arrow are (100px, 30px), and the RGB values are (255, 0, 0). Meanwhile, a transparency gradient image sequence of 30 "makeup graffiti" elements is pre-configured, wherein the size of each image is consistent with that of the rendering effect map, the length is 300px, and the width is 100px (not shown). In the process of drawing the "cosmetic doodle" element in real time, in the case that the track length is 150px, the target array serial number is 150/10 ═ 22, which indicates that the image 15 in fig. 5 is called at this time to participate in the fusion drawing. Assuming that the transparency of the pixel point with the coordinate of (100px, 30px) in the image 15 is 0.5, combining the RGB values (255, 0, 0) of the pixel point with the corresponding coordinate in the rendering effect graph, a new RGBA color value can be obtained by combination, that is, the fusion operation is realized.
Then, step S1218 is executed, and the target smearing element is rendered in real time by using Canvas;
it can be understood that, according to the above method, the RGBA value of the pixel point with coordinates of (100px, 30px) in the real-time rendering process can be obtained. Meanwhile, the rendering effect graph and the transparency gradient image have the same size, so that corresponding smearing transparency values and RGB values can be obtained from the rendering effect graph and the transparency gradient image corresponding to each pixel point in real-time drawing. And then following the method, obtaining the RGBA color value of each pixel point in the real-time drawing process. And finally, rendering the obtained RGBA color value set of all real-time rendering pixel points into a picture by utilizing a Canvas technology. For example, the RGBA color value sets of all the real-time rendering pixel points may be rendered into a picture through an API provided by Canvas, such as putImageData, that is, the target rendering element is obtained through rendering.
Finally, step S1220, step S1222 and step S1224 are performed, i.e. the target region image is placed at the bottom layer, the target smearing element is inserted into the real-time rendering layer and the face contour image is placed at the top layer, so as to complete the real-time blending drawing.
The flow shown in fig. 12 is an example, and this is not limited in this embodiment.
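As a wrap-up, the following sketch wires the flow of fig. 12 to pointer events; it reuses the hypothetical helpers from the earlier sketches (buildTransparencyReferenceArrays, gradientImageIndex, drawFusedElement, compositeLayers) and assumes the effect map data, gradient images, part region image, and contour image have already been loaded, so it is an illustration of the sequence of steps rather than a definitive implementation.

```typescript
declare const effectRGBA: Uint8ClampedArray;      // RGB(A) data read from the rendering effect map
declare const gradientImages: HTMLImageElement[]; // 30 transparency-gradient images
declare const partRegionImage: HTMLImageElement;  // bottom layer
declare const contourImage: HTMLImageElement;     // top layer

const displayCanvas = document.querySelector('canvas')!;
const displayCtx = displayCanvas.getContext('2d')!;
const smearCanvas = document.createElement('canvas'); // real-time smearing layer
smearCanvas.width = displayCanvas.width;
smearCanvas.height = displayCanvas.height;
const smearCtx = smearCanvas.getContext('2d')!;

const transparencyArrays = buildTransparencyReferenceArrays(gradientImages); // S1206-S1208
const amplitude = 300 / gradientImages.length; // transparency change amplitude, 300/30 = 10

let startX = 0, startY = 0;
displayCanvas.addEventListener('pointerdown', (e) => {
  startX = e.offsetX;
  startY = e.offsetY;
});

displayCanvas.addEventListener('pointermove', (e) => { // S1210: sliding event detected
  if (e.buttons === 0) return;                         // only while the pointer is pressed

  // S1212-S1214: track distance -> gradient image sequence number -> transparency values.
  const distance = Math.hypot(e.offsetX - startX, e.offsetY - startY);
  const index = Math.min(Math.max(gradientImageIndex(distance, amplitude), 1),
                         transparencyArrays.length);
  const alphas = transparencyArrays[index - 1];

  // S1216-S1218: fuse RGB with transparency and render with Canvas.
  drawFusedElement(smearCtx, effectRGBA, alphas, 300, 100, e.offsetX, e.offsetY);

  // S1220-S1224: composite part region image, smearing layer, and contour image.
  compositeLayers(displayCtx, partRegionImage, smearCanvas, contourImage);
});
```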
In the embodiment of the invention, the part area image corresponding to the target body part of the target virtual object to be processed is displayed in the display interface, the drawing processing parameter of the target smearing element matched with the sliding track information of the sliding operation is acquired in response to the sliding operation executed on the part area image, and then the target smearing element is drawn on the part area image according to the drawing processing parameter, so that the technical effect of flexibly drawing the smearing element according to the will of a player is realized, and the technical problem of low degree of freedom in the case of smearing the image in the prior art is solved.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
According to another aspect of the embodiments of the present invention, there is also provided an image processing apparatus for implementing the above-described image processing method. As shown in fig. 13, the apparatus includes:
a display unit 1302, configured to display, in a display interface, a part region image corresponding to a target body part of a target virtual object to be processed;
an obtaining unit 1304 for obtaining, in response to a slide operation performed on the part region image, a drawing processing parameter of a target paint element that matches slide trajectory information of the slide operation;
a drawing unit 1306, configured to draw the target smearing element on the part area image according to the drawing processing parameter.
Optionally, in this embodiment, reference may be made to the above-mentioned method embodiments for implementing the above-mentioned unit modules, which are not described herein again.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device for implementing the above-described image processing method, where the electronic device may be a terminal device or a server shown in fig. 14. The present embodiment takes the electronic device as a terminal device as an example for explanation. As shown in fig. 14, the electronic device comprises a memory 1402 and a processor 1404, the memory 1402 having stored therein a computer program, the processor 1404 being arranged to execute the steps of any of the method embodiments described above by means of the computer program.
Optionally, in this embodiment, the electronic device may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1, displaying a part area image corresponding to the target body part of the target virtual object to be processed in the display interface;
S2, in response to the sliding operation performed on the part area image, acquiring a drawing processing parameter of the target smearing element matching the sliding track information of the sliding operation;
and S3, drawing the target smearing element on the part area image according to the drawing processing parameter.
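As a hedged sketch of step S2 only, a drawing processing parameter might be derived from the sliding track as follows. The amplitude/index mapping mirrors the transparency gradient image sequence described earlier; every name below, as well as the left/right rule for the smearing direction, is an assumption introduced for illustration.

```typescript
// Hedged sketch of step S2: map the sliding track to a smearing transparency and
// a smearing direction. All names and the direction rule are assumptions.
function deriveDrawingParameters(
  trackDistance: number,            // distance of the sliding track
  trackStartX: number,              // x coordinate of the track start point
  regionCenterX: number,            // assumed reference point for the direction decision
  totalDisplayLength: number,       // total display length of the target smearing element
  transparencyArrays: number[][]    // one alpha array per transparency gradient image
): { smearingTransparency: number[]; smearingDirection: 'left' | 'right' } {
  // transparency change amplitude within a unit display length
  const amplitude = totalDisplayLength / transparencyArrays.length;
  // round the first ratio up to obtain the target array sequence number (1-based),
  // then clamp it to the available arrays
  const sequenceNumber = Math.min(Math.ceil(trackDistance / amplitude), transparencyArrays.length);
  const smearingTransparency = transparencyArrays[Math.max(sequenceNumber, 1) - 1];
  // smearing direction determined from the track start point coordinate (assumed rule)
  const smearingDirection = trackStartX < regionCenterX ? 'right' : 'left';
  return { smearingTransparency, smearingDirection };
}
```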
Alternatively, it can be understood by those skilled in the art that the structure shown in fig. 14 is only an illustration, and the electronic device may also be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like; fig. 14 does not limit the structure of the above electronic device. For example, the electronic device may further include more or fewer components (e.g., a network interface) than those shown in fig. 14, or have a configuration different from that shown in fig. 14.
The memory 1402 may be configured to store software programs and modules, such as program instructions/modules corresponding to the image processing method and apparatus in the embodiments of the present invention. The processor 1404 runs the software programs and modules stored in the memory 1402 to perform various functional applications and data processing, that is, to implement the image processing method described above. The memory 1402 may include a high-speed random access memory, and may further include a non-volatile memory, such as one or more magnetic storage devices, flash memories, or other non-volatile solid-state memories. In some examples, the memory 1402 may further include memories remotely located relative to the processor 1404, and these remote memories may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, an intranet, a local area network, a mobile communication network, and combinations thereof. The memory 1402 may be configured to store information such as elements in a scene and image processing information. As an example, as shown in fig. 14, the memory 1402 may include, but is not limited to, the display unit 1302, the obtaining unit 1304, and the drawing unit 1306 in the above image processing apparatus. In addition, the memory 1402 may further include, but is not limited to, other module units in the above image processing apparatus, which are not described in detail in this example.
Optionally, the transmission device 1406 is configured to receive or send data via a network. Examples of the network may include a wired network and a wireless network. In one example, the transmission device 1406 includes a network adapter (NIC), which may be connected to other network devices and a router via a network cable so as to communicate with the internet or a local area network. In another example, the transmission device 1406 is a radio frequency (RF) module, which is configured to communicate with the internet in a wireless manner.
In addition, the electronic device further includes: a display 1408 for displaying an image-rendering interface; and a connection bus 1410 for connecting the respective module parts in the above-described electronic apparatus.
In other embodiments, the terminal device or the server may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by connecting a plurality of nodes through network communication. The nodes may form a peer-to-peer (P2P) network, and a computing device in any form, for example, an electronic device such as a server or a terminal, may become a node in the blockchain system by joining the peer-to-peer network.
According to an aspect of the application, there is provided a computer program product comprising a computer program/instructions containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section, and/or installed from a removable medium. When executed by the central processing unit, the computer program performs various functions provided by the embodiments of the present application.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
According to an aspect of the present application, a computer-readable storage medium is provided. A processor of a computer device reads computer instructions from the computer-readable storage medium and executes the computer instructions, causing the computer device to perform the above-described image processing method.
Alternatively, in the present embodiment, the above-mentioned computer-readable storage medium may be configured to store a computer program for executing the following steps:
S1, displaying a part area image corresponding to the target body part of the target virtual object to be processed in the display interface;
S2, in response to the sliding operation performed on the part area image, acquiring a drawing processing parameter of the target smearing element matching the sliding track information of the sliding operation;
and S3, drawing the target smearing element on the part area image according to the drawing processing parameter.
Alternatively, in this embodiment, a person skilled in the art may understand that all or some of the steps in the methods of the foregoing embodiments may be implemented by a program instructing relevant hardware of a terminal device, and the program may be stored in a computer-readable storage medium. The storage medium may include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the above methods according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing descriptions are merely preferred embodiments of the present invention. It should be noted that a person of ordinary skill in the art may further make several improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also fall within the protection scope of the present invention.

Claims (14)

1. An image processing method, comprising:
displaying a part area image corresponding to a target body part of a target virtual object to be processed in a display interface;
in response to a sliding operation performed on the part area image, acquiring drawing processing parameters of a target smearing element matched with sliding track information of the sliding operation;
and drawing the target smearing element on the part area image according to the drawing processing parameters.
2. The method according to claim 1, wherein the acquiring, in response to the sliding operation performed on the part area image, the drawing processing parameters of the target smearing element matched with the sliding track information of the sliding operation comprises:
acquiring a track distance of a sliding track generated based on the sliding operation, wherein the sliding track information comprises the track distance;
and determining the smearing transparency of the target smearing element according to the track distance, wherein the drawing processing parameters comprise the smearing transparency.
3. The method according to claim 2, wherein the determining the smearing transparency of the target smearing element according to the track distance comprises:
obtaining the transparency change amplitude of the target smearing element in a unit display length;
determining a target transparency reference array for drawing the target smearing element according to a first ratio between the track distance and the transparency change amplitude;
and determining the transparency corresponding to each pixel point on the target transparency gradient image recorded in the target transparency reference array as the smearing transparency.
4. The method according to claim 3, wherein the obtaining the transparency change amplitude of the target smearing element in a unit display length comprises:
acquiring the total display length of the target smearing elements and the number of images in a transparency gradient image sequence configured for the target smearing elements;
determining a second ratio between the total display length and the number of images as the transparency change amplitude.
5. The method according to claim 3, wherein the determining the target transparency reference array for drawing the target smearing element according to the first ratio between the track distance and the transparency change amplitude comprises:
carrying out rounding-up calculation on the first ratio between the track distance and the transparency change amplitude to obtain a target array sequence number;
and determining the target transparency reference array according to the target array sequence number.
6. The method according to claim 3, wherein before displaying the part area image corresponding to the target body part of the target virtual object to be processed in the display interface, the method further comprises:
acquiring a transparency gradient image sequence configured for the target smearing element;
and sequentially storing the transparency corresponding to each pixel point on each transparency gradient image in the transparency gradient image sequence into a transparency reference array.
7. The method according to claim 2, wherein the acquiring, in response to the sliding operation performed on the part area image, the drawing processing parameters of the target smearing element matched with the sliding track information of the sliding operation further comprises:
determining a track start point coordinate of the sliding track generated based on the sliding operation, wherein the sliding track information comprises the track start point coordinate;
and determining the smearing direction of the target smearing element according to the track start point coordinate, wherein the drawing processing parameters comprise the smearing direction.
8. The method according to claim 7, wherein the drawing the target smearing element on the part area image according to the drawing processing parameters comprises:
acquiring color rendering data of the target smearing element, wherein the color rendering data comprises color values of all pixel points on a rendering effect graph corresponding to the target smearing element;
fusing the color rendering data and the smearing transparency to obtain drawing control data;
and drawing the target smearing element along the smearing direction according to the drawing control data.
9. The method according to claim 8, wherein the fusing the color rendering data and the smearing transparency to obtain the drawing control data comprises:
under the condition that the display size of the rendering effect graph is consistent with the display size of each transparency gradient image in the transparency gradient image sequence configured for the target smearing element, traversing each pixel point, sequentially taking each pixel point as a current pixel point, and executing the following operations:
and splicing the color value of the current pixel point and the transparency of the current pixel point to obtain pixel drawing control data of the current pixel point.
10. The method according to any one of claims 1 to 9, wherein the drawing the target smearing element on the part area image according to the drawing processing parameters further comprises:
acquiring a contour image corresponding to the target body part;
and superimposing the contour image on the part area image on which the target smearing element has been drawn.
11. An image processing apparatus characterized by comprising:
a display unit, used for displaying a part area image corresponding to a target body part of a target virtual object to be processed in a display interface;
an acquisition unit, used for acquiring, in response to a sliding operation performed on the part area image, drawing processing parameters of a target smearing element matched with sliding track information of the sliding operation;
and a drawing unit, used for drawing the target smearing element on the part area image according to the drawing processing parameters.
12. A computer-readable storage medium, comprising a stored program, wherein the program when executed performs the method of any one of claims 1 to 10.
13. A computer program product comprising computer program/instructions, characterized in that the computer program/instructions, when executed by a processor, implement the steps of the method of any of claims 1 to 10.
14. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of any of claims 1 to 10 by means of the computer program.
CN202111258548.2A 2021-10-27 2021-10-27 Image processing method and device, storage medium and electronic equipment Active CN114003163B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111258548.2A CN114003163B (en) 2021-10-27 2021-10-27 Image processing method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111258548.2A CN114003163B (en) 2021-10-27 2021-10-27 Image processing method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN114003163A true CN114003163A (en) 2022-02-01
CN114003163B CN114003163B (en) 2023-10-24

Family

ID=79924384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111258548.2A Active CN114003163B (en) 2021-10-27 2021-10-27 Image processing method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN114003163B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010067104A (en) * 2008-09-12 2010-03-25 Olympus Corp Digital photo-frame, information processing system, control method, program, and information storage medium
US20150128035A1 (en) * 2012-05-21 2015-05-07 Sony Corporation User interface, information display method, and computer readable medium
JP2016126513A (en) * 2014-12-26 2016-07-11 株式会社バンダイナムコエンターテインメント Input processing device and program
JP2019017755A (en) * 2017-07-19 2019-02-07 株式会社カプコン Game program and game system
CN110503725A (en) * 2019-08-27 2019-11-26 百度在线网络技术(北京)有限公司 Method, apparatus, electronic equipment and the computer readable storage medium of image procossing
CN111524210A (en) * 2020-04-10 2020-08-11 北京百度网讯科技有限公司 Method and apparatus for generating drawings
CN111489429A (en) * 2020-04-16 2020-08-04 诚迈科技(南京)股份有限公司 Image rendering control method, terminal device and storage medium
CN113064540A (en) * 2021-03-23 2021-07-02 网易(杭州)网络有限公司 Game-based drawing method, game-based drawing device, electronic device, and storage medium
CN113377270A (en) * 2021-05-31 2021-09-10 北京达佳互联信息技术有限公司 Information display method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN114003163B (en) 2023-10-24

Similar Documents

Publication Publication Date Title
US5325473A (en) Apparatus and method for projection upon a three-dimensional object
TWI444836B (en) Method and apparatus for remote workspace sharing
JP3932204B1 (en) Color simulation system for hair coloring
CN105929945A (en) Augmented reality interaction method and device, mobile terminal and mini-computer
US20190156522A1 (en) Image processing apparatus, image processing system, and program
JP7209474B2 (en) Information processing program, information processing method and information processing system
CN111784568A (en) Face image processing method and device, electronic equipment and computer readable medium
KR101398188B1 (en) Method for providing on-line game supporting character make up and system there of
CN111282277A (en) Special effect processing method, device and equipment and storage medium
CN109785420A (en) A kind of 3D scene based on Unity engine picks up color method and system
CN105955733A (en) Icon modifying method, icon modifying device and mobile terminal
CN112657195B (en) Virtual character image processing method, device, equipment and storage medium
CN111935489A (en) Network live broadcast method, information display method and device, live broadcast server and terminal equipment
CN109389687A (en) Information processing method, device, equipment and readable storage medium storing program for executing based on AR
CN114003163B (en) Image processing method and device, storage medium and electronic equipment
US11430194B2 (en) 2D graphical coding to create a 3D image
CN110221689A (en) A kind of space drawing method based on augmented reality
CN106502400B (en) virtual reality system and virtual reality system input method
JP6661780B2 (en) Face model editing method and apparatus
CN110688018B (en) Virtual picture control method and device, terminal equipment and storage medium
CN115379195B (en) Video generation method, device, electronic equipment and readable storage medium
CN113301243B (en) Image processing method, interaction method, system, device, equipment and storage medium
CN109568961B (en) Occlusion rate calculation method and device, storage medium and electronic device
CN111240563B (en) Information display method, device, equipment and storage medium
CN108205818A (en) The color method and system of game role image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant